ID (int64, 1-16.8k) | TITLE (string, 7-239 chars) | ABSTRACT (string, 7-2.59k chars) | Computer Science (int64, 0/1) | Physics (int64, 0/1) | Mathematics (int64, 0/1) | Statistics (int64, 0/1) | Quantitative Biology (int64, 0/1) | Quantitative Finance (int64, 0/1) |
---|---|---|---|---|---|---|---|---|
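The header above describes a multi-label record layout: an integer ID, a title, an abstract, and six 0/1 subject flags, where a paper may carry more than one subject. A minimal plain-Python sketch of how such records can be represented and queried (the field order mirrors the header; the tuple representation and the `subjects` helper are illustrative assumptions, with toy rows abridged from the table below):

```python
# Each record: (ID, TITLE, ABSTRACT, then six 0/1 subject flags)
# in the same order as the table header.
LABELS = ("Computer Science", "Physics", "Mathematics", "Statistics",
          "Quantitative Biology", "Quantitative Finance")

toy_rows = [
    (16701, "Symmetric Variational Autoencoder ...", "abstract ...", 0, 0, 0, 1, 0, 0),
    (16703, "Deep Models Under the GAN ...",          "abstract ...", 1, 0, 0, 1, 0, 0),
    (16711, "Rigidity for von Neumann algebras ...",  "abstract ...", 0, 0, 1, 0, 0, 0),
]

def subjects(row):
    """Return the subject names whose flag is set for one record."""
    return [name for name, flag in zip(LABELS, row[3:]) if flag]

# IDs of papers tagged with more than one subject (the dataset is multi-label):
multi = [row[0] for row in toy_rows if len(subjects(row)) > 1]
print(multi)  # -> [16703]
```

Because the labels are independent binary flags rather than a single categorical column, per-row label counts greater than one are expected and are not an error.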
16,701 | Symmetric Variational Autoencoder and Connections to Adversarial Learning | A new form of the variational autoencoder (VAE) is proposed, based on the
symmetric Kullback-Leibler divergence. It is demonstrated that learning of the
resulting symmetric VAE (sVAE) has close connections to previously developed
adversarial-learning methods. This relationship helps unify the previously
distinct techniques of VAEs and adversarial learning, and provides insights
that allow us to ameliorate shortcomings with some previously developed
adversarial methods. In addition to an analysis that motivates and explains the
sVAE, an extensive set of experiments validate the utility of the approach.
| 0 | 0 | 0 | 1 | 0 | 0 |
16,702 | Matrix Product Unitaries: Structure, Symmetries, and Topological Invariants | Matrix Product Vectors form the appropriate framework to study and classify
one-dimensional quantum systems. In this work, we develop the structure theory
of Matrix Product Unitary operators (MPUs) which appear e.g. in the description
of time evolutions of one-dimensional systems. We prove that all MPUs have a
strict causal cone, making them Quantum Cellular Automata (QCAs), and derive a
canonical form for MPUs which relates different MPU representations of the same
unitary through a local gauge. We use this canonical form to prove an Index
Theorem for MPUs which gives the precise conditions under which two MPUs are
adiabatically connected, providing an alternative derivation to that of
[Commun. Math. Phys. 310, 419 (2012), arXiv:0910.3675] for QCAs. We also
discuss the effect of symmetries on the MPU classification. In particular, we
characterize the tensors corresponding to MPU that are invariant under
conjugation, time reversal, or transposition. In the first case, we give a full
characterization of all equivalence classes. Finally, we give several examples
of MPU possessing different symmetries.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,703 | Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning | Deep Learning has recently become hugely popular in machine learning,
providing significant improvements in classification accuracy in the presence
of highly structured and large databases.
Researchers have also considered privacy implications of deep learning.
Models are typically trained in a centralized manner with all the data being
processed by the same training algorithm. If the data is a collection of users'
private data, including habits, personal pictures, geographical positions,
interests, and more, the centralized server will have access to sensitive
information that could potentially be mishandled. To tackle this problem,
collaborative deep learning models have recently been proposed where parties
locally train their deep learning structures and only share a subset of the
parameters in the attempt to keep their respective training sets private.
Parameters can also be obfuscated via differential privacy (DP) to make
information extraction even more challenging, as proposed by Shokri and
Shmatikov at CCS'15.
Unfortunately, we show that any privacy-preserving collaborative deep
learning is susceptible to a powerful attack that we devise in this paper. In
particular, we show that a distributed, federated, or decentralized deep
learning approach is fundamentally broken and does not protect the training
sets of honest participants. The attack we developed exploits the real-time
nature of the learning process that allows the adversary to train a Generative
Adversarial Network (GAN) that generates prototypical samples of the targeted
training set that was meant to be private (the samples generated by the GAN are
intended to come from the same distribution as the training data).
Interestingly, we show that record-level DP applied to the shared parameters of
the model, as suggested in previous work, is ineffective (i.e., record-level DP
is not designed to address our attack).
| 1 | 0 | 0 | 1 | 0 | 0 |
16,704 | A. G. W. Cameron 1925-2005, Biographical Memoir, National Academy of Sciences | Alastair Graham Walker Cameron was an astrophysicist and planetary scientist
of broad interests and exceptional originality. A founder of the field of
nuclear astrophysics, he developed the theoretical understanding of the
chemical elements' origins and made pioneering use of the abundances of
elements in meteorites to advance the theory that the Moon originated from a
giant impact on the young Earth by an object at least the size of Mars.
Cameron was an early and persistent exploiter of computer technology in the
theoretical study of complex astronomical systems, including
nuclear reactions in supernovae, the structure of neutron stars, and planetary
collisions.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,705 | Realization of "Time Crystal" Lagrangians and Emergent Sisyphus Dynamics | We demonstrate how non-convex "time crystal" Lagrangians arise in the
effective description of conventional, realizable physical systems. Such
embeddings allow for the resolution of dynamical singularities that arise in
the reduced description. Sisyphus dynamics, featuring intervals of forward
motion interrupted by quick resets, is a generic consequence. Near the would-be
singularity of the time crystal, we find striking microstructure.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,706 | Gradient Normalization & Depth Based Decay For Deep Learning | In this paper we introduce a novel method of gradient normalization and decay
with respect to depth. Our method leverages the simple concept of normalizing
all gradients in a deep neural network, and then decaying said gradients with
respect to their depth in the network. Our proposed normalization and decay
techniques can be used in conjunction with most current state-of-the-art
optimizers and are a very simple addition to any network. This method, although
simple, showed improvements in convergence time on state-of-the-art networks
such as DenseNet and ResNet on image classification tasks, as well as on an
LSTM for natural language processing tasks.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,707 | CARET analysis of multithreaded programs | Dynamic Pushdown Networks (DPNs) are a natural model for multithreaded
programs with (recursive) procedure calls and thread creation. On the other
hand, CARET is a temporal logic that allows one to write linear temporal formulas
while taking into account the matching between calls and returns. We consider
in this paper the model-checking problem of DPNs against CARET formulas. We
show that this problem can be effectively solved by a reduction to the
emptiness problem of Büchi Dynamic Pushdown Systems. We then show that CARET
model checking is also decidable for DPNs communicating with locks. Our results
can, in particular, be used for the detection of concurrent malware.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,708 | Sparse modeling approach to analytical continuation of imaginary-time quantum Monte Carlo data | A new approach of solving the ill-conditioned inverse problem for analytical
continuation is proposed. The root of the problem lies in the fact that even
tiny noise of imaginary-time input data has a serious impact on the inferred
real-frequency spectra. By means of a modern regularization technique, we
eliminate redundant degrees of freedom that essentially carry the noise,
leaving only relevant information unaffected by the noise. The resultant
spectrum is represented with minimal bases and thus a stable analytical
continuation is achieved. This framework further provides a tool for analyzing
to what extent the Monte Carlo data need to be accurate to resolve details of
an expected spectral function.
| 0 | 1 | 0 | 1 | 0 | 0 |
16,709 | Distral: Robust Multitask Reinforcement Learning | Most deep reinforcement learning algorithms are data inefficient in complex
and rich environments, limiting their applicability to many scenarios. One
direction for improving data efficiency is multitask learning with shared
neural network parameters, where efficiency may be improved through transfer
across related tasks. In practice, however, this is not usually observed,
because gradients from different tasks can interfere negatively, making
learning unstable and sometimes even less data efficient. Another issue is the
different reward schemes between tasks, which can easily lead to one task
dominating the learning of a shared model. We propose a new approach for joint
training of multiple tasks, which we refer to as Distral (Distill & transfer
learning). Instead of sharing parameters between the different workers, we
propose to share a "distilled" policy that captures common behaviour across
tasks. Each worker is trained to solve its own task while constrained to stay
close to the shared policy, while the shared policy is trained by distillation
to be the centroid of all task policies. Both aspects of the learning process
are derived by optimizing a joint objective function. We show that our approach
supports efficient transfer on complex 3D environments, outperforming several
related methods. Moreover, the proposed learning process is more robust and
more stable---attributes that are critical in deep reinforcement learning.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,710 | Finding the number density of atomic vapor by studying its absorption profile | We demonstrate a technique for obtaining the density of atomic vapor, by
doing a fit of the resonant absorption spectrum to a density-matrix model. In
order to demonstrate the usefulness of the technique, we apply it to absorption
in the ${\rm D_2}$ line of a Cs vapor cell at room temperature. The lineshape
of the spectrum is asymmetric due to the role of open transitions. This
asymmetry is explained in the model using transit-time relaxation as the atoms
traverse the laser beam. We also obtain the latent heat of evaporation by
studying the number density as a function of temperature close to room
temperature.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,711 | Rigidity for von Neumann algebras given by locally compact groups and their crossed products | We prove the first rigidity and classification theorems for crossed product
von Neumann algebras given by actions of non-discrete, locally compact groups.
We prove that for arbitrary free probability measure preserving actions of
connected simple Lie groups of real rank one, the crossed product has a unique
Cartan subalgebra up to unitary conjugacy. We then deduce a W* strong rigidity
theorem for irreducible actions of products of such groups. More generally, our
results hold for products of locally compact groups that are nonamenable,
weakly amenable and that belong to Ozawa's class S.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,712 | Quantum Singwi-Tosi-Land-Sjoelander approach for interacting inhomogeneous systems under electromagnetic fields: Comparison with exact results | For inhomogeneous interacting electronic systems under a time-dependent
electromagnetic perturbation, we derive the linear equation for response
functions in a quantum mechanical manner. It is a natural extension of the
original semi-classical Singwi-Tosi-Land-Sjoelander (STLS) approach for an
electron gas. The factorization ansatz for the two-particle distribution is an
indispensable ingredient in the STLS approaches for determination of the
response function and the pair correlation function. In this study, we choose
an analytically solvable interacting two-electron system as the target for
which we examine the validity of the approximation. It is demonstrated that the
STLS response function reproduces well the exact one for low-energy
excitations. The interaction energy contributed from the STLS response function
is also discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,713 | 3D Convolutional Neural Networks for Brain Tumor Segmentation: A Comparison of Multi-resolution Architectures | This paper analyzes the use of 3D Convolutional Neural Networks for brain
tumor segmentation in MR images. We address the problem using three different
architectures that combine fine and coarse features to obtain the final
segmentation. We compare three different networks that use multi-resolution
features in terms of both design and performance and we show that they improve
their single-resolution counterparts.
| 0 | 0 | 0 | 1 | 0 | 0 |
16,714 | Learning Random Fourier Features by Hybrid Constrained Optimization | The kernel embedding algorithm is an important component for adapting kernel
methods to large datasets. Since the algorithm consumes a major computation
cost in the testing phase, we propose a novel teacher-learner framework of
learning computation-efficient kernel embeddings from specific data. In the
framework, the high-precision embeddings (teacher) transfer the data
information to the computation-efficient kernel embeddings (learner). We
jointly select informative embedding functions and pursue an orthogonal
transformation between two embeddings. We propose a novel approach of
constrained variational expectation maximization (CVEM), where the alternating
direction method of multipliers (ADMM) is applied over a nonconvex domain in the
maximization step. We also propose two specific formulations based on the
prevalent Random Fourier Feature (RFF), the masked and blocked version of
Computation-Efficient RFF (CERF), by imposing a random binary mask or a block
structure on the transformation matrix. By empirical studies of several
applications on different real-world datasets, we demonstrate that the CERF
significantly improves the performance of kernel methods over the RFF under
certain arithmetic-operation requirements, and is suitable for structured
matrix multiplication in Fastfood-type algorithms.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,715 | Min-max formulas and other properties of certain classes of nonconvex effective Hamiltonians | This paper is the first attempt to systematically study properties of the
effective Hamiltonian $\overline{H}$ arising in the periodic homogenization of
some coercive but nonconvex Hamilton-Jacobi equations. Firstly, we introduce a
new and robust decomposition method to obtain min-max formulas for a class of
nonconvex $\overline{H}$. Secondly, we analytically and numerically investigate
other related interesting phenomena, such as "quasi-convexification" and
breakdown of symmetry, of $\overline{H}$ from other typical nonconvex
Hamiltonians. Finally, in the appendix, we show that our new method and those a
priori formulas from the periodic setting can be used to obtain stochastic
homogenization for the same class of nonconvex Hamilton-Jacobi equations. Some
conjectures and problems are also proposed.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,716 | The Prescribed Ricci Curvature Problem on Homogeneous Spaces with Intermediate Subgroups | Consider a compact Lie group $G$ and a closed subgroup $H<G$. Suppose
$\mathcal M$ is the set of $G$-invariant Riemannian metrics on the homogeneous
space $M=G/H$. We obtain a sufficient condition for the existence of
$g\in\mathcal M$ and $c>0$ such that the Ricci curvature of $g$ equals $cT$ for
a given $T\in\mathcal M$. This condition is also necessary if the isotropy
representation of $M$ splits into two inequivalent irreducible summands.
Immediate and potential applications include new existence results for Ricci
iterations.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,717 | Solving Boundary Value Problem for a Nonlinear Stationary Controllable System with Synthesizing Control | An algorithm for constructing a control function that transfers a wide class
of stationary nonlinear systems of ordinary differential equations from an
initial state to a final state under certain control restrictions is proposed.
The algorithm is designed to be convenient for numerical implementation. A
constructive criterion of the desired transfer possibility is presented. The
problem of an interorbital flight is considered as a test example and it is
simulated numerically with the presented method.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,718 | Modelling and prediction of financial trading networks: An application to the NYMEX natural gas futures market | Over the last few years there has been a growing interest in using financial
trading networks to understand the microstructure of financial markets. Most of
the methodologies developed so far for this purpose have been based on the
study of descriptive summaries of the networks such as the average node degree
and the clustering coefficient. In contrast, this paper develops novel
statistical methods for modeling sequences of financial trading networks. Our
approach uses a stochastic blockmodel to describe the structure of the network
during each period, and then links multiple time periods using a hidden Markov
model. This structure allows us to identify events that affect the structure of
the market and make accurate short-term prediction of future transactions. The
methodology is illustrated using data from the NYMEX natural gas futures market
from January 2005 to December 2008.
| 0 | 0 | 0 | 1 | 0 | 0 |
16,719 | A comparative study of fairness-enhancing interventions in machine learning | Computers are increasingly used to make decisions that have significant
impact in people's lives. Often, these predictions can affect different
population subgroups disproportionately. As a result, the issue of fairness has
received much recent interest, and a number of fairness-enhanced classifiers
and predictors have appeared in the literature. This paper seeks to study the
following questions: how do these different techniques fundamentally compare to
one another, and what accounts for the differences? Specifically, we seek to
bring attention to many under-appreciated aspects of such fairness-enhancing
interventions. Concretely, we present the results of an open benchmark we have
developed that lets us compare a number of different algorithms under a variety
of fairness measures, and a large number of existing datasets. We find that
although different algorithms tend to prefer specific formulations of fairness
preservation, many of these measures strongly correlate with one another. In
addition, we find that fairness-preserving algorithms tend to be sensitive to
fluctuations in dataset composition (simulated in our benchmark by varying
training-test splits), indicating that fairness interventions might be more
brittle than previously thought.
| 0 | 0 | 0 | 1 | 0 | 0 |
16,720 | Predicting Opioid Relapse Using Social Media Data | Opioid addiction is a severe public health threat in the U.S, causing massive
deaths and many social problems. Accurate relapse prediction is of practical
importance for recovering patients since relapse prediction promotes timely
relapse preventions that help patients stay clean. In this paper, we introduce
a Generative Adversarial Networks (GAN) model to predict the addiction relapses
based on sentiment images and social influences. Experimental results on real
social media data from Reddit.com demonstrate that the GAN model delivers a
better performance than comparable alternative techniques. The sentiment images
generated by the model show that relapse is closely connected with two emotions
`joy' and `negative'. This work is one of the first attempts to predict
relapses using massive social media data and generative adversarial nets. The
proposed method, combined with knowledge of social media mining, has the
potential to revolutionize the practice of opioid addiction prevention and
treatment.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,721 | Self-Trapping of G-Mode Oscillations in Relativistic Thin Disks, Revisited | We examine by a perturbation method how the self-trapping of g-mode
oscillations in geometrically thin relativistic disks is affected by uniform
vertical magnetic fields. Disks which we consider are isothermal in the
vertical direction, but are truncated at a certain height by presence of hot
coronae. We find that the characteristics of self-trapping of axisymmetric
g-mode oscillations in non-magnetized disks remain unchanged in magnetized
disks, at least up to a critical field strength that depends on the vertical
thickness of the disk; this critical field strength increases as the disk
becomes thinner. This result suggests that trapped g-mode oscillations remain
one of the possible candidates for the quasi-periodic oscillations observed in
black-hole and neutron-star X-ray binaries in cases where vertical magnetic
fields in the disks are weak.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,722 | Computable geometric complex analysis and complex dynamics | We discuss computability and computational complexity of conformal mappings
and their boundary extensions. As applications, we review the state of the art
regarding computability and complexity of Julia sets, their invariant measures
and external rays impressions.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,723 | PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes | Estimating the 6D pose of known objects is important for robots to interact
with the real world. The problem is challenging due to the variety of objects
as well as the complexity of a scene caused by clutter and occlusions between
objects. In this work, we introduce PoseCNN, a new Convolutional Neural Network
for 6D object pose estimation. PoseCNN estimates the 3D translation of an
object by localizing its center in the image and predicting its distance from
the camera. The 3D rotation of the object is estimated by regressing to a
quaternion representation. We also introduce a novel loss function that enables
PoseCNN to handle symmetric objects. In addition, we contribute a large scale
video dataset for 6D object pose estimation named the YCB-Video dataset. Our
dataset provides accurate 6D poses of 21 objects from the YCB dataset observed
in 92 videos with 133,827 frames. We conduct extensive experiments on our
YCB-Video dataset and the OccludedLINEMOD dataset to show that PoseCNN is
highly robust to occlusions, can handle symmetric objects, and provides accurate
pose estimation using only color images as input. When using depth data to
further refine the poses, our approach achieves state-of-the-art results on the
challenging OccludedLINEMOD dataset. Our code and dataset are available at
this https URL.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,724 | MIP Formulations for the Steiner Forest Problem | The Steiner Forest problem is among the fundamental network design problems.
Finding tight linear programming bounds for the problem is the key for both
fast Branch-and-Bound algorithms and good primal-dual approximations. On the
theoretical side, the best known bound can be obtained from an integer program
[KLSv08]. It guarantees a value that is a $(2-\varepsilon)$-approximation of the integer
optimum. On the practical side, bounds from a mixed integer program by Magnanti
and Raghavan [MR05] are very close to the integer optimum in computational
experiments, but the size of the model limits its practical usefulness. We
compare a number of known integer programming formulations for the problem and
propose three new formulations. We can show that the bounds from our two new
cut-based formulations for the problem are within a factor of 2 of the integer
optimum. In our experiments, the formulations prove to be both tractable and
provide better bounds than all other tractable formulations. In particular, the
factor to the integer optimum is much better than 2 in the experiments.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,725 | Luminescence in germania-silica fibers in 1-2 μm region | We analyze the origins of the luminescence in germania-silica fibers with
high germanium concentration (about 30 mol. % GeO2) in the region 1-2 {\mu}m
with a laser pump at a wavelength of 532 nm. We show that such fibers exhibit
a high level of luminescence, which makes the observation of photon triplets
generated in a third-order spontaneous parametric down-conversion process in
such fibers unlikely. The only efficient approach to reducing the luminescence
is hydrogen saturation of the fiber samples; however, even in this case the
level of residual luminescence is still too high for three-photon
registration.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,726 | Boundedness in languages of infinite words | We define a new class of languages of $\omega$-words, strictly extending
$\omega$-regular languages.
One way to present this new class is by a type of regular expressions. The
new expressions are an extension of $\omega$-regular expressions where two new
variants of the Kleene star $L^*$ are added: $L^B$ and $L^S$. These new
exponents are used to say that parts of the input word have bounded size, and
that parts of the input can have arbitrarily large sizes, respectively. For
instance, the expression $(a^Bb)^\omega$ represents the language of infinite
words over the letters $a,b$ where there is a common bound on the number of
consecutive letters $a$. The expression $(a^Sb)^\omega$ represents a similar
language, but this time the distance between consecutive $b$'s is required to
tend to infinity.
We develop a theory for these languages, with a focus on decidability and
closure. We define an equivalent automaton model, extending Büchi automata.
The main technical result is a complementation lemma that works for languages
where only one type of exponent---either $L^B$ or $L^S$---is used.
We use the closure and decidability results to obtain partial decidability
results for the logic MSOLB, a logic obtained by extending monadic second-order
logic with new quantifiers that speak about the size of sets.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,727 | Maximum Regularized Likelihood Estimators: A General Prediction Theory and Applications | Maximum regularized likelihood estimators (MRLEs) are arguably the most
established class of estimators in high-dimensional statistics. In this paper,
we derive guarantees for MRLEs in Kullback-Leibler divergence, a general
measure of prediction accuracy. We assume only that the densities have a convex
parametrization and that the regularization is definite and positive
homogenous. The results thus apply to a very large variety of models and
estimators, such as tensor regression and graphical models with convex and
non-convex regularized methods. A main conclusion is that MRLEs are broadly
consistent in prediction - regardless of whether restricted eigenvalues or
similar conditions hold.
| 0 | 0 | 1 | 1 | 0 | 0 |
16,728 | Emergent $\mathrm{SU}(4)$ Symmetry in $α$-ZrCl$_3$ and Crystalline Spin-Orbital Liquids | While the enhancement of the spin-space symmetry from the usual
$\mathrm{SU}(2)$ to $\mathrm{SU}(N)$ is promising for finding nontrivial
quantum spin liquids, its realization in magnetic materials remains
challenging. Here we propose a new mechanism by which the $\mathrm{SU}(4)$
symmetry emerges in the strong spin-orbit coupling limit. In $d^1$ transition
metal compounds with edge-sharing anion octahedra, the spin-orbit coupling
gives rise to strongly bond-dependent and apparently $\mathrm{SU}(4)$-breaking
hopping between the $J_\textrm{eff}=3/2$ quartets. However, in the honeycomb
structure, a gauge transformation maps the system to an
$\mathrm{SU}(4)$-symmetric Hubbard model. In the strong repulsion limit at
quarter filling, as realized in $\alpha$-ZrCl$_3,$ the low-energy effective
model is the $\mathrm{SU}(4)$ Heisenberg model on the honeycomb lattice, which
cannot have a trivial gapped ground state and is expected to host a gapless
spin-orbital liquid. By generalizing this model to other three-dimensional
lattices, we also propose crystalline spin-orbital liquids protected by this
emergent $\mathrm{SU}(4)$ symmetry and space group symmetries.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,729 | Central elements of the Jennings basis and certain Morita invariants | From Morita theoretic viewpoint, computing Morita invariants is important. We
prove that the intersection of the center and the $n$th (right) socle $ZS^n(A)
:= Z(A) \cap \operatorname{Soc}^n(A)$ of a finite-dimensional algebra $A$ is a
Morita invariant; this is a generalization of important Morita invariants ---
the center $Z(A)$ and the Reynolds ideal $ZS^1(A)$. As an example, we also
study $ZS^n(FG)$ for the group algebra $FG$ of a finite $p$-group $G$ over a
field $F$ of positive characteristic $p$. Such an algebra has a basis along the
socle filtration, known as the Jennings basis. We prove certain elements of the
Jennings basis are central and hence form a linearly independent set of
$ZS^n(FG)$. In fact, such elements form a basis of $ZS^n(FG)$ for every integer
$1 \le n \le p$ if $G$ is powerful. As a corollary we have
$\operatorname{Soc}^p(FG) \subseteq Z(FG)$ if $G$ is powerful.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,730 | Linear Disentangled Representation Learning for Facial Actions | Limited annotated data available for the recognition of facial expression and
action units embarrasses the training of deep networks, which can learn
disentangled invariant features. However, a linear model with just several
parameters normally is not demanding in terms of training data. In this paper,
we propose an elegant linear model to untangle confounding factors in
challenging realistic multichannel signals such as 2D face videos. The simple
yet powerful model does not rely on huge training data and is natural for
recognizing facial actions without explicitly disentangling the identity. Based
on well-understood, intuitive linear models such as Sparse Representation based
Classification (SRC), previous attempts require a preprocessing step of explicit
decoupling, which is practically inexact. Instead, we exploit the low-rank
property across frames to subtract the underlying neutral faces which are
modeled jointly with sparse representation on the action components with group
sparsity enforced. On the extended Cohn-Kanade dataset (CK+), our one-shot
automatic method on raw face videos performs as competitively as SRC applied on
manually prepared action components and performs even better than SRC in terms
of true positive rate. We apply the model to the even more challenging task of
facial action unit recognition, verified on the MPI Face Video Database
(MPI-VDB) achieving a decent performance. All the programs and data have been
made publicly available.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,731 | ChemGAN challenge for drug discovery: can AI reproduce natural chemical diversity? | Generating molecules with desired chemical properties is important for drug
discovery. The use of generative neural networks is promising for this task.
However, from visual inspection, it often appears that generated samples lack
diversity. In this paper, we quantify this internal chemical diversity, and we
raise the following challenge: can a nontrivial AI model reproduce natural
chemical diversity for desired molecules? To illustrate this question, we
consider two generative models: a Reinforcement Learning model and the recently
introduced ORGAN. Both fail at this challenge. We hope this challenge will
stimulate research in this direction.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,732 | Nodal domains, spectral minimal partitions, and their relation to Aharonov-Bohm operators | This survey is a short version of a chapter written by the first two authors
in the book [A. Henrot, editor. Shape optimization and spectral theory. Berlin:
De Gruyter, 2017] (where more details and references are given) but we have
decided here to put more emphasis on the role of the Aharonov-Bohm operators
which appear to be a useful tool coming from physics for understanding a
problem motivated either by spectral geometry or dynamics of population.
Similar questions appear also in Bose-Einstein theory. Finally some open
problems which might be of interest are mentioned.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,733 | Optimal control of two qubits via a single cavity drive in circuit quantum electrodynamics | Optimization of the fidelity of control operations is of critical importance
in the pursuit of fault-tolerant quantum computation. We apply optimal control
techniques to demonstrate that a single drive via the cavity in circuit quantum
electrodynamics can implement a high-fidelity two-qubit all-microwave gate that
directly entangles the qubits via the mutual qubit-cavity couplings. This is
performed by driving at one of the qubits' frequencies, which generates a
conditional two-qubit gate but also generates other spurious interactions.
These optimal control techniques are used to find pulse shapes that can perform
this two-qubit gate with high fidelity, robust against errors in the system
parameters. The simulations were all performed using experimentally relevant
parameters and constraints.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,734 | Modular Representation of Layered Neural Networks | Layered neural networks have greatly improved the performance of various
applications including image processing, speech recognition, natural language
processing, and bioinformatics. However, it is still difficult to discover or
interpret knowledge from the inference provided by a layered neural network,
since its internal representation has many nonlinear and complex parameters
embedded in hierarchical layers. Therefore, it becomes important to establish a
new methodology by which layered neural networks can be understood.
In this paper, we propose a new method for extracting a global and simplified
structure from a layered neural network. Based on network analysis, the
proposed method detects communities or clusters of units with similar
connection patterns. We show its effectiveness by applying it to three use
cases. (1) Network decomposition: it can decompose a trained neural network
into multiple small independent networks thus dividing the problem and reducing
the computation time. (2) Training assessment: the appropriateness of a trained
result with a given hyperparameter or randomly chosen initial parameters can be
evaluated by using a modularity index. And (3) data analysis: in practical data
it reveals the community structure in the input, hidden, and output layers,
which serves as a clue for discovering knowledge from a trained neural network.
| 1 | 0 | 0 | 1 | 0 | 0 |
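The modularity index used in training assessment (use case 2 above) can be made concrete. A minimal sketch of Newman's modularity for a vertex partition; the toy graph, partition, and function names are illustrative and not taken from the paper:

```python
def modularity(adj, communities):
    """Newman modularity Q of a vertex partition (communities: node -> label)."""
    m2 = sum(len(nbrs) for nbrs in adj.values())  # 2m: each edge counted twice
    q = 0.0
    for i in adj:
        for j in adj:
            a_ij = 1.0 if j in adj[i] else 0.0
            if communities[i] == communities[j]:
                q += a_ij - len(adj[i]) * len(adj[j]) / m2
    return q / m2

# two triangles joined by one edge; the natural split scores well
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
part = {0: 'a', 1: 'a', 2: 'a', 3: 'b', 4: 'b', 5: 'b'}
print(round(modularity(adj, part), 3))  # -> 0.357
```

Splitting the two triangles apart yields Q = 5/14, whereas lumping all vertices into one community gives Q = 0, illustrating how the index can score the appropriateness of a decomposition.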
16,735 | A Guide to General-Purpose Approximate Bayesian Computation Software | This Chapter, "A Guide to General-Purpose ABC Software", is to appear in the
forthcoming Handbook of Approximate Bayesian Computation (2018). We present
general-purpose software to perform Approximate Bayesian Computation (ABC) as
implemented in the R packages abc and EasyABC and the C++ program ABCtoolbox.
With simple toy models we demonstrate how to perform parameter inference, model
selection, validation and optimal choice of summary statistics. We demonstrate
how to combine ABC with Markov Chain Monte Carlo and describe a realistic
population genetics application.
| 0 | 0 | 0 | 1 | 0 | 0 |
16,736 | Feature Selection Facilitates Learning Mixtures of Discrete Product Distributions | Feature selection can facilitate the learning of mixtures of discrete random
variables as they arise, e.g. in crowdsourcing tasks. Intuitively, not all
workers are equally reliable but, if the less reliable ones could be
eliminated, then learning should be more robust. By analogy with Gaussian
mixture models, we seek a low-order statistical approach, and here introduce an
algorithm based on the (pairwise) mutual information. This induces an order
over workers that is well structured for the `one coin' model. More generally,
it is justified by a goodness-of-fit measure and is validated empirically.
Improvement in real data sets can be substantial.
| 1 | 0 | 0 | 1 | 0 | 0 |
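The pairwise-mutual-information idea above can be illustrated with a short sketch (function and variable names are mine, not the authors'): score each worker by the total mutual information its labels share with every other worker's labels, which induces an order over workers.

```python
from collections import Counter
from itertools import combinations
from math import log2

def mutual_information(x, y):
    """Empirical mutual information (in bits) between two discrete label sequences."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * log2((c / n) / (px[a] / n * (py[b] / n)))
               for (a, b), c in pxy.items())

def rank_workers(labels):
    """Order workers by total pairwise MI with all other workers (most informative first)."""
    score = {w: 0.0 for w in labels}
    for u, v in combinations(labels, 2):
        mi = mutual_information(labels[u], labels[v])
        score[u] += mi
        score[v] += mi
    return sorted(score, key=score.get, reverse=True)

# three workers labelling four items; worker 'c' answers unrelatedly to the others
labels = {'a': [0, 1, 1, 0], 'b': [0, 1, 1, 0], 'c': [1, 1, 0, 0]}
print(rank_workers(labels))  # the unreliable worker 'c' comes last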
16,737 | 3D Human Pose Estimation on a Configurable Bed from a Pressure Image | Robots have the potential to assist people in bed, such as in healthcare
settings, yet bedding materials like sheets and blankets can make observation
of the human body difficult for robots. A pressure-sensing mat on a bed can
provide pressure images that are relatively insensitive to bedding materials.
However, prior work on estimating human pose from pressure images has been
restricted to 2D pose estimates and flat beds. In this work, we present two
convolutional neural networks to estimate the 3D joint positions of a person in
a configurable bed from a single pressure image. The first network directly
outputs 3D joint positions, while the second outputs a kinematic model that
includes estimated joint angles and limb lengths. We evaluated our networks on
data from 17 human participants with two bed configurations: supine and seated.
Our networks achieved a mean joint position error of 77 mm when tested with
data from people outside the training set, outperforming several baselines. We
also present a simple mechanical model that provides insight into ambiguity
associated with limbs raised off of the pressure mat, and demonstrate that
Monte Carlo dropout can be used to estimate pose confidence in these
situations. Finally, we provide a demonstration in which a mobile manipulator
uses our network's estimated kinematic model to reach a location on a person's
body in spite of the person being seated in a bed and covered by a blanket.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,738 | Multilingual Hierarchical Attention Networks for Document Classification | Hierarchical attention networks have recently achieved remarkable performance
for document classification in a given language. However, when multilingual
document collections are considered, training such models separately for each
language entails linear parameter growth and lack of cross-language transfer.
Learning a single multilingual model with fewer parameters is therefore a
challenging but potentially beneficial objective. To this end, we propose
multilingual hierarchical attention networks for learning document structures,
with shared encoders and/or shared attention mechanisms across languages, using
multi-task learning and an aligned semantic space as input. We evaluate the
proposed models on multilingual document classification with disjoint label
sets, on a large dataset which we provide, with 600k news documents in 8
languages, and 5k labels. The multilingual models outperform monolingual ones
in low-resource as well as full-resource settings, and use fewer parameters,
thus confirming their computational efficiency and the utility of
cross-language transfer.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,739 | The complexity of recognizing minimally tough graphs | Let $t$ be a positive real number. A graph is called $t$-tough, if the
removal of any cutset $S$ leaves at most $|S|/t$ components. The toughness of a
graph is the largest $t$ for which the graph is $t$-tough. A graph is minimally
$t$-tough, if the toughness of the graph is $t$ and the deletion of any edge
from the graph decreases the toughness. The complexity class DP is the set of
all languages that can be expressed as the intersection of a language in NP and
a language in coNP. We prove that recognizing minimally $t$-tough graphs is
DP-complete for any positive integer $t$ and for any positive rational number
$t \leq 1/2$.
| 1 | 0 | 0 | 0 | 0 | 0 |
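The definitions above are concrete enough to check by brute force on small graphs. A sketch (exponential in the number of vertices, for illustration only; all names are mine): the toughness is the minimum of |S| / c(G − S) over all cutsets S, where c counts connected components.

```python
from itertools import combinations

def components(adj, removed):
    """Count connected components after deleting the vertices in `removed`."""
    remaining = set(adj) - removed
    seen, count = set(), 0
    for start in remaining:
        if start in seen:
            continue
        count += 1
        stack = [start]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            stack.extend(u for u in adj[v] if u in remaining)
    return count

def toughness(adj):
    """min |S| / c(G - S) over all cutsets S; None if the graph has no cutset."""
    vertices = list(adj)
    best = None
    for k in range(1, len(vertices)):
        for S in combinations(vertices, k):
            c = components(adj, set(S))
            if c > 1:  # S disconnects the graph, so it is a cutset
                ratio = k / c
                best = ratio if best is None else min(best, ratio)
    return best

# C4 (4-cycle): removing two opposite vertices leaves 2 components, so toughness is 1
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(toughness(c4))  # -> 1.0
```

Deleting any edge of C4 leaves a path, whose toughness drops to 1/2, so C4 is minimally 1-tough in the sense defined above.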
16,740 | Homeostatic plasticity and external input shape neural network dynamics | In vitro and in vivo spiking activity clearly differ. Whereas networks in
vitro develop strong bursts separated by periods of very little spiking
activity, in vivo cortical networks show continuous activity. This is puzzling
considering that both networks presumably share similar single-neuron dynamics
and plasticity rules. We propose that the defining difference between in vitro
and in vivo dynamics is the strength of external input. In vitro, networks are
virtually isolated, whereas in vivo every brain area receives continuous input.
We analyze a model of spiking neurons in which the input strength, mediated by
spike rate homeostasis, determines the characteristics of the dynamical state.
In more detail, our analytical and numerical results on various network
topologies show consistently that under increasing input, homeostatic
plasticity generates distinct dynamic states, from bursting, to
close-to-critical, reverberating and irregular states. This implies that the
dynamic state of a neural network is not fixed but can readily adapt to the
input strengths. Indeed, our results match experimental spike recordings in
vitro and in vivo: the in vitro bursting behavior is consistent with a state
generated by very low network input (< 0.1%), whereas in vivo activity suggests
that on the order of 1% recorded spikes are input-driven, resulting in
reverberating dynamics. Importantly, this predicts that one can abolish the
ubiquitous bursts of in vitro preparations, and instead impose dynamics
comparable to in vivo activity by exposing the system to weak long-term
stimulation, thereby opening new paths to establish an in vivo-like assay in
vitro for basic as well as neurological studies.
| 0 | 0 | 0 | 0 | 1 | 0 |
16,741 | How Sensitive are Sensitivity-Based Explanations? | We propose a simple objective evaluation measure for explanations of a
complex black-box machine learning model. While most such model explanations
have largely been evaluated via qualitative measures, such as how humans might
qualitatively perceive the explanations, it is vital to also consider objective
measures such as the one we propose in this paper. Our evaluation measure,
which we naturally call sensitivity, is simple: it characterizes how an explanation
changes as we vary the test input, and depending on how we measure these
changes, and how we vary the input, we arrive at different notions of
sensitivity. We also provide a calculus for deriving sensitivity of complex
explanations in terms of that for simpler explanations, which thus allows an
easy computation of sensitivities for yet-to-be-proposed explanations. One
advantage of an objective evaluation measure is that we can optimize the
explanation with respect to the measure: we show that (1) any given explanation
can be simply modified to improve its sensitivity with just a modest deviation
from the original explanation, and (2) gradient based explanations of an
adversarially trained network are less sensitive. Perhaps surprisingly, our
experiments show that explanations optimized to have lower sensitivity can be
more faithful to the model predictions.
| 1 | 0 | 0 | 1 | 0 | 0 |
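One way to instantiate the notion of sensitivity described above (a minimal sketch of my own, not the authors' code): take a finite-difference gradient as the explanation, and measure the largest change of that explanation over random perturbations of the test input.

```python
import random
from math import sqrt

def grad_explanation(f, x, eps=1e-5):
    """Finite-difference gradient of f at x, used as a simple saliency explanation."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((f(xp) - f(xm)) / (2 * eps))
    return g

def sensitivity(f, x, radius=0.1, samples=100, seed=0):
    """Max change in the explanation under random perturbations of x within `radius`."""
    rng = random.Random(seed)
    base = grad_explanation(f, x)
    worst = 0.0
    for _ in range(samples):
        xp = [xi + rng.uniform(-radius, radius) for xi in x]
        e = grad_explanation(f, xp)
        worst = max(worst, sqrt(sum((a - b) ** 2 for a, b in zip(base, e))))
    return worst

f = lambda v: v[0] ** 2 + 3 * v[1]  # smooth toy model standing in for a black box
print(sensitivity(f, [1.0, 2.0]))   # small for this smooth f (bounded by 2 * radius)
```

Varying how the input is perturbed and how the change is measured yields the different notions of sensitivity mentioned in the abstract.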
16,742 | Synchronization Strings: Channel Simulations and Interactive Coding for Insertions and Deletions | We present many new results related to reliable (interactive) communication
over insertion-deletion channels. Synchronization errors, such as insertions
and deletions, strictly generalize the usual symbol corruption errors and are
much harder to protect against.
We show how to hide the complications of synchronization errors in many
applications by introducing very general channel simulations which efficiently
transform an insertion-deletion channel into a regular symbol corruption
channel with an error rate larger by a constant factor and a slightly smaller
alphabet. We generalize synchronization string based methods which were
recently introduced as a tool to design essentially optimal error correcting
codes for insertion-deletion channels. Our channel simulations depend on the
fact that, at the cost of increasing the error rate by a constant factor,
synchronization strings can be decoded in a streaming manner that preserves
linearity of time. We also provide a lower bound showing that this constant
factor cannot be improved to $1+\epsilon$, in contrast to what is achievable
for error correcting codes. Our channel simulations drastically generalize the
applicability of synchronization strings.
We provide new interactive coding schemes which simulate any interactive
two-party protocol over an insertion-deletion channel. Our results improve over
the interactive coding schemes of Braverman et al. [TransInf 2017] and Sherstov
and Wu [FOCS 2017], which achieve a small constant rate and require exponential
time computations, with respect to computational and communication
complexities. We provide the first computationally efficient interactive coding
schemes for synchronization errors, the first coding scheme with a rate
approaching one for small noise rates, and also the first coding scheme that
works over arbitrarily small alphabet sizes.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,743 | Learning to Generate Samples from Noise through Infusion Training | In this work, we investigate a novel training procedure to learn a generative
model as the transition operator of a Markov chain, such that, when applied
repeatedly on an unstructured random noise sample, it will denoise it into a
sample that matches the target distribution from the training set. The novel
training procedure to learn this progressive denoising operation involves
sampling from a slightly different chain than the model chain used for
generation in the absence of a denoising target. In the training chain we
infuse information from the training target example that we would like the
chains to reach with a high probability. The thus learned transition operator
is able to produce quality and varied samples in a small number of steps.
Experiments show competitive results compared to the samples generated with a
basic Generative Adversarial Net.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,744 | Worst-case vs Average-case Design for Estimation from Fixed Pairwise Comparisons | Pairwise comparison data arises in many domains, including tournament
rankings, web search, and preference elicitation. Given noisy comparisons of a
fixed subset of pairs of items, we study the problem of estimating the
underlying comparison probabilities under the assumption of strong stochastic
transitivity (SST). We also consider the noisy sorting subclass of the SST
model. We show that when the assignment of items to the topology is arbitrary,
these permutation-based models, unlike their parametric counterparts, do not
admit consistent estimation for most comparison topologies used in practice. We
then demonstrate that consistent estimation is possible when the assignment of
items to the topology is randomized, thus establishing a dichotomy between
worst-case and average-case designs. We propose two estimators in the
average-case setting and analyze their risk, showing that it depends on the
comparison topology only through the degree sequence of the topology. The rates
achieved by these estimators are shown to be optimal for a large class of
graphs. Our results are corroborated by simulations on multiple comparison
topologies.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,745 | Decoupling multivariate polynomials: interconnections between tensorizations | Decoupling multivariate polynomials is useful for obtaining an insight into
the workings of a nonlinear mapping, performing parameter reduction, or
approximating nonlinear functions. Several different tensor-based approaches
have been proposed independently for this task, involving different tensor
representations of the functions, and ultimately leading to a canonical
polyadic decomposition.
We first show that the involved tensors are related by a linear
transformation, and that their CP decompositions and uniqueness properties are
closely related. This connection provides a way to better assess which of the
methods should be favored in certain problem settings, and may be a starting
point to unify the two approaches. Second, we show that taking into account the
previously ignored intrinsic structure in the tensor decompositions improves
the uniqueness properties of the decompositions and thus enlarges the
applicability range of the methods.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,746 | Arcades: A deep model for adaptive decision making in voice controlled smart-home | In a voice-controlled smart-home, a controller must respond not only to
user's requests but also according to the interaction context. This paper
describes Arcades, a system which uses deep reinforcement learning to extract
context from a graphical representation of the home automation system and to
continuously adapt its behavior to the user's. This system is robust to changes
in the environment (sensor breakdown or addition) through its graphical
representation (which scales well) and its reinforcement mechanism (which
adapts well). Experiments on realistic data demonstrate that this method is a
promising route to long-term context-aware control of a smart home.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,747 | Evolutionary Centrality and Maximal Cliques in Mobile Social Networks | This paper introduces an evolutionary approach to enhance the process of
finding central nodes in mobile networks. This can provide essential
information and important applications in mobile and social networks. This
evolutionary approach accounts for the dynamics of the network by taking the
central nodes from previous time slots into consideration. We also study the
applicability of maximal cliques algorithms in mobile social networks and how
it can be used to find the central nodes based on the discovered maximal
cliques. The experimental results are promising and show a significant
enhancement in finding the central nodes.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,748 | Band structure engineered layered metals for low-loss plasmonics | Plasmonics currently faces the problem of seemingly inevitable optical losses
occurring in the metallic components that challenges the implementation of
essentially any application. In this work we show that Ohmic losses are reduced
in certain layered metals, such as the transition metal dichalcogenide TaS$_2$,
due to an extraordinarily small density of states for scattering in the near-IR
originating from their special electronic band structure. Based on this
observation we propose a new class of band structure engineered van der Waals
layered metals composed of hexagonal transition metal chalcogenide-halide
layers with greatly suppressed intrinsic losses. Using first-principles
calculations we show that the suppression of optical losses leads to improved
performance for thin film waveguiding and transformation optics.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,749 | Acceleration through Optimistic No-Regret Dynamics | We consider the problem of minimizing a smooth convex function by reducing
the optimization to computing the Nash equilibrium of a particular zero-sum
convex-concave game. Zero-sum games can be solved using online learning
dynamics, where a classical technique involves simulating two no-regret
algorithms that play against each other and, after $T$ rounds, the average
iterate is guaranteed to solve the original optimization problem with error
decaying as $O(\log T/T)$. In this paper we show that the technique can be
enhanced to a rate of $O(1/T^2)$ by extending recent work \cite{RS13,SALS15}
that leverages \textit{optimistic learning} to speed up equilibrium
computation. The resulting optimization algorithm derived from this analysis
coincides \textit{exactly} with the well-known Nesterov accelerated gradient
method \cite{N83a}, and indeed the same story allows us to recover several
variants of Nesterov's algorithm via small tweaks. We are also able to establish the accelerated
linear rate for a function which is both strongly-convex and smooth. This
methodology unifies a number of different iterative optimization methods: we
show that the Heavy Ball algorithm is precisely the non-optimistic variant of
Nesterov's method, and recent prior work already established a similar
perspective on Frank-Wolfe \cite{AW17,ALLW18}.
| 0 | 0 | 0 | 1 | 0 | 0 |
16,750 | A Full Bayesian Model to Handle Structural Ones and Missingness in Economic Evaluations from Individual-Level Data | Economic evaluations from individual-level data are an important component of
the process of technology appraisal, with a view to informing resource
allocation decisions. A critical problem in these analyses is that both
effectiveness and cost data typically present some complexity (e.g. non
normality, spikes and missingness) that should be addressed using appropriate
methods. However, in routine analyses, simple standardised approaches are
typically used, possibly leading to biased inferences. We present a general
Bayesian framework that can handle the complexity. We show the benefits of
using our approach with a motivating example, the MenSS trial, for which there
are spikes at one in the effectiveness and missingness in both outcomes. We
contrast a set of increasingly complex models and perform sensitivity analysis
to assess the robustness of the conclusions to a range of plausible missingness
assumptions. This paper highlights the importance of adopting a comprehensive
modelling approach to economic evaluations and the strategic advantages of
building these complex models within a Bayesian framework.
| 0 | 0 | 0 | 1 | 0 | 0 |
16,751 | Misconceptions about Calorimetry | In the past 50 years, calorimeters have become the most important detectors
in many particle physics experiments, especially experiments in colliding-beam
accelerators at the energy frontier. In this paper, we describe and discuss a
number of common misconceptions about these detectors, as well as the
consequences of these misconceptions. We hope that it may serve as a useful
source of information for young colleagues who want to familiarize themselves
with these tricky instruments.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,752 | Exothermicity is not a necessary condition for enhanced diffusion of enzymes | Recent experiments have revealed that the diffusivity of exothermic and fast
enzymes is enhanced when they are catalytically active, and different physical
mechanisms have been explored and quantified to account for this observation.
We perform measurements on the endothermic and relatively slow enzyme aldolase,
which also shows substrate-induced enhanced diffusion. We propose a new
physical paradigm, which reveals that the diffusion coefficient of a model
enzyme hydrodynamically coupled to its environment increases significantly when
undergoing changes in conformational fluctuations in a substrate-dependent
manner, and is independent of the overall turnover rate of the underlying
enzymatic reaction. Our results show that substrate-induced enhanced diffusion
of enzyme molecules can be explained within an equilibrium picture, and that
the exothermicity of the catalyzed reaction is not a necessary condition for
the observation of this phenomenon.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,753 | Automatic Detection of Knee Joints and Quantification of Knee Osteoarthritis Severity using Convolutional Neural Networks | This paper introduces a new approach to automatically quantify the severity
of knee OA using X-ray images. Automatically quantifying knee OA severity
involves two steps: first, automatically localizing the knee joints; next,
classifying the localized knee joint images. We introduce a new approach to
automatically detect the knee joints using a fully convolutional neural network
(FCN). We train convolutional neural networks (CNN) from scratch to
automatically quantify the knee OA severity optimizing a weighted ratio of two
loss functions: categorical cross-entropy and mean-squared loss. This joint
training further improves the overall quantification of knee OA severity, with
the added benefit of naturally producing simultaneous multi-class
classification and regression outputs. Two public datasets are used to evaluate
our approach, the Osteoarthritis Initiative (OAI) and the Multicenter
Osteoarthritis Study (MOST), with extremely promising results that outperform
existing approaches.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,754 | Data-Driven Filtered Reduced Order Modeling Of Fluid Flows | We propose a data-driven filtered reduced order model (DDF-ROM) framework for
the numerical simulation of fluid flows. The novel DDF-ROM framework consists
of two steps: (i) In the first step, we use explicit ROM spatial filtering of
the nonlinear PDE to construct a filtered ROM. This filtered ROM is
low-dimensional, but is not closed (because of the nonlinearity in the given
PDE). (ii) In the second step, we use data-driven modeling to close the
filtered ROM, i.e., to model the interaction between the resolved and
unresolved modes. To this end, we use a quadratic ansatz to model this
interaction and close the filtered ROM. To find the new coefficients in the
closed filtered ROM, we solve an optimization problem that minimizes the
difference between the full order model data and our ansatz. We emphasize that
the new DDF-ROM is built on general ideas of spatial filtering and optimization
and is independent of (restrictive) phenomenological arguments.
We investigate the DDF-ROM in the numerical simulation of a 2D channel flow
past a circular cylinder at Reynolds number $Re=100$. The DDF-ROM is
significantly more accurate than the standard projection ROM. Furthermore, the
computational costs of the DDF-ROM and the standard projection ROM are similar,
both costs being orders of magnitude lower than the computational cost of the
full order model. We also compare the new DDF-ROM with modern ROM closure
models in the numerical simulation of the 1D Burgers equation. The DDF-ROM is
more accurate and significantly more efficient than these ROM closure models.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,755 | FEAST Eigensolver for Nonlinear Eigenvalue Problems | The linear FEAST algorithm is a method for solving linear eigenvalue
problems. It uses complex contour integration to calculate the eigenvectors
whose eigenvalues are located inside some user-defined region in the
complex plane. This makes it possible to parallelize the process of solving
eigenvalue problems by simply dividing the complex plane into a collection of
disjoint regions and calculating the eigenpairs in each region independently of
the eigenpairs in the other regions. In this paper we present a generalization
of the linear FEAST algorithm that can be used to solve nonlinear eigenvalue
problems. Like its linear progenitor, the nonlinear FEAST algorithm can be used
to solve nonlinear eigenvalue problems for the eigenpairs whose eigenvalues lie
in a user-defined region in the complex plane, thereby allowing for the
calculation of large numbers of eigenpairs in parallel. We describe the
nonlinear FEAST algorithm, and use several physically-motivated examples to
demonstrate its properties.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,756 | Twin Primes In Quadratic Arithmetic Progressions | A recent heuristic argument based on basic concepts in spectral analysis
showed that the twin prime conjecture and a few other related prime-counting
problems are valid. A rigorous version of the spectral method, and a proof for
the existence of infinitely many quadratic twin primes $n^{2}+1$ and $n^{2}+3$,
$n \geq 1$, are proposed in this note.
| 0 | 0 | 1 | 0 | 0 | 0 |
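The objects in question are easy to enumerate empirically; a quick sketch of my own (trial division, adequate only for small n) listing the n for which n^2+1 and n^2+3 are simultaneously prime. The claim of the note is precisely that this list is infinite.

```python
def is_prime(m):
    """Trial-division primality test, adequate for small m."""
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    d = 3
    while d * d <= m:
        if m % d == 0:
            return False
        d += 2
    return True

# n for which n^2 + 1 and n^2 + 3 are both prime
quad_twins = [n for n in range(1, 100) if is_prime(n * n + 1) and is_prime(n * n + 3)]
print(quad_twins)  # starts 2, 4, 10, 14, ... (n > 1 must be even, else n^2 + 1 is even)
```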
16,757 | Bit Complexity of Computing Solutions for Symmetric Hyperbolic Systems of PDEs with Guaranteed Precision | We establish upper bounds of bit complexity of computing solution operators
for symmetric hyperbolic systems of PDEs. Here we continue the research started
in our previous publications, where computability, in the rigorous sense of
computable analysis, has been established for solution operators of Cauchy and
dissipative boundary-value problems for such systems.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,758 | Relativistic corrections for the ground electronic state of molecular hydrogen | We recalculate the leading relativistic corrections for the ground electronic
state of the hydrogen molecule using variational method with explicitly
correlated functions which satisfy the interelectronic cusp condition. The new
computational approach allowed for the control of the numerical precision which
reached about 8 significant digits. More importantly, the updated theoretical
energies now disagree with the known experimental values, and we conclude
that the yet unknown relativistic recoil corrections might be larger than
previously anticipated.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,759 | Homogenization in Perforated Domains and Interior Lipschitz Estimates | We establish interior Lipschitz estimates at the macroscopic scale for
solutions to systems of linear elasticity with rapidly oscillating periodic
coefficients and mixed boundary conditions in domains periodically perforated
at a microscopic scale $\varepsilon$ by establishing $H^1$-convergence rates
for such solutions. The interior estimates are derived directly without the use
of compactness via an argument presented in [3] that was adapted for elliptic
equations in [2] and [11]. As a consequence, we derive a Liouville type
estimate for solutions to the systems of linear elasticity in unbounded
periodically perforated domains.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,760 | Opinion-Based Centrality in Multiplex Networks: A Convex Optimization Approach | Most people simultaneously belong to several distinct social networks, in
which their relations can be different. They have opinions about certain
topics, which they share and spread on these networks, and are influenced by
the opinions of other persons. In this paper, we build upon this observation to
propose a new nodal centrality measure for multiplex networks. Our measure,
called Opinion centrality, is based on a stochastic model representing opinion
propagation dynamics in such a network. We formulate an optimization problem
consisting in maximizing the opinion of the whole network when controlling an
external influence able to affect each node individually. We find a
mathematical closed form of this problem, and use its solution to derive our
centrality measure. According to the opinion centrality, the more a node is
worth investing external influence in, the more central it is. We perform an
empirical study of the proposed centrality over a toy network, as well as a
collection of real-world networks. Our measure is generally negatively
correlated with existing multiplex centrality measures, and highlights
different types of nodes, in accordance with its definition.
| 1 | 1 | 0 | 0 | 0 | 0 |
16,761 | FDTD: solving 1+1D delay PDE in parallel | We present a proof of concept for solving a 1+1D complex-valued, delay
partial differential equation (PDE) that emerges in the study of waveguide
quantum electrodynamics (QED) by adapting the finite-difference time-domain
(FDTD) method. The delay term is spatially non-local, rendering conventional
approaches such as the method of lines inapplicable. We show that by properly
designing the grid and by supplying the (partial) exact solution as the
boundary condition, the delay PDE can be numerically solved. In addition, we
demonstrate that while the delay imposes strong data dependency, multi-thread
parallelization can nevertheless be applied to such a problem. Our code
provides a numerically exact solution to the time-dependent multi-photon
scattering problem in waveguide QED.
| 1 | 1 | 0 | 0 | 0 | 0 |
16,762 | Optimal Installation for Electric Vehicle Wireless Charging Lanes | Range anxiety, the persistent worry about not having enough battery power to
complete a trip, remains one of the major obstacles to widespread
electric-vehicle adoption. As cities look to attract more users to adopt
electric vehicles, the emergence of wireless in-motion car charging technology
presents itself as a solution to range anxiety. For a limited budget, cities
could face the decision problem of where to install these wireless charging
units. With a heavy price tag, an installation without a careful study can lead
to inefficient use of limited resources. In this work, we model the
installation of wireless charging units as an integer programming problem. We
use our basic formulation as a building block for different realistic
scenarios, carry out experiments using real geospatial data, and compare our
results to different heuristics.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,763 | Improved Fixed-Rank Nyström Approximation via QR Decomposition: Practical and Theoretical Aspects | The Nyström method is a popular technique for computing fixed-rank
approximations of large kernel matrices using a small number of landmark
points. In practice, to ensure high quality approximations, the number of
landmark points is chosen to be greater than the target rank. However, the
standard Nyström method uses a sub-optimal procedure for rank reduction
mainly due to its simplicity. In this paper, we highlight the drawbacks of
standard Nyström in terms of poor performance and lack of theoretical
guarantees. To address these issues, we present an efficient method for
generating improved fixed-rank Nyström approximations. Theoretical analysis
and numerical experiments are provided to demonstrate the advantages of the
modified method over the standard Nyström method. Overall, the aim of this
paper is to convince researchers to use the modified method, as it has nearly
identical computational complexity, is easy to code, and has greatly improved
accuracy in many cases.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,764 | Population splitting of rodlike swimmers in Couette flow | We present a quantitative analysis of the response of a dilute active
suspension of self-propelled rods (swimmers) in a planar channel subjected to
an imposed shear flow. To best capture the salient features of shear-induced
effects, we consider the case of an imposed Couette flow, providing a constant
shear rate across the channel. We argue that the steady-state behavior of
swimmers can be understood in the light of a population splitting phenomenon,
occurring as the shear rate exceeds a certain threshold, initiating the
reversal of swimming direction for a finite fraction of swimmers from down- to
upstream or vice versa, depending on swimmer position within the channel.
Swimmers thus split into two distinct, statistically significant and oppositely
swimming majority and minority populations. The onset of population splitting
translates into a transition from a self-propulsion-dominated regime to a
shear-dominated regime, corresponding to a unimodal-to-bimodal change in the
probability distribution function of the swimmer orientation. We present a
phase diagram in terms of the swim and flow Peclet numbers showing the
separation of these two regimes by a discontinuous transition line. Our results
shed further light on the behavior of swimmers in a shear flow and provide an
explanation for the previously reported non-monotonic behavior of the mean,
near-wall, parallel-to-flow orientation of swimmers with increasing shear
strength.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,765 | MultiAmdahl: Optimal Resource Allocation in Heterogeneous Architectures | Future multiprocessor chips will integrate many different units, each
tailored to a specific computation. When designing such a system, the chip
architect must decide how to distribute limited system resources such as area,
power, and energy among the computational units. We extend MultiAmdahl, an
analytical optimization technique for resource allocation in heterogeneous
architectures, for energy optimality under a variety of constant system power
scenarios. We conclude that reduction in constant system power should be met by
reallocating resources from general-purpose computing to heterogeneous
accelerator-dominated computing, to keep the overall energy consumption at a
minimum. We extend this conclusion to offer an intuition regarding
energy-optimal resource allocation in data center computing.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,766 | Coset Vertex Operator Algebras and $\W$-Algebras | We give an explicit description for the weight three generator of the coset
vertex operator algebra $C_{L_{\widehat{\sl_{n}}}(l,0)\otimes
L_{\widehat{\sl_{n}}}(1,0)}(L_{\widehat{\sl_{n}}}(l+1,0))$, for $n\geq 2, l\geq
1$. Furthermore, we prove that the commutant
$C_{L_{\widehat{\sl_{3}}}(l,0)\otimes
L_{\widehat{\sl_{3}}}(1,0)}(L_{\widehat{\sl_{3}}}(l+1,0))$ is isomorphic to the
$\W$-algebra $\W_{-3+\frac{l+3}{l+4}}(\sl_3)$, which confirms the conjecture
for the $\sl_3$ case that $C_{L_{\widehat{\frak g}}(l,0)\otimes
L_{\widehat{\frak g}}(1,0)}(L_{\widehat{\frak g}}(l+1,0))$ is isomorphic to
$\W_{-h+\frac{l+h}{l+h+1}}(\frak g)$ for a simply-laced Lie algebra ${\frak g}$
with Coxeter number $h$, for any positive integer $l$.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,767 | The Moore and the Myhill Property For Strongly Irreducible Subshifts Of Finite Type Over Group Sets | We prove the Moore and the Myhill property for strongly irreducible subshifts
over right amenable and finitely right generated left homogeneous spaces with
finite stabilisers. Both properties together mean that the global transition
function of each big-cellular automaton with a finite set of states and a finite
neighbourhood over such a subshift is surjective if and only if it is
pre-injective. This statement is known as the Garden of Eden theorem.
Pre-injectivity means that two global configurations that differ at most on a
finite subset and have the same image under the global transition function must
be identical.
| 1 | 0 | 1 | 0 | 0 | 0 |
16,768 | eSource for clinical trials: Implementation and evaluation of a standards-based approach in a real world trial | Objective: The Learning Health System (LHS) requires integration of research
into routine practice. eSource or embedding clinical trial functionalities into
routine electronic health record (EHR) systems has long been put forward as a
solution to the rising costs of research. We aimed to create and validate an
eSource solution that would be readily extensible as part of a LHS.
Materials and Methods: The EU FP7 TRANSFoRm project's approach is based on
dual modelling, using the Clinical Research Information Model (CRIM) and the
Clinical Data Integration Model of meaning (CDIM) to bridge the gap between
clinical and research data structures, using the CDISC Operational Data Model
(ODM) standard. Validation against GCP requirements was conducted in a clinical
site, and a cluster randomised evaluation by site nested into a live clinical
trial.
Results: Using the form definition element of ODM, we linked precisely
modelled data queries to data elements, constrained against CDIM concepts, to
enable automated patient identification for specific protocols and
prepopulation of electronic case report forms (e-CRF). Both control and eSource
sites recruited better than expected with no significant difference.
Completeness of clinical forms was significantly improved by eSource, but
Patient Reported Outcome Measures (PROMs) were less well completed on
smartphones than paper in this population.
Discussion: The TRANSFoRm approach provides an ontologically-based approach
to eSource in a low-resource, heterogeneous, highly distributed environment,
that allows precise prospective mapping of data elements in the EHR.
Conclusion: Further studies using this approach to CDISC should optimise the
delivery of PROMs, whilst building a sustainable infrastructure for eSource
with research networks, trials units and EHR vendors.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,769 | SMILES Enumeration as Data Augmentation for Neural Network Modeling of Molecules | Simplified Molecular Input Line Entry System (SMILES) is a single line text
representation of a unique molecule. One molecule can however have multiple
SMILES strings, which is a reason that canonical SMILES have been defined,
which ensures a one to one correspondence between SMILES string and molecule.
Here the fact that multiple SMILES represent the same molecule is explored as a
technique for data augmentation of a molecular QSAR dataset modeled by a long
short term memory (LSTM) cell based neural network. The augmented dataset was
130 times bigger than the original. The network trained with the augmented
dataset shows better performance on a test set when compared to a model built
with only one canonical SMILES string per molecule. The correlation coefficient
R2 on the test set was improved from 0.56 to 0.66 when using SMILES
enumeration, and the root mean square error (RMS) likewise fell from 0.62 to
0.55. The technique also works in the prediction phase. By taking the average
per molecule of the predictions for the enumerated SMILES a further improvement
to a correlation coefficient of 0.68 and a RMS of 0.52 was found.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,770 | Binary Tomography Reconstructions With Few Projections | We approach the tomographic problem in terms of a linear system of equations
$A\mathbf{x}=\mathbf{p}$ in an $(M\times N)$-sized lattice grid $\mathcal{A}$.
Using a finite number of directions always yields ghosts, thus
preventing uniqueness. Ghosts can be managed by increasing the number of
directions, which implies that also the number of collected projections (also
called bins) increases. Therefore, for a best performing outcome, a kind of
compromise should be sought among the number of employed directions, the number
of collected projections, and the percentage of exactly reconstructed image. In
this paper we wish to investigate such a problem in the case of binary images.
We move from a theoretical result that allows uniqueness in $\mathcal{A}$ with
just four suitably selected X-ray directions. This is exploited in studying the
structure of the allowed ghosts in the given lattice grid. The knowledge of the
ghost sizes, combined with geometrical information concerning the real valued
solution of $A\mathbf{x}=\mathbf{p}$ having minimal Euclidean norm, leads to an
explicit implementation of the previously obtained uniqueness theorem. This
provides an easy binary algorithm (BRA) that, in the grid model, quickly
returns perfect noise-free tomographic reconstructions.
Then we focus on the tomography-side relevant problem of reducing the number
of collected projections and, in the meantime, preserving a good quality of
reconstruction. It turns out that, using sets of just four suitable directions,
a high percentage of reconstructed pixels is preserved, even when the size of
the projection vector $\mathbf{p}$ is considerably smaller than the size of the
image to be reconstructed.
Results are commented and discussed, also showing applications of BRA on
phantoms with different features.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,771 | On finite determinacy of complete intersection singularities | We give an elementary combinatorial proof of the following fact: Every real
or complex analytic complete intersection germ X is equisingular -- in the
sense of the Hilbert-Samuel function -- with a germ of an algebraic set defined
by sufficiently long truncations of the defining equations of X.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,772 | Topological phase transformations and intrinsic size effects in ferroelectric nanoparticles | Composite materials comprised of ferroelectric nanoparticles in a dielectric
matrix are being actively investigated for a variety of functional properties
attractive for a wide range of novel electronic and energy harvesting devices.
However, the dependence of these functionalities on shapes, sizes, orientation
and mutual arrangement of ferroelectric particles is currently not fully
understood. In this study, we utilize a time-dependent Ginzburg-Landau approach
combined with coupled-physics finite-element-method based simulations to
elucidate the behavior of polarization in isolated spherical PbTiO3 or BaTiO3
nanoparticles embedded in a dielectric medium, including air. The equilibrium
polarization topology is strongly affected by particle diameter, as well as the
choice of inclusion and matrix materials, with monodomain, vortex-like and
multidomain patterns emerging for various combinations of size and materials
parameters. This leads to radically different polarization vs electric field
responses, resulting in highly tunable size-dependent dielectric properties
that should be possible to observe experimentally. Our calculations show that
there is a critical particle size below which ferroelectricity vanishes. For
the PbTiO3 particle, this size is 2 and 3.4 nm, respectively, for high- and
low-permittivity media. For the BaTiO3 particle, it is ~3.6 nm regardless of
the medium dielectric strength.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,773 | Honors Thesis: On the faithfulness of the Burau representation at roots of unity | We study the kernel of the evaluated Burau representation through the braid
element $\sigma_i \sigma_{i+1} \sigma_i$. The element is significant as a part
of the standard braid relation. We establish the form of this element's image
raised to the $n^{th}$ power. Interestingly, the cyclotomic polynomials arise
and can be used to define the expression. The main result of this paper is that
the Burau representation of the braid group of $n$ strands for $n \geq 3$ is
unfaithful at any primitive root of unity, excepting the first three.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,774 | On Interpolation and Symbol Elimination in Theory Extensions | In this paper we study possibilities of interpolation and symbol elimination
in extensions of a theory $\mathcal{T}_0$ with additional function symbols
whose properties are axiomatised using a set of clauses. We analyze situations
in which we can perform such tasks in a hierarchical way, relying on existing
mechanisms for symbol elimination in $\mathcal{T}_0$. This is for instance
possible if the base theory allows quantifier elimination. We analyze
possibilities of extending such methods to situations in which the base theory
does not allow quantifier elimination but has a model completion which does. We
illustrate the method on various examples.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,775 | Stellar Abundances for Galactic Archaeology Database IV - Compilation of Stars in Dwarf Galaxies | We have constructed the database of stars in the local group using the
extended version of the SAGA (Stellar Abundances for Galactic Archaeology)
database that contains stars in 24 dwarf spheroidal galaxies and ultra faint
dwarfs. The new version of the database includes more than 4500 stars in the
Milky Way, by removing the previous metallicity criterion of [Fe/H] <= -2.5,
and more than 6000 stars in the local group galaxies. We examined the validity
of using a combined data set for elemental abundances. We also checked the
consistency between the derived distances to individual stars and the
literature distances to their host galaxies. Using the updated database, the
characteristics of stars in dwarf galaxies are discussed. Our statistical
analyses of alpha-element abundances show that the change of the slope of the
[alpha/Fe] relative to [Fe/H] (so-called "knee") occurs at [Fe/H] = -1.0+-0.1
for the Milky Way. The knee positions for selected galaxies are derived by
applying the same method. The star formation histories of individual galaxies
are explored using the slope of the cumulative metallicity distribution function.
Radial gradients along the four directions are inspected in six galaxies where
we find no direction dependence of metallicity gradients along the major and
minor axes. The compilation of all the available data shows a lack of CEMP-s
population in dwarf galaxies, while there may be some CEMP-no stars at [Fe/H]
<~ -3 even in the very small sample. The inspection of the relationship between
Eu and Ba abundances confirms an anomalously Ba-rich population in Fornax,
which indicates a pre-enrichment of interstellar gas with r-process elements.
We do not find any evidence of anti-correlations in O-Na and Mg-Al abundances,
which characterises the abundance trends in the Galactic globular clusters.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,776 | Relativistic distortions in the large-scale clustering of SDSS-III BOSS CMASS galaxies | General relativistic effects have long been predicted to subtly influence the
observed large-scale structure of the universe. The current generation of
galaxy redshift surveys has reached a size at which detection of such effects is
becoming feasible. In this paper, we report the first detection of the redshift
asymmetry from the cross-correlation function of two galaxy populations which
is consistent with relativistic effects. The dataset is taken from the Sloan
Digital Sky Survey DR12 CMASS galaxy sample, and we detect the asymmetry at the
$2.7\sigma$ level by applying a shell-averaged estimator to the
cross-correlation function. Our measurement dominates at scales around $10$
h$^{-1}$Mpc, larger than those over which the gravitational redshift profile
has been recently measured in galaxy clusters, but smaller than scales for
which linear perturbation theory is likely to be accurate. The detection
significance varies by 0.5$\sigma$ with the details of our measurement and
tests for systematic effects. We have also devised two null tests to check for
various survey systematics and show that both results are consistent with the
null hypothesis. We measure the dipole moment of the cross-correlation
function, and from this the asymmetry is also detected, at the $2.8 \sigma$
level. The amplitude and scale-dependence of the clustering asymmetries are
approximately consistent with the expectations of General Relativity and a
biased galaxy population, within large uncertainties. We explore theoretical
predictions using numerical simulations in a companion paper.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,777 | Superconductivity at the vacancy disorder boundary in K$_x$Fe$_{2-y}$Se$_2$ | The role of phase separation in the emergence of superconductivity in alkali
metal doped iron selenides A$_{x}$Fe$_{2-y}$Se$_{2}$ (A = K, Rb, Cs) is
revisited. High energy X-ray diffraction and Monte Carlo simulation were used
to investigate the crystal structure of quenched superconducting (SC) and
as-grown non-superconducting (NSC) K$_{x}$Fe$_{2-y}$Se$_{2}$ single crystals.
The coexistence of superlattice structures with the in-plane
$\sqrt{2}\times\sqrt{2}$ K-vacancy ordering and the $\sqrt{5}\times\sqrt{5}$
Fe-vacancy ordering was observed in SC and NSC crystals alongside the
\textit{I4/mmm} Fe-vacancy-free phase. Moreover, in the SC crystal an
Fe-vacancy disordered phase is additionally present. It appears at the boundary
between the \textit{I4/mmm} vacancy free phase and the \textit{I4/m} vacancy
ordered phase ($\sqrt{5}\times\sqrt{5}$). The vacancy disordered phase is most
likely the host of superconductivity.
| 0 | 1 | 0 | 0 | 0 | 0 |