text (string, length 8–3.91k) | label (int64, 0–10)
---|---
abstract
we describe a class of systems-theory-based neural networks
called “network of recurrent neural networks” (nor),
which introduces a new structure level to rnn-related models. in nor, rnns are viewed as the high-level neurons and
are used to build the high-level layers. more specifically,
we propose several methodologies to design different nor
topologies according to the theory of system evolution. then
we carry out experiments on three different tasks to evaluate our
implementations. experimental results show our models outperform simple rnn remarkably under the same number of
parameters, and sometimes achieve even better results than
gru and lstm.
| 9 |
abstract. in [bc], the second de rham cohomology groups of nilpotent orbits in all the complex
simple lie algebras are described. in this paper we consider non-compact non-complex exceptional
lie algebras, and compute the dimensions of the second cohomology groups for most of the nilpotent
orbits. for the rest of cases of nilpotent orbits, which are not covered in the above computations,
we obtain upper bounds for the dimensions of the second cohomology groups.
| 4 |
abstract
let r be a commutative ring with identity and specs(m) denote the
set of all second submodules of an r-module m. in this paper, we construct and study a sheaf of modules, denoted by o(n, m), on specs(m)
equipped with the dual zariski topology of m, where n is an r-module.
we give a characterization of the sections of the sheaf o(n, m) in terms
of the ideal transform module. we present some interrelations between
algebraic properties of n and the sections of o(n, m). we obtain some
morphisms of sheaves induced by ring and module homomorphisms.
2010 mathematics subject classification: 13c13, 13c99, 14a15,
14a05.
keywords and phrases: second submodule, dual zariski topology,
sheaf of modules.
| 0 |
abstract
automatic multi-organ segmentation of the dual energy computed tomography (dect) data can be beneficial for biomedical research and clinical applications. however, it is a challenging task. recent advances in deep learning showed the
feasibility to use 3-d fully convolutional networks (fcn)
for voxel-wise dense predictions in single energy computed
tomography (sect). in this paper, we propose a 3d fcn-based method for automatic multi-organ segmentation in
dect. the work is based on a cascaded fcn and a general model for the major organs trained on a large set of
sect data. we preprocessed the dect data by using linear
weighting and fine-tuned the model for the dect data. the
method was evaluated using 42 torso dect data acquired
with a clinical dual-source ct system. four abdominal organs (liver, spleen, left and right kidneys) were evaluated.
cross-validation was performed, and the effect of the weighting on accuracy was investigated. in all the tests, we achieved an average
dice coefficient of 93% for the liver, 90% for the spleen, 91%
for the right kidney, and 89% for the left kidney.
the results show our method is feasible and promising.
index terms— dect, deep learning, multi-organ segmentation, u-net
1. introduction
the hounsfield unit (hu) scale value depends on the inherent tissue properties, the x-ray spectrum for scanning and the
administered contrast media [1]. in a sect image, materials having different elemental compositions can be represented by identical hu values [2]. therefore, sect has challenges such as limited material-specific information and beam
hardening as well as tissue characterization [1]. dect has
| 1 |
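a minimal sketch of the linear-weighting preprocessing step named in the abstract above: blend the two dect energy channels into one sect-like volume before applying the sect-trained model. the weight value, function name, and array shapes are assumptions, not the paper's configuration.

```python
import numpy as np

def blend_dect(low_kv: np.ndarray, high_kv: np.ndarray, w: float = 0.6) -> np.ndarray:
    """return w * low-kV + (1 - w) * high-kV, a single blended volume."""
    if low_kv.shape != high_kv.shape:
        raise ValueError("energy channels must have the same shape")
    return w * low_kv + (1.0 - w) * high_kv

# toy volumes standing in for co-registered low/high-kv scans
low = np.random.rand(4, 64, 64).astype(np.float32)
high = np.random.rand(4, 64, 64).astype(np.float32)
print(blend_dect(low, high, w=0.6).shape)   # (4, 64, 64)
```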
abstract. we bring additional support to the conjecture that a rational
cuspidal plane curve is either free or nearly free. this conjecture was confirmed
for curves of even degree, and in this note we prove it for many odd degrees. in
particular, we show that this conjecture holds for the curves of degree at most 34.
| 0 |
abstract group g with the linear group ρ(g) ⊂ gl(v ). let vaff be the affine
space corresponding to v . the group of affine transformations of vaff whose linear part
lies in g may then be written g ⋉ v (where v stands for the group of translations). here
is the main result of this paper.
main theorem. suppose that ρ satisfies the following conditions:
(i) there exists a vector v ∈ v such that:
(a) ∀l ∈ l, l(v) = v, and
(b) w̃0(v) ≠ v, where w̃0 is any representative in g of w0 ∈ ng(a)/zg(a);
then there exists a subgroup γ in the affine group g ⋉ v whose linear part is zariski-dense in g and that is free, nonabelian and acts properly discontinuously on the affine
space corresponding to v .
(note that the choice of the representative w̃0 in (i)(b) does not matter, precisely
because by (i)(a) the vector v is fixed by l = zg (a).)
remark 1.2. it is sufficient to prove the theorem in the case where ρ is irreducible.
indeed, we may decompose ρ into a direct sum of irreducible representations, and then
observe that:
• if some representation ρ1 ⊕ · · ·⊕ ρk has a vector (v1 , . . . , vk ) that satisfies conditions
(a) and (b), then at least one of the vectors vi must satisfy conditions (a) and (b);
• if v = v1 ⊕ v2 , and a subgroup γ ⊂ g ⋉ v1 acts properly on v1 , then its image i(γ)
by the canonical inclusion i : g ⋉ v1 → g ⋉ v still acts properly on v .
we shall start working with an arbitrary representation ρ, and gradually make stronger
and stronger hypotheses on it, introducing each one when we need it to make the construction work (so that it is at least partially motivated). here is the complete list of
places where new assumptions on ρ are introduced:
| 4 |
abstract
little by little, newspapers are revealing the bright future that artificial intelligence (ai) is building. intelligent machines will help everywhere. however, this
bright future has a dark side: a dramatic job market contraction before its unpredictable transformation. hence, in a near future, large numbers of job seekers
will need financial support while catching up with these novel unpredictable jobs.
this possible job market crisis has an antidote inside. in fact, the rise of ai is sustained by the biggest knowledge theft of recent years. learning ai machines are
extracting knowledge from unaware skilled or unskilled workers by analyzing their
interactions. by passionately doing their jobs, these workers are digging their own
graves.
in this paper, we propose human-in-the-loop artificial intelligence (hit-ai)
as a fairer paradigm for artificial intelligence systems. hit-ai will reward aware
and unaware knowledge producers with a different scheme: decisions of ai systems
generating revenues will repay the legitimate owners of the knowledge used for taking
those decisions. as modern robin hoods, hit-ai researchers should fight for a fairer
artificial intelligence that gives back what it steals.
| 2 |
abstract. many results are known about test ideals and f-singularities for q-gorenstein
rings. in this paper we generalize many of these results to the case when the symbolic rees
algebra o_x ⊕ o_x(−k_x) ⊕ o_x(−2k_x) ⊕ ⋯ is finitely generated (or more generally, in the
log setting for −k_x − ∆). in particular, we show that the f-jumping numbers of τ(x, a^t)
are discrete and rational. we show that test ideals τ(x) can be described by alterations
as in blickle-schwede-tucker (and hence show that splinters are strongly f -regular in this
setting – recovering a result of singh). we demonstrate that multiplier ideals reduce to
test ideals under reduction modulo p when the symbolic rees algebra is finitely generated.
we prove that hartshorne-speiser-lyubeznik-gabber type stabilization still holds. we
also show that test ideals satisfy global generation properties in this setting.
| 0 |
abstract—due to the huge availability of documents in digital
form, and the possibilities of deception tied to the nature of digital
documents and the way they are spread, the authorship
attribution problem has constantly grown in relevance. nowadays, authorship attribution, for both information retrieval and
analysis, has gained great importance in the context of security,
trust and copyright preservation.
this work proposes an innovative multi-agent driven machine
learning technique that has been developed for authorship attribution. by means of a preprocessing step for word-grouping and time-period-related analysis of the common lexicon, we determine a
bias reference level for the recurrence frequency of the words
within analysed texts, and then train a radial basis probabilistic neural
network (rbpnn)-based classifier to identify the correct author.
the main advantage of the proposed approach lies in the generality of the semantic analysis, which can be applied to different
contexts and lexical domains, without requiring any modification.
moreover, the proposed system is able to incorporate an external
input, meant to tune the classifier, and then self-adjust by means
of continuous learning reinforcement.
| 9 |
abstract—nondeterminism in scheduling is the cardinal reason
for difficulty in proving correctness of concurrent programs.
a powerful proof strategy was recently proposed [6] to show
the correctness of such programs. the approach captured dataflow dependencies among the instructions of an interleaved and
error-free execution of threads. these data-flow dependencies
were represented by an inductive data-flow graph (idfg), which,
in a nutshell, denotes a set of executions of the concurrent
program that gave rise to the discovered data-flow dependencies.
the idfgs were further transformed into alternating finite
automata (afas) in order to utilize efficient automata-theoretic
tools to solve the problem. in this paper, we give a novel and
efficient algorithm to directly construct afas that capture the
data-flow dependencies in a concurrent program execution. we
implemented the algorithm in a tool called prooftrapar to
prove the correctness of finite state cyclic programs under the
sequentially consistent memory model. our results are encouraging and compare favorably to existing state-of-the-art tools.
| 6 |
abstract. we obtain computable error bounds for generalized
cornish-fisher expansions for quantiles of statistics provided that
the computable error bounds for edgeworth-chebyshev type expansions for distributions of these statistics are known. the results
are illustrated by examples.
| 10 |
abstract
| 2 |
abstract
since its discovery, differential linear logic (dll) inspired numerous
domains. in denotational semantics, categorical models of dll are now
common, and the simplest one is rel, the category of sets and relations.
in proof theory this naturally gave birth to differential proof nets that are
full and complete for dll. in turn, these tools can naturally be translated
to their intuitionistic counterpart. by taking the co-kleisli category associated to the ! comonad, rel becomes mrel, a model of the λ-calculus that
contains a notion of differentiation. proof nets can be used naturally to
extend the λ-calculus into the λ-calculus with resources, a calculus
that contains notions of linearity and differentiation. of course mrel is
a model of the λ-calculus with resources, and it has been proved adequate,
but is it fully abstract?
that was a strong conjecture of bucciarelli, carraro, ehrhard and
manzonetto in [4]. however, in this paper we exhibit a counter-example.
moreover, to give more intuition on the essence of the counter-example
and to look for more generality, we will use an extension of the resource
λ-calculus also introduced by bucciarelli et al. in [4], the tests, for which m∞ is fully
abstract.
| 6 |
abstract. the whitney extension theorem is a classical result in analysis giving a necessary
and sufficient condition for a function defined on a closed set to be extendable to the whole
space with a given class of regularity. it has been adapted to several settings, among which
the one of carnot groups. however, the target space has generally been assumed to be equal
to rd for some d ≥ 1.
we focus here on the extendability problem for general ordered pairs (g1 , g2 ) (with g2
non-abelian). we analyze in particular the case g1 = r and characterize the groups g2 for
which the whitney extension property holds, in terms of a newly introduced notion that we
call pliability. pliability happens to be related to rigidity as defined by bryant and hsu. we
exploit this relation in order to provide examples of non-pliable carnot groups, that is, carnot
groups so that the whitney extension property does not hold. we use geometric control theory
results on the accessibility of control affine systems in order to test the pliability of a carnot
group. in particular, we recover some recent results by le donne, speight and zimmermann
about lusin approximation in carnot groups of step 2 and whitney extension in heisenberg
groups. we extend such results to all pliable carnot groups, and we show that the latter may
be of arbitrarily large step.
| 4 |
abstract
a well-known np-hard problem called the generalized traveling salesman
problem (gtsp) is considered. in gtsp the nodes of a complete undirected
graph are partitioned into clusters. the objective is to find a minimum cost
tour passing through exactly one node from each cluster.
an exact exponential time algorithm and an effective meta-heuristic algorithm for the problem are presented. the meta-heuristic proposed is a
modified ant colony system (acs) algorithm called reinforcing ant colony
system (racs) which introduces new correction rules in the acs algorithm.
computational results are reported for many standard test problems. the
proposed algorithm is competitive with other previously proposed heuristics
for the gtsp in both solution quality and computational time.
| 9 |
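the abstract above builds on the ant colony system (acs); a minimal sketch of the plain acs machinery specialized to gtsp (one node per cluster) is given below. the parameter values and toy instance are illustrative assumptions, and the racs correction rules themselves are not reproduced.

```python
import math
import random

def tour_cost(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def acs_gtsp(dist, clusters, n_ants=10, iters=100,
             beta=2.0, q0=0.9, rho=0.1, alpha=0.1, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    tau0 = 1.0 / n
    tau = [[tau0] * n for _ in range(n)]      # pheromone on edges
    best, best_cost = None, math.inf
    for _ in range(iters):
        for _ in range(n_ants):
            order = rng.sample(range(len(clusters)), len(clusters))
            tour = [rng.choice(clusters[order[0]])]
            for ci in order[1:]:
                cur, cand = tour[-1], clusters[ci]
                scores = [tau[cur][j] * (1.0 / dist[cur][j]) ** beta for j in cand]
                if rng.random() < q0:          # exploitation
                    nxt = cand[max(range(len(cand)), key=lambda k: scores[k])]
                else:                          # biased exploration (roulette wheel)
                    r, acc, nxt = rng.random() * sum(scores), 0.0, cand[-1]
                    for k, sc in enumerate(scores):
                        acc += sc
                        if acc >= r:
                            nxt = cand[k]
                            break
                # local pheromone update: evaporate toward tau0
                tau[cur][nxt] = (1 - rho) * tau[cur][nxt] + rho * tau0
                tour.append(nxt)
            c = tour_cost(tour, dist)
            if c < best_cost:
                best, best_cost = tour, c
        # global update along the best-so-far tour
        for i in range(len(best)):
            a, b = best[i], best[(i + 1) % len(best)]
            tau[a][b] = (1 - alpha) * tau[a][b] + alpha / best_cost
    return best, best_cost

# toy instance: 6 nodes in 3 clusters with random symmetric distances
rng = random.Random(1)
n = 6
dist = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        dist[i][j] = dist[j][i] = rng.uniform(1.0, 10.0)
print(acs_gtsp(dist, [[0, 1], [2, 3], [4, 5]]))
```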
abstract
| 6 |
abstract
in this paper we present a general convex optimization approach for solving high-dimensional multiple response tensor regression problems under low-dimensional structural assumptions. we consider using convex and weakly decomposable regularizers
assuming that the underlying tensor lies in an unknown low-dimensional subspace.
within our framework, we derive general risk bounds of the resulting estimate under
fairly general dependence structure among covariates. our framework leads to upper
bounds in terms of two very simple quantities, the gaussian width of a convex set in
tensor space and the intrinsic dimension of the low-dimensional tensor subspace. to
the best of our knowledge, this is the first general framework that applies to multiple response problems. these general bounds provide useful upper bounds on rates
of convergence for a number of fundamental statistical models of interest including
multi-response regression, vector auto-regressive models, low-rank tensor models and
pairwise interaction models. moreover, in many of these settings we prove that the
resulting estimates are minimax optimal. we also provide a numerical study that both
validates our theoretical guarantees and demonstrates the breadth of our framework.
| 10 |
abstract—for dynamic security assessment considering
uncertainties in grid operations, this paper proposes an approach
for time-domain simulation of a power system having stochastic
loads. the proposed approach solves a stochastic differential
equation model of the power system in a semi-analytical way using
the adomian decomposition method. the approach generates
semi-analytical solutions expressing both deterministic and
stochastic variables explicitly as symbolic variables so as to embed
stochastic processes directly into the solutions for efficient
simulation and analysis. the proposed approach is tested on the
new england 10-machine 39-bus system with different levels of
stochastic loads. the approach is also benchmarked with a
traditional stochastic simulation approach based on the euler-maruyama method. the results show that the new approach has
better time performance and comparable accuracy.
index terms—adomian decomposition method, stochastic
differential equation, stochastic load, stochastic time-domain
simulation.
| 3 |
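the benchmark named in the abstract above is the euler-maruyama method. a minimal sketch of that reference scheme on a toy ornstein-uhlenbeck load fluctuation follows; the sde and all parameters are illustrative assumptions, not the paper's power-system model.

```python
import numpy as np

def euler_maruyama(theta=1.0, mu=1.0, sigma=0.05, p0=1.0,
                   t_end=10.0, dt=1e-3, seed=0):
    """simulate dP = theta*(mu - P) dt + sigma dW on a fixed grid."""
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    p = np.empty(n + 1)
    p[0] = p0
    for k in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))            # brownian increment
        p[k + 1] = p[k] + theta * (mu - p[k]) * dt + sigma * dw
    return p

trajectory = euler_maruyama()
print(trajectory.mean(), trajectory.std())
```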
abstract
enhancing low resolution images via super-resolution
or image synthesis for cross-resolution face recognition
has been well studied. several image processing and machine learning paradigms have been explored to address this problem. in this research, we propose the synthesis via
deep sparse representation (sdsr) algorithm for synthesizing a
high resolution face image from a low resolution input image. the proposed algorithm learns multi-level sparse representation for both high and low resolution gallery images,
along with an identity aware dictionary and a transformation function between the two representations for face identification scenarios. with low resolution test data as input, the high resolution test image is synthesized using the
identity aware dictionary and transformation which is then
used for face recognition. the performance of the proposed
sdsr algorithm is evaluated on four databases, including
one real world dataset. experimental results and comparison with seven existing algorithms demonstrate the efficacy
of the proposed algorithm in terms of both face identification and image quality measures.
| 1 |
abstract. in this work we present a flexible tool for tumor progression,
which simulates the evolutionary dynamics of cancer. tumor progression
implements a multi-type branching process where the key parameters
are the fitness landscape, the mutation rate, and the average time of
cell division. the fitness of a cancer cell depends on the mutations it
has accumulated. the input to our tool can be any fitness landscape,
mutation rate, and cell division time, and the tool produces the growth
dynamics and all relevant statistics.
| 5 |
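the abstract above describes a multi-type branching process driven by a fitness landscape, a mutation rate, and a division time. a minimal discrete-generation sketch follows; the toy landscape, rates, and type structure are assumptions, not the tool's defaults.

```python
import random

def simulate(fitness, mu=0.01, n_types=3, generations=25, seed=0):
    """fitness[t] is the division probability of a type-t cell; a dividing
    cell's daughters may acquire the next driver mutation with prob. mu."""
    rng = random.Random(seed)
    pop = {0: 1}                           # type -> cell count, one founder cell
    for _ in range(generations):
        nxt = {}
        for t, count in pop.items():
            for _ in range(count):
                if rng.random() < fitness[t]:          # cell divides
                    for _ in range(2):
                        child = t
                        if child + 1 < n_types and rng.random() < mu:
                            child += 1                 # acquire next mutation
                        nxt[child] = nxt.get(child, 0) + 1
                # else: cell dies
        pop = nxt
        if not pop:
            break                                      # lineage went extinct
    return pop

print(simulate(fitness={0: 0.55, 1: 0.6, 2: 0.7}))
```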
abstract
in coding theory, gray isometries are usually defined as mappings
between finite frobenius rings, which include the ring ℤ𝑚 of integers
modulo m and the finite fields. in this paper, we derive an isometric
mapping from ℤ8 to ℤ4² from the composition of the gray isometries on
ℤ8 and on ℤ4. the image under this composition of a ℤ8-linear block
code of length n with homogeneous distance d is a (not necessarily
linear) quaternary block code of length 2n with lee distance d.
| 7 |
abstract
many video processing algorithms rely on optical flow to
register different frames within a sequence. however, a precise estimation of optical flow is often neither tractable nor
optimal for a particular task. in this paper, we propose task-oriented flow (toflow), a flow representation tailored for
specific video processing tasks. we design a neural network
with a motion estimation component and a video processing component. these two parts can be jointly trained in a
self-supervised manner to facilitate learning of the proposed
toflow. we demonstrate that toflow outperforms the traditional optical flow on three different video processing tasks:
frame interpolation, video denoising/deblocking, and video
super-resolution. we also introduce vimeo-90k, a large-scale,
high-quality video dataset for video processing to better evaluate the proposed algorithm.
| 1 |
abstract—the new type of mobile ad hoc network called vehicular ad hoc networks (vanet) has created a fertile
environment for research.
in this research, a protocol called particle swarm optimization contention-based broadcast (pcbb) is proposed for the fast and effective
dissemination of emergency messages within a geographical area. by distributing emergency messages efficiently, the protocol supports the safety
system, and this research will help the vanet system achieve its safety goals in an intelligent and efficient way.
keywords—pso; vanet; message broadcasting; emergency system; safety system.
| 9 |
abstract
in this paper, we show that there is an o(log k log² n)-competitive
randomized algorithm for the k-server problem on any metric space
with n points, which improves the previous best competitive ratio of
o(log² k log³ n log log n) by nikhil bansal et al. (focs 2011, pages 267–276).
keywords: k-server problem; online algorithm; primal-dual method;
randomized algorithm;
| 8 |
abstract—this paper considers a smart grid cyber-security
problem analyzing the vulnerabilities of electric power networks
to false data attacks. the analysis problem is related to a
constrained cardinality minimization problem. the main result
shows that an l1 relaxation technique provides an exact optimal
solution to this cardinality minimization problem. the proposed
result is based on a polyhedral combinatorics argument. it is
different from well-known results based on mutual coherence
and restricted isometry property. the results are illustrated on
benchmarks including the ieee 118-bus and 300-bus systems.
| 5 |
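the abstract above relaxes a cardinality (sparsest stealthy attack) minimization to an l1 problem. a minimal sketch of that style of relaxation with cvxpy follows; the security-index-style formulation, the random measurement matrix, and the indices are assumptions, and the paper's exact program may differ.

```python
import numpy as np
import cvxpy as cp

# find a stealthy attack a = h @ c targeting state k, minimizing ||a||_1
# as a convex surrogate for its cardinality ||a||_0.
rng = np.random.default_rng(0)
m, n, k = 20, 8, 3                      # measurements, states, targeted state
h = rng.standard_normal((m, n))         # toy data, not a power-network jacobian

c = cp.Variable(n)
problem = cp.Problem(cp.Minimize(cp.norm1(h @ c)), [c[k] == 1])
problem.solve()

attack = h @ c.value
print("nonzeros in relaxed attack:", int(np.sum(np.abs(attack) > 1e-6)))
```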
abstract— here we propose using the successor representation
(sr) to accelerate learning in a constructive knowledge system
based on general value functions (gvfs). in real-world settings
like robotics for unstructured and dynamic environments, it is
infeasible to model all meaningful aspects of a system and its
environment by hand due to both complexity and size. instead,
robots must be capable of learning and adapting to changes in
their environment and task, incrementally constructing models
from their own experience. gvfs, taken from the field of
reinforcement learning (rl), are a way of modeling the world
as predictive questions. one approach to such models proposes
a massive network of interconnected and interdependent gvfs,
which are incrementally added over time. it is reasonable
to expect that new, incrementally added predictions can be
learned more swiftly if the learning process leverages knowledge
gained from past experience. the sr provides such a means
of separating the dynamics of the world from the prediction
targets and thus capturing regularities that can be reused across
multiple gvfs. as a primary contribution of this work, we show
that using sr-based predictions can improve sample efficiency
and learning speed in a continual learning setting where new
predictions are incrementally added and learned over time. we
analyze our approach in a grid-world and then demonstrate its
potential on data from a physical robot arm.
| 2 |
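the abstract above reuses the successor representation (sr) across gvfs. a minimal tabular sketch of td-learning an sr and then evaluating a new prediction target from it; the 5-state chain environment and all hyperparameters are toy assumptions, far simpler than the paper's robot-arm setting.

```python
import numpy as np

n_states, gamma, alpha = 5, 0.9, 0.1
psi = np.zeros((n_states, n_states))   # sr matrix: psi[s] ~ discounted expected visits

rng = np.random.default_rng(0)
s = 0
for _ in range(20000):
    s_next = min(s + 1, n_states - 1) if rng.random() < 0.8 else max(s - 1, 0)
    onehot = np.eye(n_states)[s]
    psi[s] += alpha * (onehot + gamma * psi[s_next] - psi[s])   # td update
    s = s_next if s_next != n_states - 1 else 0                 # restart at the end

# a new gvf with cumulant vector r is evaluated as v = psi @ r, reusing the
# learned dynamics instead of learning the value function from scratch.
r_new = rng.standard_normal(n_states)
print(psi @ r_new)
```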
abstract
motivated by applications in declarative data analysis, we study datalog z—an extension of positive datalog with arithmetic functions over integers. this language is known to be undecidable, so
we propose two fragments. in limit datalog z predicates are axiomatised to keep minimal/maximal
numeric values, allowing us to show that fact entailment is conexptime-complete in combined,
and conp-complete in data complexity. moreover,
an additional stability requirement causes the complexity to drop to exptime and ptime, respectively. finally, we show that stable datalog z can
express many useful data analysis tasks, and so our
results provide a sound foundation for the development of advanced information systems.
| 2 |
abstract—compute and forward (cf) is a promising relaying scheme which, instead of decoding single messages or
forwarding/amplifying information at the relay, decodes linear
combinations of the simultaneously transmitted messages. the
current literature includes several coding schemes and results
on the degrees of freedom in cf, yet for systems with a fixed
number of transmitters and receivers. it is unclear, however, how
cf behaves at the limit of a large number of transmitters.
in this paper, we investigate the performance of cf in that
regime. specifically, we show that as the number of transmitters
grows, cf becomes degenerate, in the sense that a relay prefers
to decode only one (strongest) user instead of any other linear
combination of the transmitted codewords, treating the other
users as noise. moreover, the sum-rate tends to zero as well. this
makes scheduling necessary in order to maintain the superior
abilities cf provides. indeed, under scheduling, we show that
non-trivial linear combinations are chosen, and the sum-rate does
not decay, even without state information at the transmitters and
without interference alignment.
| 7 |
abstract—multi-frame image super-resolution (misr) aims
to fuse information in a low-resolution (lr) image sequence to
compose a high-resolution (hr) one, and has recently been applied extensively in many areas. unlike single image super-resolution (sisr), sub-pixel transitions between multiple
frames introduce additional information, attaching more significance to the fusion operator to alleviate the ill-posedness of
misr. for reconstruction-based approaches, the inevitable
projection of reconstruction errors from lr space to hr
space is commonly tackled by an interpolation operator,
however, crude interpolation may not fit the natural image
and generates annoying blurring artifacts, especially after the fusion operator. in this paper, we propose an end-to-end fast
upscaling technique to replace the interpolation operator,
design upscaling filters in lr space for periodic sub-locations
respectively and shuffle the filter results to derive the final
reconstruction errors in hr space. the proposed fast upscaling technique not only reduces the computational complexity of
the upscaling operation by utilizing the shuffling operation to
avoid complex operations in hr space, but also realizes superior performance with fewer blurring artifacts. extensive experimental results demonstrate the effectiveness and efficiency
of the proposed technique, whilst, combining the proposed
technique with bilateral total variation (btv) regularization,
the misr approach outperforms state-of-the-art methods.
index terms—multi-frame super-resolution, upscaling
technique, bilateral total variation, shuffling operation
| 1 |
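the abstract above replaces interpolation by lr-space filters whose outputs are shuffled into the hr grid. a minimal depth-to-space sketch of that rearrangement; the array layout is an assumption.

```python
import numpy as np

def shuffle_upscale(lr_maps: np.ndarray, r: int) -> np.ndarray:
    """lr_maps: (r*r, h, w) filter responses, one map per periodic
    sub-location; returns the (h*r, w*r) hr image so that
    out[y*r + dy, x*r + dx] = lr_maps[dy*r + dx, y, x]."""
    c, h, w = lr_maps.shape
    assert c == r * r
    return lr_maps.reshape(r, r, h, w).transpose(2, 0, 3, 1).reshape(h * r, w * r)

maps = np.arange(4 * 3 * 3, dtype=float).reshape(4, 3, 3)
print(shuffle_upscale(maps, r=2).shape)   # (6, 6)
```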
abstract. let m be a module over a commutative ring r. in this paper,
we continue our study of annihilating-submodule graph ag(m ) which was
introduced in (the zariski topology-graph of modules over commutative rings,
comm. algebra., 42 (2014), 3283–3296). ag(m) is an (undirected) graph in
which a nonzero submodule n of m is a vertex if and only if there exists
a nonzero proper submodule k of m such that n k = (0), where n k, the
product of n and k, is defined by (n : m )(k : m )m and two distinct vertices
n and k are adjacent if and only if n k = (0). we prove that if ag(m ) is a
tree, then either ag(m) is a star graph or a path of order 4, and in the latter
case m ≅ f × s, where f is a simple module and s is a module with a unique
non-trivial submodule. moreover, we prove that if m is a cyclic module with
at least three minimal prime submodules, then gr(ag(m)) = 3 and for every
cyclic module m, cl(ag(m)) ≥ |min(m)|.
| 0 |
abstract. this paper studies nonparametric series estimation and inference
for the effect of a single variable of interest x on an outcome y in the presence of potentially high-dimensional conditioning variables z. the context is
an additively separable model e[y|x, z] = g0(x) + h0(z). the model is high-dimensional in the sense that the series of approximating functions for h0(z)
can have more terms than the sample size, thereby allowing z to have potentially very many measured characteristics. the model is required to be
approximately sparse: h0 (z) can be approximated using only a small subset
of series terms whose identities are unknown. this paper proposes an estimation and inference method for g0 (x) called post-nonparametric double
selection which is a generalization of post-double selection. standard rates
of convergence and asymptotic normality for the estimator are shown to hold
uniformly over a large class of sparse data generating processes. a simulation
study illustrates finite sample estimation properties of the proposed estimator
and coverage properties of the corresponding confidence intervals. finally, an
empirical application estimating convergence in gdp in a country-level cross-section demonstrates the practical implementation of the proposed method.
key words: additive nonparametric models, high-dimensional sparse regression, inference under imperfect model selection. jel codes: c1.
| 10 |
abstract
we investigate the asymptotic distributions of coordinates of regression m-estimates
in the moderate p/n regime, where the number of covariates p grows proportionally with the sample size n. under appropriate regularity conditions, we establish the coordinate-wise asymptotic normality of regression m-estimates assuming
a fixed-design matrix. our proof is based on the second-order poincaré inequality
(chatterjee, 2009) and leave-one-out analysis (el karoui et al., 2011). some relevant
examples are indicated to show that our regularity conditions are satisfied by a broad
class of design matrices. we also show a counterexample, namely the anova-type
design, to emphasize that the technical assumptions are not just artifacts of the
proof. finally, the numerical experiments confirm and complement our theoretical
results.
| 10 |
abstract
in this paper we provide the complete classification of kleinian
groups of hausdorff dimension less than 1. in particular, we prove
that every purely loxodromic kleinian group of hausdorff dimension
< 1 is a classical schottky group. this upper bound is sharp. as an
application, the result of [4] then implies that every closed riemann
surface is uniformizable by a classical schottky group. the proof relies
on the result of hou [6], and the space of rectifiable γ-invariant closed
curves.
| 4 |
abstract
string searching consists in locating a substring in a longer text, and two strings can be
approximately equal (various similarity measures such as the hamming distance exist).
strings can be defined very broadly, and they usually contain natural language and
biological data (dna, proteins), but they can also represent other kinds of data such as
music or images.
one solution to string searching is to use online algorithms which do not preprocess
the input text; however, this is often infeasible due to the massive sizes of modern data
sets. alternatively, one can build an index, i.e. a data structure which aims to speed up
string matching queries. the indexes are divided into full-text ones which operate on
the whole input text and can answer arbitrary queries and keyword indexes which store
a dictionary of individual words. in this work, we present a literature review for both
index categories as well as our contributions (which are mostly practice-oriented).
the first contribution is the fm-bloated index, which is a modification of the well-known
fm-index (a compressed, full-text index) that trades space for speed. in our approach,
the count table and the occurrence lists store information about selected q-grams in
addition to the individual characters. two variants are described, namely one using
o(n log² n) bits of space with o(m + log m log log n) average query time, and one with
linear space and o(m log log n) average query time, where n is the input text length
and m is the pattern length. we experimentally show that a significant speedup can be
achieved by operating on q-grams (albeit at the cost of very high space requirements,
hence the name “bloated”).
in the category of keyword indexes we present the so-called split index, which can efficiently solve the k-mismatches problem, especially for 1 error. our implementation in the
c++ language is focused mostly on data compaction, which is beneficial for the search
speed (by being cache friendly). we compare our solution with other algorithms and
we show that it is faster when the hamming distance is used. query times in the order
of 1 microsecond were reported for one mismatch for a few-megabyte natural language
dictionary on a medium-end pc.
a minor contribution includes string sketches which aim to speed up approximate string
comparison at the cost of additional space (o(1) per string). they can be used in
the context of keyword indexes in order to deduce that two strings differ by at least k
mismatches with the use of fast bitwise operations rather than an explicit verification.
| 8 |
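the last contribution above, string sketches, compares small bit signatures before running a full verification. a minimal sketch under the assumption of one signature bit per character position: each mismatching character flips at most one bit, so if the signatures differ in more than k bits the hamming distance must exceed k.

```python
def sketch(s: str, bits: int = 64) -> int:
    """pack one bit per character position (here, the parity of its code)."""
    sig = 0
    for i, ch in enumerate(s[:bits]):
        sig |= (ord(ch) & 1) << i
    return sig

def may_be_within_k(a: str, b: str, k: int) -> bool:
    """false: hamming distance is certainly > k; true: verify explicitly."""
    if len(a) != len(b):
        return False
    diff = bin(sketch(a) ^ sketch(b)).count("1")   # fast bitwise comparison
    return diff <= k

# conservative filter: this pair passes even though the true distance is 3
print(may_be_within_k("karolin", "kathrin", k=1))
```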
abstract. this is an exposition on the general neron desingularization and its
applications. we end with a recent constructive form of this desingularization in
dimension one.
key words : artin approximation, neron desingularization, bass-quillen conjecture, quillen’s question, smooth morphisms, regular morphisms, smoothing ring
morphisms.
2010 mathematics subject classification: primary 1302, secondary 13b40, 13h05,
13h10, 13j05, 13j10, 13j15, 14b07, 14b12, 14b25.
| 0 |
abstract. an axiomatic characterization of buildings of type c3 due to tits is used to prove
that any cohomogeneity two polar action of type c3 on a positively curved simply connected
manifold is equivariantly diffeomorphic to a polar action on a rank one symmetric space. this
includes two actions on the cayley plane whose associated c3 type geometry is not covered by
a building.
| 4 |
abstract—this paper introduces the time synchronization attack rejection and mitigation (tsarm) technique for
time synchronization attacks (tsas) over the global positioning system (gps). the technique estimates the clock
bias and drift of the gps receiver along with the possible
attack, in contrast to previous approaches. having estimated
the time instants of the attack, the clock bias and drift of
the receiver are corrected. the proposed technique is computationally efficient and can be easily implemented in real
time, in a fashion complementary to standard algorithms
for position, velocity, and time estimation in off-the-shelf
receivers. the performance of this technique is evaluated
on a set of collected data from a real gps receiver. our
method renders excellent time recovery consistent with the
application requirements. the numerical results demonstrate that the tsarm technique outperforms competing
approaches in the literature.
index terms—global positioning system, time synchronization attack, spoofing detection
| 3 |
abstract—we show that the spectral efficiency of a direct
detection transmission system is at most 1 bit/s/hz less than the
spectral efficiency of a system employing coherent detection with
the same modulation format. correspondingly, the capacity per
complex degree of freedom in systems using direct detection is
lower by at most 1 bit.
| 7 |
abstract
a three-dimensional digital model of a representative human kidney is needed
for a surgical simulator that is capable of simulating a laparoscopic surgery
involving the kidney. buying a three-dimensional computer model of a
representative human kidney, or reconstructing a human kidney from an
image sequence using commercial software, both involve (sometimes
significant amounts of) money. in this paper, the author has shown that one can
obtain a three-dimensional surface model of a human kidney by making use of
images from the visible human data set and a few free software packages
(imagej, itk-snap, and meshlab in particular). images from the visible
human data set, and the software packages used here, cost
nothing. hence, the practice of extracting the geometry of a representative
human kidney for free, as illustrated in the present work, could be a free
alternative to the use of expensive commercial software or to the purchase of a
digital model.
keywords
visible; human; data; set; kidney; surface; model; free.
| 5 |
abstract. we show that jn , the stanley-reisner ideal of the n-cycle, has a free resolution
supported on the (n − 3)-dimensional simplicial associahedron an . this resolution is not
minimal for n ≥ 6; in this case the betti numbers of jn are strictly smaller than the f-vector
of an . we show that in fact the betti numbers βd of jn are in bijection with the number
of standard young tableaux of shape (d + 1, 2, 1^(n−d−3)). this complements the fact that
the number of (d − 1)-dimensional faces of an are given by the number of standard young
tableaux of (super)shape (d + 1, d + 1, 1^(n−d−3)); a bijective proof of this result was first
provided by stanley. an application of discrete morse theory yields a cellular resolution of
jn that we show is minimal at the first syzygy. we furthermore exhibit a simple involution
on the set of associahedron tableaux with fixed points given by the betti tableaux, suggesting
a morse matching and in particular a poset structure on these objects.
| 0 |
abstract
a direct adaptive feedforward control method for tracking repeatable runout (rro) in bit patterned media recording
(bpmr) hard disk drives (hdd) is proposed. the technique estimates the system parameters and the residual rro simultaneously and constructs a feedforward signal based on a known regressor. an improved version of the proposed algorithm to avoid
matrix inversion and reduce computation complexity is given.
results for both matlab simulation and digital signal processor (dsp) implementation are provided to verify the effectiveness
of the proposed algorithm.
| 3 |
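the abstract above estimates parameters from a known regressor while avoiding matrix inversion; recursive least squares (rls) is one standard way to do that, sketched below on a toy sinusoidal regressor. the scalar plant, noise model, and use of rls itself are assumptions, not the paper's algorithm.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """one rls update of estimate theta and inverse covariance P."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)            # gain vector, no matrix inversion
    theta = theta + k * (y - phi @ theta)    # innovation correction
    P = (P - np.outer(k, Pphi)) / lam
    return theta, P

true_theta = np.array([0.8, -0.3])
theta, P = np.zeros(2), np.eye(2) * 100.0
rng = np.random.default_rng(0)
for t in range(500):
    phi = np.array([np.sin(0.1 * t), np.cos(0.1 * t)])   # known rro regressor
    y = phi @ true_theta + 0.01 * rng.standard_normal()
    theta, P = rls_step(theta, P, phi, y)
print(theta)   # converges near [0.8, -0.3]
```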
abstract. websites today routinely combine javascript from multiple sources, both trusted and untrusted. hence, javascript security is
of paramount importance. a specific interesting problem is information
flow control (ifc) for javascript. in this paper, we develop, formalize
and implement a dynamic ifc mechanism for the javascript engine of a
production web browser (specifically, safari’s webkit engine). our ifc
mechanism works at the level of javascript bytecode and hence leverages years of industrial effort on optimizing both the source to bytecode
compiler and the bytecode interpreter. we track both explicit and implicit flows and observe only moderate overhead. working with bytecode
results in new challenges including the extensive use of unstructured
control flow in bytecode (which complicates lowering of program context
taints), unstructured exceptions (which complicate the matter further)
and the need to make ifc analysis permissive. we explain how we address these challenges, formally model the javascript bytecode semantics
and our instrumentation, prove the standard property of termination-insensitive non-interference, and present experimental results on an optimized prototype.
keywords: dynamic information flow control, javascript bytecode, taint
tracking, control flow graphs, immediate post-dominator analysis
| 6 |
abstract—this paper studies the solution of joint energy
storage (es) ownership sharing between multiple shared facility
controllers (sfcs) and those dwelling in a residential community.
the main objective is to enable the residential units (rus) to
decide on the fraction of their es capacity that they want to
share with the sfcs of the community in order to assist them
storing electricity, e.g., for fulfilling the demand of various shared
facilities. to this end, a modified auction-based mechanism is
designed that captures the interaction between the sfcs and
the rus so as to determine the auction price and the allocation
of es shared by the rus that governs the proposed joint es
ownership. the fraction of the capacity of the storage that each
ru decides to put into the market to share with the sfcs and
the auction price are determined by a noncooperative stackelberg
game formulated between the rus and the auctioneer. it is shown
that the proposed auction possesses the incentive compatibility
and the individual rationality properties, which are leveraged via
the unique stackelberg equilibrium (se) solution of the game.
numerical experiments are provided to confirm the effectiveness
of the proposed scheme.
index terms—smart grid, shared energy storage, auction
theory, stackelberg equilibrium, strategy-proof, incentive compatibility.
| 3 |
abstract. for a homogeneous polynomial with a non-zero discriminant, we
interpret direct sum decomposability of the polynomial in terms of factorization properties of the macaulay inverse system of its milnor algebra. this
leads to an if-and-only-if criterion for direct sum decomposability of such a
polynomial, and to an algorithm for computing direct sum decompositions
over any field, either of characteristic 0 or of sufficiently large positive characteristic, for which polynomial factorization algorithms exist. we also give
simple necessary criteria for direct sum decomposability of arbitrary homogeneous polynomials over arbitrary fields and apply them to prove that many
interesting classes of homogeneous polynomials are not direct sums.
| 0 |
abstract
deep learning on graphs has become a popular
research topic with many applications. however,
past work has concentrated on learning graph embedding tasks, which is in contrast with advances
in generative models for images and text. is it
possible to transfer this progress to the domain of
graphs? we propose to sidestep hurdles associated with linearization of such discrete structures
by having a decoder output a probabilistic fully-connected graph of a predefined maximum size
directly at once. our method is formulated as
a variational autoencoder. we evaluate on the
challenging task of molecule generation.
| 9 |
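the abstract above has the decoder emit a probabilistic fully-connected graph of bounded size in one shot. a minimal pytorch sketch of such a decoder head; the sizes, layer widths, and symmetrization step are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GraphDecoder(nn.Module):
    """map a latent vector to edge and node existence probabilities
    for a graph with at most n_max nodes, all at once."""
    def __init__(self, latent_dim=32, n_max=9, hidden=128):
        super().__init__()
        self.n_max = n_max
        self.body = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU())
        self.edge_head = nn.Linear(hidden, n_max * n_max)   # adjacency logits
        self.node_head = nn.Linear(hidden, n_max)           # node-existence logits

    def forward(self, z):
        h = self.body(z)
        adj = torch.sigmoid(self.edge_head(h)).view(-1, self.n_max, self.n_max)
        adj = 0.5 * (adj + adj.transpose(1, 2))             # make it undirected
        return adj, torch.sigmoid(self.node_head(h))

dec = GraphDecoder()
adj, nodes = dec(torch.randn(4, 32))
print(adj.shape, nodes.shape)   # (4, 9, 9) and (4, 9)
```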
abstract. a single qubit may be represented on the bloch sphere or
similarly on the 3-sphere s³. our goal is to dress this correspondence by
converting the language of universal quantum computing (uqc) to that
of 3-manifolds. a magic state and the pauli group acting on it define a
model of uqc as a povm that one recognizes to be a 3-manifold m³.
e.g., the d-dimensional povms defined from subgroups of finite index of
the modular group psl(2, z) correspond to d-fold m³-coverings over
the trefoil knot. in this paper, one also investigates quantum information
on a few ‘universal’ knots and links such as the figure-of-eight knot,
the whitehead link and the borromean rings, making use of the catalog
of platonic manifolds available in snappy [4]. further connections
between povm-based uqc and m³'s obtained from dehn fillings are
explored.
pacs: 03.67.lx, 03.65.wj, 03.65.aa, 02.20.-a, 02.10.kn, 02.40.pc, 02.40.sf
msc codes: 81p68, 81p50, 57m25, 57r65, 14h30, 20e05, 57m12
keywords: quantum computation, ic-povms, knot theory, three-manifolds, branch
coverings, dehn surgeries.
| 4 |
abstract
we present an accurate and efficient discretization approach for
the adaptive discretization of typical model equations employed in
numerical weather prediction. a semi-lagrangian approach is combined with the tr-bdf2 semi-implicit time discretization method
and with a spatial discretization based on adaptive discontinuous finite elements. the resulting method has full second order accuracy
in time and can employ polynomial bases of arbitrarily high degree in
space, is unconditionally stable and can effectively adapt the number
of degrees of freedom employed in each element, in order to balance accuracy and computational cost. the p-adaptivity approach employed
does not require remeshing, therefore it is especially suitable for applications, such as numerical weather prediction, in which a large number
of physical quantities are associated with a given mesh. furthermore,
although the proposed method can be implemented on arbitrary unstructured and nonconforming meshes, even its application on simple
cartesian meshes in spherical coordinates can cure effectively the pole
problem by reducing the polynomial degree used in the polar elements.
numerical simulations of classical benchmarks for the shallow water
and for the fully compressible euler equations validate the method
and demonstrate its capability to achieve accurate results also at large
courant numbers, with time steps up to 100 times larger than those
of typical explicit discretizations of the same problems, while reducing
the computational cost thanks to the adaptivity algorithm.
| 5 |
abstract
urban rail transit often operates with high service frequencies to serve heavy passenger demand
during rush hours. such operations can be delayed by train congestion, passenger congestion, and
the interaction of the two. delays are problematic for many transit systems, as they become
amplified by this interactive feedback. however, there are no tractable models to describe
transit systems with dynamical delays, making it difficult to analyze the management strategies of
congested transit systems in general, solvable ways. to fill this gap, this article proposes simple yet
physical and dynamic models of urban rail transit. first, a fundamental diagram of a transit system
(3-dimensional relation among train-flow, train-density, and passenger-flow) is analytically derived
by considering the physical interactions in delays and congestion based on microscopic operation
principles. then, a macroscopic model of a transit system with time-varying demand and supply is
developed as a continuous approximation based on the fundamental diagram. finally, the accuracy
of the macroscopic model is investigated using a microscopic simulation, and the applicable range of
the model is confirmed.
| 3 |
abstract. in this paper, we formulate an analogue of waring’s problem for an algebraic group g. at the field level we consider a morphism
of varieties f : a1 → g and ask whether every element of g(k) is the
product of a bounded number of elements f (a1 (k)) = f (k). we give
an affirmative answer when g is unipotent and k is a characteristic zero
field which is not formally real.
the idea is the same at the integral level, except one must work with
schemes, and the question is whether every element in a finite index
subgroup of g(o) can be written as a product of a bounded number of
elements of f (o). we prove this is the case when g is unipotent and o
is the ring of integers of a totally imaginary number field.
| 4 |
abstract
we study the following multiagent variant of the knapsack problem. we are given a
set of items, a set of voters, and a value of the budget; each item is endowed with a cost
and each voter assigns to each item a certain value. the goal is to select a subset of items
with the total cost not exceeding the budget, in a way that is consistent with the voters’
preferences. since the preferences of the voters over the items can vary significantly,
we need a way of aggregating these preferences, in order to select the socially most
preferred valid knapsack. we study three approaches to aggregating voters' preferences,
which are motivated by the literature on multiwinner elections and fair allocation. this
way we introduce the concepts of individually best, diverse, and fair knapsack. we study
computational complexity (including parameterized complexity, and complexity under
restricted domains) of computing the aforementioned concepts of multiagent knapsacks.
| 8 |
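the abstract above aggregates voter values before optimizing. a sketch of a utilitarian aggregation ('individually best'-flavored) followed by a standard 0/1 knapsack dp is given below; the diverse and fair variants use other aggregations, and the toy instance is an assumption.

```python
def utilitarian_knapsack(costs, voter_values, budget):
    """aggregate each item's value as the sum over voters, then solve
    the 0/1 knapsack by dp and backtrack the chosen items."""
    n = len(costs)
    value = [sum(v[i] for v in voter_values) for i in range(n)]
    best = [0] * (budget + 1)
    pick = [[False] * (budget + 1) for _ in range(n)]
    for i in range(n):
        for b in range(budget, costs[i] - 1, -1):
            if best[b - costs[i]] + value[i] > best[b]:
                best[b] = best[b - costs[i]] + value[i]
                pick[i][b] = True
    chosen, b = [], budget
    for i in range(n - 1, -1, -1):
        if pick[i][b]:
            chosen.append(i)
            b -= costs[i]
    return best[budget], sorted(chosen)

costs = [3, 2, 2]
voters = [[5, 1, 0], [4, 2, 3]]     # voters[v][i]: voter v's value for item i
print(utilitarian_knapsack(costs, voters, budget=4))   # (9, [0])
```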
abstract. we describe triples and systems, expounded as an axiomatic algebraic umbrella theory for
classical algebra, tropical algebra, hyperfields, and fuzzy rings.
| 0 |
abstract
in this paper, random forests are proposed for the diagnostics of operating devices in the presence of a variable number of features. in various
contexts, like large or difficult-to-access monitored areas, wired sensor
networks providing features to achieve diagnostics are either very costly
to use or totally impossible to spread out. using a wireless sensor network
can solve this problem, but the latter is more subject to faults. furthermore, the network's topology often changes, leading to variability
in quality of coverage in the targeted area. diagnostics at the sink level
must take into consideration that both the number and the quality of the
provided features are not constant, and that some policies like scheduling
or data aggregation may be developed across the network. the aim of
this article is (1) to show that random forests are relevant in this context,
due to their flexibility and robustness, and (2) to provide first examples
of use of this method for diagnostics based on data provided by a wireless
sensor network.
| 2 |
abstract
deep convolutional neural networks (cnns) are more powerful
than deep neural networks (dnns), as they are able to better reduce
spectral variation in the input signal. this has also been confirmed
experimentally, with cnns showing improvements in word error
rate (wer) of 4-12% relative compared to dnns across a variety of lvcsr tasks. in this paper, we describe different methods
to further improve cnn performance. first, we conduct a deep analysis comparing limited weight sharing and full weight sharing with
state-of-the-art features. second, we apply various pooling strategies that have shown improvements in computer vision to an lvcsr
speech task. third, we introduce a method to effectively incorporate
speaker adaptation, namely fmllr, into log-mel features. fourth,
we introduce an effective strategy to use dropout during hessian-free
sequence training. we find that with these improvements, particularly with fmllr and dropout, we are able to achieve an additional
2-3% relative improvement in wer on a 50-hour broadcast news
task over our previous best cnn baseline. on a larger 400-hour
bn task, we find an additional 4-5% relative improvement over our
previous best cnn baseline.
1. introduction
deep neural networks (dnns) are now the state-of-the-art in acoustic modeling for speech recognition, showing tremendous improvements on the order of 10-30% relative across a variety of small and
large vocabulary tasks [1]. recently, deep convolutional neural networks (cnns) [2, 3] have been explored as an alternative type of
neural network which can reduce translational variance in the input
signal. for example, in [4], deep cnns were shown to offer a 4-12%
relative improvement over dnns across different lvcsr tasks. the
cnn architecture proposed in [4] was a somewhat vanilla architecture that had been used in computer vision for many years. the goal
of this paper is to analyze and justify what is an appropriate cnn architecture for speech, and to investigate various strategies to improve
cnn results further.
first, the architecture proposed in [4] used multiple convolutional layers with full weight sharing (fws), which was found to be
beneficial compared to a single fws convolutional layer. because
the locality of speech is known ahead of time, [3] proposed the use
of limited weight sharing (lws) for cnns in speech. while lws
has the benefit that it allows each local weight to focus on parts of
the signal which are most confusable, previous work with lws had
just focused on a single lws layer [3], [5]. in this work, we do a
detailed analysis and compare multiple layers of fws and lws.
| 9 |
abstract— this paper presents a practical approach for
identifying unknown mechanical parameters, such as mass
and friction models of manipulated rigid objects or actuated
robotic links, in a succinct manner that aims to improve the
performance of policy search algorithms. key features of this
approach are the use of off-the-shelf physics engines and the
adaptation of a black-box bayesian optimization framework
for this purpose. the physics engine is used to reproduce in
simulation experiments that are performed on a real robot,
and the mechanical parameters of the simulated system are
automatically fine-tuned so that the simulated trajectories
match the real ones. the optimized model is then used for
learning a policy in simulation, before safely deploying it on the
real robot. given the well-known limitations of physics engines
in modeling real-world objects, it is generally not possible to
find a mechanical model that reproduces in simulation the real
trajectories exactly. moreover, there are many scenarios where
a near-optimal policy can be found without having a perfect
knowledge of the system. therefore, searching for a perfect
model may not be worth the computational effort in practice.
the proposed approach then aims to identify a model that
is good enough to approximate the value of a locally optimal
policy with a certain confidence, instead of spending all the
computational resources on searching for the most accurate
model. empirical evaluations, performed in simulation and on
a real robotic manipulation task, show that model identification
via physics engines can significantly boost the performance of
policy search algorithms that are popular in robotics, such as
trpo, power and pilco, with no additional real-world data.
| 2 |
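the abstract above tunes simulator parameters so that simulated trajectories match real ones. a minimal sketch of that identification loop with a stand-in point-mass "engine" and plain random search instead of bayesian optimization; every numeric detail here is an assumption, not the paper's setup.

```python
import numpy as np

def simulate(mass, friction, steps=50, dt=0.05, force=1.0):
    """toy point-mass rollout standing in for a physics engine."""
    x, v, traj = 0.0, 0.0, []
    for _ in range(steps):
        a = (force - friction * v) / mass
        v += a * dt
        x += v * dt
        traj.append(x)
    return np.array(traj)

real_traj = simulate(mass=2.0, friction=0.5)   # stands in for robot data

rng = np.random.default_rng(0)
best, best_err = None, np.inf
for _ in range(2000):
    m, mu = rng.uniform(0.1, 5.0), rng.uniform(0.0, 2.0)   # propose parameters
    err = np.mean((simulate(m, mu) - real_traj) ** 2)      # trajectory mismatch
    if err < best_err:
        best, best_err = (m, mu), err
print(best, best_err)   # recovers parameters near (2.0, 0.5)
```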
abstract
prior to the financial crisis mortgage securitization models increased in sophistication as did products
built to insure against losses. layers of complexity formed upon a foundation that could not support
it, and as the foundation crumbled, the housing market followed. that foundation was the gaussian
copula which failed to correctly model failure-time correlations of derivative securities in duress. in
retirement, surveys suggest the greatest fear is running out of money, and as retirement decumulation
models become increasingly sophisticated, large financial firms and robo-advisors may guarantee
their success. similar to an investment bank failure the event of retirement ruin is driven by outliers
and correlations in times of stress. it would be desirable to have a foundation able to support the
increased complexity before it forms; however, the industry currently relies upon similar gaussian (or
lognormal) dependence structures. we propose a multivariate density model having fixed marginals
that is tractable and fits data which are skewed, heavy-tailed, multimodal, i.e., of arbitrary complexity
allowing for a rich correlation structure. it is also ideal for stress-testing a retirement plan by fitting
historical data seeded with black swan events. a preliminary section reviews all concepts before they
are used and fully documented c/c++ source code is attached making the research self-contained.
lastly, we take the opportunity to challenge existing retirement finance dogma and also review some
recent criticisms of retirement ruin probabilities and their suggested replacement metrics.
table of contents
introduction
i. literature review
ii. preliminaries
iii. univariate density modeling
iv. multivariate density modeling w/out covariances
v. multivariate density modeling w/covariances
vi. expense-adjusted real compounding return on a diversified portfolio
vii. retirement portfolio optimization
viii. conclusion
references
data sources/retirement surveys
ix. appendix with source code
keywords: variance components, em algorithm, ecme algorithm, maximum likelihood, pdf,
cdf, information criteria, finite mixture model, constrained optimization, retirement decumulation,
probability of ruin, static/dynamic glidepaths, financial crisis
contact: [email protected]
| 5 |
abstract
we construct new examples of cat(0) groups containing non-finitely
presented subgroups that are of type fp₂; these cat(0) groups do not
contain copies of ℤ³. we also give a construction of groups which are of
type fₙ but not fₙ₊₁ with no free abelian subgroups of rank greater than
⌈n/3⌉.
| 4 |
abstract— in this paper, we propose an automated computer
platform for the purpose of classifying electroencephalography
(eeg) signals associated with left and right hand movements
using a hybrid system that uses advanced feature extraction
techniques and machine learning algorithms. it is known that
eeg represents the brain activity by the electrical voltage
fluctuations along the scalp, and brain-computer interface (bci)
is a device that enables the use of the brain’s neural activity to
communicate with others or to control machines, artificial limbs,
or robots without direct physical movements. in our research
work, we aspired to find the best feature extraction method that
enables the differentiation between left and right executed fist
movements through various classification algorithms. the eeg
dataset used in this research was created and contributed to
physionet by the developers of the bci2000 instrumentation
system. data was preprocessed using the eeglab matlab
toolbox and artifacts removal was done using aar. data was
epoched on the basis of event-related (de)synchronization
(erd/ers) and movement-related cortical potentials (mrcp)
features. mu/beta rhythms were isolated for the erd/ers
analysis and delta rhythms were isolated for the mrcp analysis.
the independent component analysis (ica) spatial filter was
applied on related channels for noise reduction and isolation of
both artifactually and neurally generated eeg sources. the
final feature vector included the erd, ers, and mrcp features
in addition to the mean, power and energy of the activations of
the resulting independent components (ics) of the epoched
feature datasets. the datasets were input into two machine-learning algorithms: neural networks (nns) and support vector
machines (svms). intensive experiments were carried out and
optimum classification performances of 89.8% and 97.1% were
obtained using nn and svm, respectively. this research shows
that this method of feature extraction holds some promise for the
classification of various pairs of motor movements, which can be
used in a bci context to mentally control a computer or machine.
keywords—eeg; bci; ica; mrcp; erd/ers; machine
learning; nn; svm
| 9 |
abstract
we prove a new and general concentration inequality for the excess risk in least-squares regression
with random design and heteroscedastic noise. no specific structure is required on the model, except the
existence of a suitable function that controls the local suprema of the empirical process. so far, only the
case of linear contrast estimation was tackled in the literature with this level of generality on the model.
we solve here the case of a quadratic contrast, by separating the behavior of a linearized empirical process
and the empirical process driven by the squares of functions of models.
keywords: regression, least-squares, excess risk, empirical process, concentration inequality, margin
relation.
ams2000 : 62g08, 62j02, 60e15.
| 10 |
abstract. let x be a building, identified with its davis realisation. in this
paper, we provide for each x ∈ x and each η in the visual boundary ∂x of
x a description of the geodesic ray bundle geo(x, η), namely, of the union of
all combinatorial geodesic rays (corresponding to infinite minimal galleries in
the chamber graph of x) starting from x and pointing towards η. when x is
locally finite and hyperbolic, we show that the symmetric difference between
geo(x, η) and geo(y, η) is always finite, for x, y ∈ x and η ∈ ∂x. this gives
a positive answer to a question of huang, sabok and shinko in the setting of
buildings. combining their results with a construction of bourdon, we obtain
examples of hyperbolic groups g with kazhdan’s property (t) such that the
g-action on its gromov boundary is hyperfinite.
| 4 |
abstract. a feature-oriented product line is a family of programs that share a
common set of features. a feature implements a stakeholder’s requirement, represents a design decision and configuration option and, when added to a program,
involves the introduction of new structures, such as classes and methods, and the
refinement of existing ones, such as extending methods. with feature-oriented
decomposition, programs can be generated, solely on the basis of a user’s selection of features, by the composition of the corresponding feature code. a key
challenge of feature-oriented product line engineering is how to guarantee the
correctness of an entire feature-oriented product line, i.e., of all of the member
programs generated from different combinations of features. as the number of
valid feature combinations grows progressively with the number of features, it
is not feasible to check all individual programs. the only feasible approach is
to have a type system check the entire code base of the feature-oriented product
line. we have developed such a type system on the basis of a formal model of a
feature-oriented java-like language. we demonstrate that the type system ensures
that every valid program of a feature-oriented product line is well-typed and that
the type system is complete.
| 6 |
abstract. a multigraph is a nonsimple graph which is permitted to have
multiple edges, that is, edges that have the same end nodes. we introduce
the concept of spanning simplicial complexes ∆s (g) of multigraphs g, which
provides a generalization of spanning simplicial complexes of associated
simple graphs. we first give the characterization of all spanning trees of a
uni-cyclic multigraph u^r_{n,m} with n edges including r multiple edges within
and outside the cycle of length m. then, we determine the facet ideal
i_f(∆s (u^r_{n,m})) of the spanning simplicial complex ∆s (u^r_{n,m}) and its primary
decomposition. the euler characteristic is a well-known topological and
homotopic invariant used to classify surfaces. finally, we devise a formula for
the euler characteristic of the spanning simplicial complex ∆s (u^r_{n,m}).
key words: multigraph, spanning simplicial complex, euler characteristic.
2010 mathematics subject classification: primary 05e25, 55u10, 13p10,
secondary 06a11, 13h10.
| 0 |
abstract. it has been conjectured by eisenbud, green and harris that if i is
a homogeneous ideal in k[x1 , . . . , xn ] containing a regular sequence f1 , . . . , fn
of degrees deg(fi ) = ai , where 2 ≤ a1 ≤ ⋯ ≤ an , then there is a homogeneous
ideal j containing x1^{a1} , . . . , xn^{an} with the same hilbert function. in this
paper we prove the eisenbud-green-harris conjecture when fi splits into linear
factors for all i.
| 0 |
abstract. a group is tubular if it acts on a tree with z2 vertex stabilizers and
z edge stabilizers. we prove that a tubular group is virtually special if and only
if it acts freely on a locally finite cat(0) cube complex. furthermore, we prove
that if a tubular group acts freely on a finite dimensional cat(0) cube complex,
then it virtually acts freely on a three dimensional cat(0) cube complex.
| 4 |
abstract—successful fine-grained image classification methods
learn subtle details between visually similar (sub-)classes, but
the problem becomes significantly more challenging if the details
are missing due to low resolution. encouraged by the recent
success of convolutional neural network (cnn) architectures
in image classification, we propose a novel resolution-aware
deep model which combines convolutional image super-resolution
and convolutional fine-grained classification into a single model
in an end-to-end manner. extensive experiments on multiple
benchmarks demonstrate that the proposed model consistently
performs better than conventional convolutional networks on
classifying fine-grained object classes in low-resolution images.
index terms—fine-grained image classification, super-resolution convolutional neural networks, deep learning
| 1 |
abstract
this paper explains the genetic algorithm for novices in the field. the basic philosophy of the genetic
algorithm and its flowchart are described. a step-by-step numerical computation of the genetic
algorithm for solving a simple mathematical equality problem is briefly explained.
| 9 |
abstract. we construct 2-generator non-hopfian groups gm , m = 3, 4, 5, . . . ,
where each gm has a specific presentation gm = ⟨a, b | u_{rm,0} = u_{rm,1} = u_{rm,2} =
· · · = 1⟩ which satisfies the small cancellation conditions c(4) and t(4). here, u_{rm,i}
is the single relator of the upper presentation of the 2-bridge link group of slope
rm,i , where rm,0 = [m + 1, m, m] and rm,i = [m + 1, m − 1, (i − 1)⟨m⟩, m + 1, m]
in continued fraction expansion for every integer i ≥ 1.
| 4 |
abstract—we propose an energy-efficient procedure for
transponder configuration in fmf-based elastic optical networks
in which quality of service and physical constraints are guaranteed and joint optimization of transmit optical power and temporal,
spatial and spectral variables is addressed. we use geometric
convexification techniques to provide convex representations for
quality of service, transponder power consumption and transponder configuration problem. simulation results show that our
convex formulation is considerably faster than its mixed-integer
nonlinear counterpart and its ability to optimize transmit optical
power reduces total transponder power consumption up to 32%.
we also analyze the effect of mode coupling and number of
available modes on power consumption of different network
elements.
keywords—convex optimization, green communication, elastic
optical networks, few-mode fibers, mode coupling.
| 7 |
abstract—in this paper, a new video classification
methodology is proposed which can be applied in both first and
third person videos. the main idea behind the proposed
strategy is to capture complementary information of
appearance and motion efficiently by running two
independent streams on the videos. the first stream aims to
capture long-term motions from shorter ones by keeping track
of how elements in optical flow images have changed over time.
optical flow images are described by pre-trained networks
that have been trained on large scale image datasets. a set of
multi-channel time series are obtained by aligning descriptions
beside each other. for extracting motion features from these
time series, the pot representation method plus a novel pooling
operator is adopted owing to several advantages. the second
stream extracts appearance features, which
are vital for video classification. the proposed
method has been evaluated on both first- and third-person
datasets, and the results show that the proposed methodology
reaches state-of-the-art performance.
| 1 |
abstract—assignment of critical missions to unmanned aerial
vehicles (uav) is bound to widen the grounds for adversarial intentions in the cyber domain, potentially ranging from
disruption of command and control links to capture and use
of airborne nodes for kinetic attacks. ensuring the security
of electronics and communications in multi-uav systems is of
paramount importance for their safe and reliable integration
with military and civilian airspaces. over the past decade, this
active field of research has produced many notable studies and
novel proposals for attacks and mitigation techniques in uav
networks. yet, the generic modeling of such networks as typical
manets and isolated systems has left various vulnerabilities out
of the investigative focus of the research community. this paper
aims to emphasize some of the critical challenges in securing
uav networks against attacks targeting vulnerabilities specific to
such systems and their cyber-physical aspects.
index terms—uav, cyber-physical security, vulnerabilities
| 3 |
abstract
| 7 |
abstract
the triad census is an important approach to understand local structure in network
science, providing comprehensive assessments of the observed relational configurations between triples of actors in a network. however, researchers are often interested
in combinations of relational and categorical nodal attributes. in this case, it is desirable to account for the label, or color, of the nodes in the triad census. in this paper,
we describe an efficient algorithm for constructing the colored triad census, based, in
part, on existing methods for the classic triad census. we evaluate the performance
of the algorithm using empirical and simulated data for both undirected and directed
graphs. the results of the simulation demonstrate that the proposed algorithm reduces computational time by approximately 17,400% over the naïve approach. we
also apply the colored triad census to the zachary karate club network dataset. we
simultaneously show the efficiency of the algorithm, and a way to conduct a statistical test on the census by forming a null distribution from 1,000 realizations of a
mixing-matrix conditioned graph and comparing the observed colored triad counts
to the expected. from this, we demonstrate the method’s utility in our discussion
of results about homophily, heterophily, and bridging, simultaneously gained via the
colored triad census. in sum, the proposed algorithm for the colored triad census
brings novel utility to social network analysis in an efficient package.
keywords: triad census, labeled graphs, simulation
1. introduction
the triad census is an important approach towards understanding local network
structure. [?] first presented the 16 isomorphism classes of structurally unique triads
| 8 |
abstract
we propose several sampling architectures for the efficient acquisition of an ensemble of correlated
signals. we show that without prior knowledge of the correlation structure, each of our architectures
(under different sets of assumptions) can acquire the ensemble at a sub-nyquist rate. prior to sampling,
the analog signals are diversified using simple, implementable components. the diversification is achieved
by injecting types of “structured randomness” into the ensemble, the result of which is subsampled.
for reconstruction, the ensemble is modeled as a low-rank matrix that we have observed through an
(undetermined) set of linear equations. our main results show that this matrix can be recovered using
a convex program when the total number of samples is on the order of the intrinsic degree of freedom of
the ensemble — the more heavily correlated the ensemble, the fewer samples are needed.
to motivate this study, we discuss how such ensembles arise in the context of array processing.
| 7 |
abstract. the extraction of fibers from dmri data typically produces a
large number of fibers, so it is common to group fibers into bundles. to this
end, many specialized distance measures, such as mcp, have been used
for fiber similarity. however, these distance based approaches require
point-wise correspondence and focus only on the geometry of the fibers.
recent publications have highlighted that using microstructure measures
along fibers improves tractography analysis. also, many neurodegenerative diseases impacting white matter require the study of microstructure
measures as well as the white matter geometry. motivated by these, we
propose to use a novel computational model for fibers, called functional
varifolds, characterized by a metric that considers both the geometry
and microstructure measure (e.g. gfa) along the fiber pathway. we use
it to cluster fibers with a dictionary learning and sparse coding-based
framework, and present a preliminary analysis using hcp data.
| 1 |
abstract
the slower is faster (sif) effect occurs when a system performs worse as its components try to do better. thus, a moderate individual efficiency actually leads to a
better systemic performance. the sif effect takes place in a variety of phenomena.
we review studies and examples of the sif effect in pedestrian dynamics, vehicle traffic, traffic light control, logistics, public transport, social dynamics, ecological systems,
and adaptation. drawing on these examples, we generalize common features of the sif
effect and suggest possible future lines of research.
| 9 |
abstract
let x be a negatively curved symmetric space and γ a non-cocompact lattice in isom(x). we show that
small, parabolic-preserving deformations of γ into the isometry group of any negatively curved symmetric
space containing x remain discrete and faithful (the cocompact case is due to guichard). this applies
in particular to a version of johnson-millson bending deformations, providing for all n infinitely many non-cocompact lattices in so(n, 1) which admit discrete and faithful deformations into su(n, 1). we also produce
deformations of the figure-8 knot group into su(3, 1), not of bending type, to which the result applies.
| 4 |
abstract feature, implying that our results may generalize to feature
selectivity, we do not examine feature selectivity in this work.
| 2 |
abstract
the goal of this work is to extend the standard persistent homology pipeline for
exploratory data analysis to the 2-d persistence setting, in a practical, computationally
efficient way. to this end, we introduce rivet, a software tool for the visualization of
2-d persistence modules, and present mathematical foundations for this tool. rivet
provides an interactive visualization of the barcodes of 1-d affine slices of a 2-d persistence module m . it also computes and visualizes the dimension of each vector space in
m and the bigraded betti numbers of m . at the heart of our computational approach
is a novel data structure based on planar line arrangements, on which we can perform
fast queries to find the barcode of any slice of m . we present an efficient algorithm
for constructing this data structure and establish bounds on its complexity.
| 0 |
abstract
in this paper, a new approach to solving the cubic b-spline curve fitting problem is
presented, based on a meta-heuristic algorithm called “dolphin echolocation”. the
method minimizes the proximity error of the selected nodes, measured
using the least-squares method and the euclidean distance, for the new curve
generated by reverse engineering. the results of the proposed method are
compared with those of the genetic algorithm; the new method appears to be
successful.
keywords: b-spline curve approximation, cubic b-spline, data parameterization on b-spline, dolphin
echolocation algorithm, knot adjustment
| 9 |
abstract
in the classic integer programming (ip) problem, the objective is to decide whether, for a
given m × n matrix a and an m-vector b = (b1 , . . . , bm ), there is a non-negative integer n-vector
x such that ax = b. solving (ip) is an important step in numerous algorithms and it is important
to obtain an understanding of the precise complexity of this problem as a function of natural
parameters of the input.
two significant results in this line of research are the pseudo-polynomial time algorithms
for (ip) when the number of constraints is a constant [papadimitriou, j. acm 1981] and when
the branch-width of the column-matroid corresponding to the constraint matrix is a constant
[cunningham and geelen, ipco 2007]. in this paper, we prove matching upper and lower bounds
for (ip) when the path-width of the corresponding column-matroid is a constant. these lower
bounds provide evidence that the algorithm of cunningham and geelen is probably optimal.
we also obtain a separate lower bound providing evidence that the algorithm of papadimitriou
is close to optimal.
| 8 |
abstract. we prove that if γ is a lattice in the group of isometries of a symmetric space of non-compact type without euclidean
factors, then the virtual cohomological dimension of γ equals its
proper geometric dimension.
| 4 |
abstract. we introduce the fractal expansions, sequences of integers associated to a number. these
can be used to characterize the o-sequences. we generalize them by introducing numerical functions
called fractal functions. we classify the hilbert functions of bigraded algebras by using fractal functions.
| 0 |
abstract in this paper we introduce and analyse langevin samplers that consist of perturbations
of the standard underdamped langevin dynamics. the perturbed dynamics is such that its invariant
measure is the same as that of the unperturbed dynamics. we show that appropriate choices of the
perturbations can lead to samplers that have improved properties, at least in terms of reducing the
asymptotic variance. we present a detailed analysis of the new langevin sampler for gaussian target
distributions. our theoretical results are supported by numerical experiments with non-gaussian target
measures.
| 10 |
abstract—for homeland and transportation security applications, 2d x-ray explosive detection system (eds) have been
widely used, but they have limitations in recognizing 3d shape
of the hidden objects. among various types of 3d computed
tomography (ct) systems to address this issue, this paper is
interested in a stationary ct using fixed x-ray sources and
detectors. however, due to the limited number of projection
views, analytic reconstruction algorithms produce severe streaking artifacts. inspired by recent success of deep learning approach
for sparse view ct reconstruction, here we propose a novel image
and sinogram domain deep learning architecture for 3d reconstruction from very sparse view measurement. the algorithm
has been tested with the real data from a prototype 9-view dual
energy stationary ct eds carry-on baggage scanner developed
by gemss medical systems, korea, which confirms the superior
reconstruction performance over the existing approaches.
| 2 |
abstract
we consider the problem of nonparametric estimation of the drift of a continuously observed one-dimensional diffusion with periodic drift. motivated by computational considerations, van der meulen et al. (2014) defined a prior on the drift as a randomly truncated
and randomly scaled faber-schauder series expansion with gaussian coefficients. we study
the behaviour of the posterior obtained from this prior from a frequentist asymptotic point
of view. if the true data generating drift is smooth, it is proved that the posterior is adaptive
with posterior contraction rates for the l 2 -norm that are optimal up to a log factor. contraction rates in l p -norms with p ∈ (2, ∞] are derived as well.
| 10 |
abstract
in the standard setting of approachability there are two players and a target set. the
players play repeatedly a known vector-valued game where the first player wants to have
the average vector-valued payoff converge to the target set which the other player tries to
exclude it from this set. we revisit this setting in the spirit of online learning and do not
assume that the first player knows the game structure: she receives an arbitrary vectorvalued reward vector at every round. she wishes to approach the smallest (“best”) possible
set given the observed average payoffs in hindsight. this extension of the standard setting
has implications even when the original target set is not approachable and when it is not
obvious which expansion of it should be approached instead. we show that it is impossible,
in general, to approach the best target set in hindsight and propose achievable though
ambitious alternative goals. we further propose a concrete strategy to approach these goals.
our method does not require projection onto a target set and amounts to switching between
scalar regret minimization algorithms that are performed in episodes. applications to global
cost minimization and to approachability under sample path constraints are considered.
keywords: approachability, online learning, multi-objective optimization
| 10 |
abstract
we propose expected policy gradients (epg), which unify stochastic policy gradients (spg)
and deterministic policy gradients (dpg) for reinforcement learning. inspired by expected
sarsa, epg integrates (or sums) across actions when estimating the gradient, instead of
relying only on the action in the sampled trajectory. for continuous action spaces, we first
derive a practical result for gaussian policies and quadric critics and then extend it to
an analytical method for the universal case, covering a broad class of actors and critics,
including gaussian, exponential families, and reparameterised policies with bounded support.
for gaussian policies, we show that it is optimal to explore using covariance proportional
to e^h , where h is the scaled hessian of the critic with respect to the actions. epg also
provides a general framework for reasoning about policy gradient methods, which we use to
establish a new general policy gradient theorem, of which the stochastic and deterministic
policy gradient theorems are special cases. furthermore, we prove that epg reduces the
variance of the gradient estimates without requiring deterministic policies and with little
computational overhead. finally, we show that epg outperforms existing approaches on
six challenging domains involving the simulated control of physical systems.
keywords: policy gradients, exploration, bounded actions, reinforcement learning, markov
decision process (mdp)
| 2 |
abstract
variational autoencoders (vaes) learn representations of data by jointly training a probabilistic
encoder and decoder network. typically these models encode all features of the data into a
single variable. here we are interested in learning disentangled representations that encode
distinct aspects of the data into separate variables. we propose to learn such representations
using model architectures that generalise from standard vaes, employing a general graphical
model structure in the encoder and decoder. this allows us to train partially-specified models
that make relatively strong assumptions about a subset of interpretable variables and rely on
the flexibility of neural networks to learn representations for the remaining variables. we
further define a general objective for semi-supervised learning in this model class, which can be
approximated using an importance sampling procedure. we evaluate our framework’s ability
to learn disentangled representations, both by qualitative exploration of its generative capacity,
and quantitative evaluation of its discriminative ability on a variety of models and datasets.
| 2 |
abstract
we study divided power structures on finitely generated k-algebras, where k is a field of positive characteristic p. as an application we show examples of 0-dimensional gorenstein k-schemes that do not lift
to a fixed noetherian local ring of non-equal characteristic. we also show that frobenius neighbourhoods
of a singular point of a general hypersurface of large dimension have no liftings to mildly ramified rings
of non-equal characteristic.
| 0 |
abstract
convolutional neural networks (cnns) are being applied to an increasing number of problems and fields due to their superior performance in classification and regression tasks. since two of the key
operations that cnns implement are convolution and pooling, this
type of networks is implicitly designed to act on data described by
regular structures such as images. motivated by the recent interest
in processing signals defined in irregular domains, we advocate a
cnn architecture that operates on signals supported on graphs. the
proposed design replaces the classical convolution not with a node-invariant graph filter (gf), which is the natural generalization of convolution to graph domains, but with a node-varying gf. this filter
extracts different local features without increasing the output dimension of each layer and, as a result, bypasses the need for a pooling
stage while involving only local operations. a second contribution
is to replace the node-varying gf with a hybrid node-varying gf,
which is a new type of gf introduced in this paper. while the alternative architecture can still be run locally without requiring a pooling
stage, the number of trainable parameters is smaller and can be rendered independent of the data dimension. tests are run on a synthetic
source localization problem and on the 20news dataset.
index terms— convolutional neural networks, network data,
graph signal processing, node-varying graph filters.
1. introduction
convolutional neural networks (cnns) have shown remarkable performance in a wide array of inference and reconstruction tasks [1],
in fields as diverse as pattern recognition, computer vision and
medicine [2–4]. the objective of cnns is to find a computationally
feasible architecture capable of reproducing the behavior of a certain unknown function. typically, cnns consist of a succession of
layers, each of which performs three simple operations – usually on
the output of the previous layer – and feed the result into the next
layer. these three operations are: 1) convolution, 2) application
of a nonlinearity, and 3) pooling or downsampling. because the
classical convolution and downsampling operations are defined for
regular (grid-based) domains, cnns have been applied to act on
data modeled by such a regular structure, like time or images.
however, an accurate description of modern datasets such as
those in social networks or genetics [5, 6] calls for more general
irregular structures. a framework that has been gaining traction to
tackle these problems is that of graph signal processing (gsp) [7–9].
gsp postulates that data can be modeled as a collection of values associated with the nodes of a graph, whose edges describe pairwise
relationships between the data. by exploiting the interplay between
the data and the graph, traditional signal processing concepts such
| 9 |
abstract
| 8 |
abstract
hash tables are ubiquitous in computer science for efficient access
to large datasets. however, there is always a need for approaches that
offer compact memory utilisation without substantial degradation of
lookup performance. cuckoo hashing is an efficient technique for creating hash tables with high space utilisation and offers a guaranteed
constant access time. we are given n locations and m items. each
item has to be placed in one of the k ≥ 2 locations chosen by k random
hash functions. by allowing more than one choice for a single item,
cuckoo hashing resembles multiple choice allocations schemes. in addition it supports dynamically changing the location of an item among
its possible locations. we propose and analyse an insertion algorithm
for cuckoo hashing that runs in linear time with high probability and
in expectation. previous work on total allocation time has analysed
breadth first search, and it was shown to be linear only in expectation.
our algorithm finds an assignment (with probability 1) whenever it exists. in contrast, the other known insertion method, known as random
walk insertion, may run indefinitely even for a solvable instance. we
also present experimental results comparing the performance of our
algorithm with the random walk method, also for the case when each
location can hold more than one item.
as a corollary we obtain a linear time algorithm (with high probability and in expectation) for finding perfect matchings in a special
class of sparse random bipartite graphs. we support this by performing
experiments on a real world large dataset for finding maximum matchings in general large bipartite graphs. we report an order of magnitude
improvement in the running time as compared to the hopcroft-karp
matching algorithm.
| 8 |
abstract classical
theories. in particular, we review theories which did not have any algorithmic content in their general natural framework, such as galois theory,
the dedekind rings, the finitely generated projective modules or the krull
dimension.
constructive algebra is actually an old discipline, developed among others
by gauss and kronecker. we are in line with the modern “bible” on
the subject, which is the book by ray mines, fred richman and wim
ruitenburg, a course in constructive algebra, published in 1988. we will
cite it in abbreviated form [mrr].
this work corresponds to an msc graduate level, at least up to chapter xiv,
but only requires as prerequisites the basic notions concerning group theory,
linear algebra over fields, determinants, modules over commutative rings,
as well as the definition of quotient and localized rings. a familiarity with
polynomial rings, the arithmetic properties of z and euclidean rings is also
desirable.
finally, note that we consider the exercises and problems (a little over 320
in total) as an essential part of the book.
we will try to publish the maximum amount of missing solutions, as well
as additional exercises on the web page of one of the authors:
http://hlombardi.free.fr/publis/livresbrochures.html
| 0 |
abstract—automated decision making systems are increasingly
being used in real-world applications. in these systems for the
most part, the decision rules are derived by minimizing the
training error on the available historical data. therefore, if
there is a bias related to a sensitive attribute such as gender,
race, religion, etc. in the data, say, due to cultural/historical discriminatory practices against a certain demographic, the system
could perpetuate discrimination in its decisions by incorporating the said
bias into its decision rule. we present an information-theoretic
framework for designing fair predictors from data, which aim to
prevent discrimination against a specified sensitive attribute in a
supervised learning setting. we use equalized odds as the criterion
for discrimination, which demands that the prediction should be
independent of the protected attribute conditioned on the actual
label. to ensure fairness and generalization simultaneously, we
compress the data to an auxiliary variable, which is used for the
prediction task. this auxiliary variable is chosen such that it is
decontaminated from the discriminatory attribute in the sense
of equalized odds. the final predictor is obtained by applying a
bayesian decision rule to the auxiliary variable.
index terms—fairness, equalized odds, supervised learning.
| 7 |
abstract
this paper investigates the fundamental limits for detecting a high-dimensional sparse matrix
contaminated by white gaussian noise from both the statistical and computational perspectives.
we consider p×p matrices whose rows and columns are individually k-sparse. we provide a tight
characterization of the statistical and computational limits for sparse matrix detection, which
precisely describe when achieving optimal detection is easy, hard, or impossible, respectively.
although the sparse matrices considered in this paper have no apparent submatrix structure and
the corresponding estimation problem has no computational issue at all, the detection problem
has a surprising computational barrier when the sparsity level k exceeds the cube root of the
matrix size p: attaining the optimal detection boundary is computationally at least as hard as
solving the planted clique problem.
the same statistical and computational limits also hold in the sparse covariance matrix
model, where each variable is correlated with at most k others. a key step in the construction
of the statistically optimal test is a structural property for sparse matrices, which can be of
independent interest.
| 7 |
abstract
person re-identification (reid) is an important task in
computer vision. recently, deep learning with a metric
learning loss has become a common framework for reid. in
this paper, we propose a new metric learning loss with hard
sample mining called margin sample mining loss (msml)
which can achieve better accuracy compared with other
metric learning losses, such as triplet loss. in experiments,
our proposed method outperforms most of the state-of-the-art algorithms on market1501, mars, cuhk03 and
cuhk-sysu.
| 1 |
abstract
this paper introduces a novel activity dataset which exhibits real-life and diverse scenarios of complex, temporally extended human activities and actions. the dataset presents a set of videos of actors performing everyday activities
in a natural and unscripted manner. the dataset was recorded using a static kinect 2 sensor which is commonly
used on many robotic platforms. the dataset comprises rgb-d images, point cloud data, automatically generated
skeleton tracks in addition to crowdsourced annotations. furthermore, we also describe the methodology used to
acquire annotations through crowdsourcing. finally some activity recognition benchmarks are presented using current
state-of-the-art techniques. we believe that this dataset is particularly suitable as a testbed for activity recognition
research but it can also be applicable for other common tasks in robotics/computer vision research such as object
detection and human skeleton tracking.
keywords
activity dataset, crowdsourcing
| 1 |
abstract
the past few years have seen a surge of interest in the field of probabilistic logic learning
and statistical relational learning. in this endeavor, many probabilistic logics have been
developed. problog is a recent probabilistic extension of prolog motivated by the mining
of large biological networks. in problog, facts can be labeled with probabilities. these
facts are treated as mutually independent random variables that indicate whether these
facts belong to a randomly sampled program. different kinds of queries can be posed
to problog programs. we introduce algorithms that allow the efficient execution of these
queries, discuss their implementation on top of the yap-prolog system, and evaluate their
performance in the context of large networks of biological entities.
to appear in theory and practice of logic programming (tplp)
| 6 |
abstract: we consider a parameter estimation problem for one-dimensional stochastic heat equations, when data is sampled discretely in time or in the spatial component. we establish some
general results on derivation of consistent and asymptotically normal estimators based
on computation of the p-variations of stochastic processes and their smooth perturbations. we apply these results to the considered spdes, by using some convenient
representations of the solutions. for some equations such results were readily available,
while for other classes of spdes we derived the needed representations along with their
statistical asymptotical properties. we prove that the real valued parameter next to
the laplacian, and the constant parameter in front of the noise (the volatility) can
be consistently estimated by observing the solution at a fixed time and on a discrete
spatial grid, or at a fixed space point and at discrete time instances of a finite interval,
assuming that the mesh-size goes to zero.
keywords: p-variation, statistics for spdes, discrete sampling, stochastic heat equation, inverse
problems for spdes, malliavin calculus.
msc2010: 60h15, 35q30, 65l09
| 10 |
abstract
interference arises when an individual’s potential outcome depends on the individual treatment level, but also on the treatment level of others. a common
assumption in the causal inference literature in the presence of interference is partial interference, implying that the population can be partitioned in clusters of
individuals whose potential outcomes only depend on the treatment of units within
the same cluster. previous literature has defined average potential outcomes under
counterfactual scenarios where treatments are randomly allocated to units within
a cluster. however, within clusters there may be units that are more or less likely
to receive treatment based on covariates or neighbors’ treatment. we define estimands that describe average potential outcomes for realistic counterfactual treatment allocation programs taking into consideration the units’ covariates, as well
as dependence between units’ treatment assignment. we discuss these estimands,
propose unbiased estimators and derive asymptotic results as the number of clusters
grows. finally, we estimate effects in a comparative effectiveness study of power
plant emission reduction technologies on ambient ozone pollution.
| 10 |
abstract
i introduce and analyse an anytime version of the optimally confident ucb
(ocucb) algorithm designed for minimising the cumulative regret in finite-armed stochastic bandits with subgaussian noise. the new algorithm is simple,
intuitive (in hindsight) and comes with the strongest finite-time regret guarantees
for a horizon-free algorithm so far. i also show a finite-time lower bound that
nearly matches the upper bound.
| 10 |