Ribosome flow model with different site sizes ; We introduce and analyze two general dynamical models for unidirectional movement of particles along a circular chain and an open chain of sites. The models include a soft version of the simple exclusion principle, that is, as the density in a site increases the effective entry rate into this site decreases. This allows us to model and study the evolution of traffic jams of particles along the chain. A unique feature of these two new models is that each site along the chain can have a different size. Although the models are nonlinear, they are amenable to rigorous asymptotic analysis. In particular, we show that the dynamics always converges to a steady state, and that the steady-state densities along the chain and the steady-state output flow rate from the chain can be derived from the spectral properties of a suitable matrix, thus eliminating the need to numerically simulate the dynamics until convergence. This spectral representation also allows for powerful sensitivity analysis, i.e. understanding how a change in one of the parameters in the models affects the steady state. We show that the site sizes and the transition rates from site to site play different roles in the dynamics, and that for the purpose of maximizing the steady-state output or production rate the site sizes are more important than the transition rates. We also show that the problem of finding parameter values that maximize the production rate is tractable. We believe that the models introduced here can be applied to study various natural and artificial processes including ribosome flow during mRNA translation, the movement of molecular motors along filaments of the cytoskeleton, pedestrian and vehicular traffic, evacuation dynamics, and more.
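A minimal numerical sketch of the soft-exclusion idea with per-site sizes (illustrative only; the rate equations below are an assumed simplified form, not necessarily the paper's exact model): the occupancy x[i] of site i with size q[i] fills at a rate scaled by the remaining capacity 1 - x[i]/q[i], and the open chain is integrated until it settles to a steady state.

    import numpy as np

    # Illustrative soft-exclusion flow model with per-site sizes (assumed form, not the
    # paper's exact equations). lam has n+1 rates: entry, site-to-site, and exit.
    def simulate(lam, q, T=200.0, dt=1e-3):
        n = len(q)
        x = np.zeros(n)
        flow_out = 0.0
        for _ in range(int(T / dt)):
            rho = x / q                                   # density in each site
            flow_in = lam[0] * (1.0 - rho[0])             # soft exclusion at the entry
            flows = lam[1:n] * x[:-1] * (1.0 - rho[1:])   # transitions site i -> i+1
            flow_out = lam[n] * x[-1]                     # output flow from the last site
            dx = np.empty(n)
            dx[0] = flow_in - flows[0]
            dx[1:-1] = flows[:-1] - flows[1:]
            dx[-1] = flows[-1] - flow_out
            x += dt * dx
        return x / q, flow_out                            # steady-state densities and output rate

    dens, rate = simulate(lam=np.array([1.0, 0.8, 1.2, 0.9]), q=np.array([1.0, 2.0, 1.0]))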
From individual-based mechanical models of multicellular systems to free-boundary problems ; In this paper we present an individual-based mechanical model that describes the dynamics of two contiguous cell populations with different proliferative and mechanical characteristics. An off-lattice modelling approach is considered whereby (i) every cell is identified by the position of its centre; (ii) mechanical interactions between cells are described via generic nonlinear force laws; and (iii) cell proliferation is contact inhibited. We formally show that the continuum counterpart of this discrete model is given by a free-boundary problem for the cell densities. The results of the derivation demonstrate how the parameters of continuum mechanical models of multicellular systems can be related to biophysical cell properties. We prove an existence result for the free-boundary problem and construct travelling-wave solutions. Numerical simulations are performed in the case where the cellular interaction forces are described by the celebrated Johnson-Kendall-Roberts model of elastic contact, which has been previously used to model cell-cell interactions. The results obtained indicate excellent agreement between the simulation results for the individual-based model, the numerical solutions of the corresponding free-boundary problem and the travelling-wave analysis.
Data-Space Inversion with Ensemble Smoother ; Reservoir engineers use large-scale numerical models to predict the production performance in oil and gas fields. However, these models are constructed based on scarce and often inaccurate data, making their predictions highly uncertain. On the other hand, measurements of pressure and flow rates are constantly collected during the operation of the field. The assimilation of these data into the reservoir models (history matching) helps to mitigate uncertainty and improve their predictive capacity. History matching is a nonlinear inverse problem, which is typically handled using optimization and Monte Carlo methods. In practice, however, generating a set of properly history-matched models that preserve the geological realism is very challenging, especially in cases with complicated prior description, such as models with fractures and complex facies distributions. Recently, a new data-space inversion (DSI) approach was introduced in the literature as an alternative to the model-space inversion used in history matching. The essential idea is to update directly the predictions from a prior ensemble of models to account for the observed production history without updating the corresponding models. The present paper introduces a DSI implementation based on the use of an iterative ensemble smoother and demonstrates with examples that the new implementation is computationally faster and more robust than the earlier method based on principal component analysis. The new DSI is also applied to estimate the production forecast in a real field with long production history and a large number of wells. For this field problem, the new DSI obtained forecasts comparable with a more traditional ensemble-based history matching.
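A hedged sketch of a single, non-iterative data-space update of the kind DSI builds on (the paper uses an iterative ensemble smoother; the names and the Gaussian observation-error treatment here are assumptions for illustration):

    import numpy as np

    # Update forecast quantities directly from a prior ensemble, without updating the models.
    # Columns of D_hist / D_fcst hold one prior model's simulated history / forecast.
    def dsi_update(D_hist, D_fcst, d_obs, obs_err_var):
        Ne = D_hist.shape[1]
        A = D_hist - D_hist.mean(axis=1, keepdims=True)
        B = D_fcst - D_fcst.mean(axis=1, keepdims=True)
        C_dd = A @ A.T / (Ne - 1)                      # covariance of simulated history data
        C_fd = B @ A.T / (Ne - 1)                      # forecast/history cross-covariance
        K = C_fd @ np.linalg.inv(C_dd + np.diag(obs_err_var))
        pert_obs = d_obs[:, None] + np.sqrt(obs_err_var)[:, None] * np.random.randn(len(d_obs), Ne)
        return D_fcst + K @ (pert_obs - D_hist)        # posterior forecast ensemble

    rng = np.random.default_rng(0)
    D_hist = rng.normal(size=(50, 200))                         # 50 history points, 200 prior models
    D_fcst = 0.5 * D_hist[:20] + rng.normal(size=(20, 200))     # 20 forecast points
    post_fcst = dsi_update(D_hist, D_fcst, d_obs=D_hist[:, 0], obs_err_var=np.full(50, 0.1))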
BlackMarks: Blackbox Multibit Watermarking for Deep Neural Networks ; Deep Neural Networks (DNNs) have created a paradigm shift in our ability to comprehend raw data in various important fields ranging from computer vision and natural language processing to intelligence warfare and healthcare. While DNNs are increasingly deployed either in a white-box setting where the model internals are publicly known, or a black-box setting where only the model outputs are known, a practical concern is protecting the models against Intellectual Property (IP) infringement. We propose BlackMarks, the first end-to-end multi-bit watermarking framework that is applicable in the black-box scenario. BlackMarks takes the pre-trained unmarked model and the owner's binary signature as inputs and outputs the corresponding marked model with a set of watermark keys. To do so, BlackMarks first designs a model-dependent encoding scheme that maps all possible classes in the task to bit '0' and bit '1' by clustering the output activations into two groups. Given the owner's watermark signature (a binary string), a set of key image and label pairs are designed using targeted adversarial attacks. The watermark (WM) is then embedded in the prediction behavior of the target DNN by fine-tuning the model with the generated WM key set. To extract the WM, the remote model is queried by the WM key images and the owner's signature is decoded from the corresponding predictions according to the designed encoding scheme. We perform a comprehensive evaluation of BlackMarks's performance on the MNIST, CIFAR-10, and ImageNet datasets and corroborate its effectiveness and robustness. BlackMarks preserves the functionality of the original DNN and incurs negligible WM embedding runtime overhead as low as 2.054%.
Default Bayesian Model Selection of Constrained Multivariate Normal Linear Models ; The multivariate normal linear model is one of the most widely employed models for statistical inference in applied research. Special cases include multivariate t-testing, MANCOVA, multivariate multiple regression, and repeated measures analysis. Statistical procedures for model selection, where the models may have equality and order constraints on the model parameters of interest, are limited however. This paper presents a default Bayes factor for this model selection problem. The default Bayes factor is based on generalized fractional Bayes methodology, where different fractions are used for different observations and where the default prior is centered on the boundary of the constrained space under investigation. First, the method is fully automatic and therefore can be applied when prior information is weak or completely unavailable. Second, using group-specific fractions, the same amount of information is used from each group, resulting in a minimally informative default prior having a matrix Cauchy distribution, which yields a consistent default Bayes factor. Third, numerical computation can be done using parallelization, which makes it computationally cheap. Fourth, the evidence can be updated in a relatively simple manner when observing new data. Fifth, the selection criterion can be applied relatively straightforwardly in the presence of missing data that are missing at random. Applications for the social and behavioral sciences are used for illustration.
Slow Mixing of Glauber Dynamics for the Six-Vertex Model in the Ordered Phases ; The six-vertex model in statistical physics is a weighted generalization of the ice model on $\mathbb{Z}^2$ (i.e., Eulerian orientations) and the zero-temperature three-state Potts model (i.e., proper three-colorings). The phase diagram of the model depicts its physical properties and suggests where local Markov chains will be efficient. In this paper, we analyze the mixing time of Glauber dynamics for the six-vertex model in the ordered phases. Specifically, we show that for all Boltzmann weights in the ferroelectric phase, there exist boundary conditions such that local Markov chains require exponential time to converge to equilibrium. This is the first rigorous result bounding the mixing time of Glauber dynamics in the ferroelectric phase. Our analysis demonstrates a fundamental connection between correlated random walks and the dynamics of intersecting lattice path models (or routings). We analyze the Glauber dynamics for the six-vertex model with free boundary conditions in the antiferroelectric phase and significantly extend the region for which local Markov chains are known to be slow mixing. This result relies on a Peierls argument and novel properties of weighted non-backtracking walks.
Discrete Optimization Methods for Group Model Selection in Compressed Sensing ; In this article we study the problem of signal recovery for group models. More precisely, for a given set of groups, each containing a small subset of indices, and for given linear sketches of the true signal vector which is known to be group-sparse in the sense that its support is contained in the union of a small number of these groups, we study algorithms which successfully recover the true signal just by the knowledge of its linear sketches. We derive model projection complexity results and algorithms for more general group models than the state of the art. We consider two versions of the classical Iterative Hard Thresholding (IHT) algorithm. The classical version iteratively calculates the exact projection of a vector onto the group model, while the approximate version (AM-IHT) uses a head- and a tail-approximation iteratively. We apply both variants to group models and analyse the two cases where the sensing matrix is a Gaussian matrix and a model expander matrix. To solve the exact projection problem on the group model, which is known to be equivalent to the maximum weight coverage problem, we use discrete optimization methods based on dynamic programming and Benders' decomposition. The head- and tail-approximations are derived by a classical greedy method and LP-rounding, respectively.
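A compact sketch of the model-based IHT iteration described above; the exact group-model projection (solved in the paper via dynamic programming or Benders' decomposition) is replaced here by a placeholder that keeps the best single group, so this is illustrative only:

    import numpy as np

    def project_onto_model(x, groups):
        # Placeholder projection: keep the coordinates of the single best-fitting group.
        best = max(groups, key=lambda g: np.sum(x[list(g)] ** 2))
        z = np.zeros_like(x)
        z[list(best)] = x[list(best)]
        return z

    def iht(y, A, groups, n_iter=200, step=0.5):
        # Classical model-based Iterative Hard Thresholding: gradient step, then projection.
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = project_onto_model(x + step * A.T @ (y - A @ x), groups)
        return x

    rng = np.random.default_rng(1)
    groups = [(0, 1, 2), (2, 3, 4), (5, 6, 7)]
    x_true = np.zeros(8); x_true[[0, 1, 2]] = [1.0, -2.0, 0.5]
    A = rng.normal(size=(6, 8)) / np.sqrt(6)
    x_hat = iht(A @ x_true, A, groups, step=1.0 / np.linalg.norm(A, 2) ** 2)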
Towards Computational Models and Applications of Insect Visual Systems for Motion Perception: A Review ; Motion perception is a critical capability determining a variety of aspects of insects' life, including avoiding predators, foraging and so forth. A good number of motion detectors have been identified in the insects' visual pathways. Computational modelling of these motion detectors has not only been providing effective solutions to artificial intelligence, but also benefiting the understanding of complicated biological visual systems. These biological mechanisms, shaped through millions of years of evolutionary development, form solid modules for constructing dynamic vision systems for future intelligent machines. This article reviews the computational motion perception models originating from biological research on insects' visual systems in the literature. These motion perception models or neural networks comprise the looming-sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation-sensitive neural systems of direction selective neurons (DSNs) in fruit flies, bees and locusts, as well as the small target motion detectors (STMDs) in dragonflies and hover flies. We also review the applications of these models to robots and vehicles. Through these modelling studies, we summarise the methodologies that generate different direction and size selectivity in motion perception. Finally, we discuss multiple systems integration and hardware realisation of these bio-inspired motion perception models.
The Ultraviolet and Infrared Behavior of an Abelian Proca Model From the Viewpoint of a One-Parameter Extension of the Covariant Heisenberg Algebra ; Recently a one-parameter extension of the covariant Heisenberg algebra with the extension parameter l (l is a non-negative constant parameter which has a dimension of momentum^{-1}) in a (D+1)-dimensional Minkowski spacetime has been presented (G. P. de Brito, P. I. C. Caneda, Y. M. P. Gomes, J. T. Guaitolini Junior and V. Nikoofard, Effective models of quantum gravity induced by Planck scale modifications in the covariant quantum algebra, Adv. High Energy Phys. 2017 (2017) 4768341). The Abelian Proca model is reformulated from the viewpoint of the above one-parameter extension of the covariant Heisenberg algebra. It is shown that the free space solutions of the above modified Proca model satisfy the modified dispersion relation $\frac{\textbf{p}^2}{\left(1+\frac{\Lambda^2}{2\hbar^2}\textbf{p}^2\right)^2}=m^2c^2$, where $\Lambda=\hbar l$ is the characteristic length scale in our model. This modified dispersion relation describes two massive vector particles with the effective masses $\mathcal{M}_{\pm}(\Lambda)=\frac{2m}{1\mp\sqrt{1-2\left(\frac{mc\Lambda}{\hbar}\right)^2}}$. Numerical estimations show that the maximum value of $\Lambda$ in a four-dimensional spacetime is near to the electroweak length scale, i.e., $\Lambda_{max}\sim l_{electroweak}\sim 10^{-18}\ \mathrm{m}$. We show that in the infrared (large-distance) domain the modified Proca model behaves like an Abelian massive Lee-Wick model which has been presented by Accioly and his coworkers (A. Accioly, J. Helayel-Neto, G. Correia, G. Brito, J. de Almeida and W. Herdy, Interparticle potential energy for D-dimensional electromagnetic models from the corresponding scalar ones, Phys. Rev. D 93 (2016) 105042).
Bayesian cross validation for gravitational-wave searches in pulsar-timing-array data ; Gravitational-wave data analysis demands sophisticated statistical noise models in a bid to extract highly obscured signals from data. In Bayesian model comparison, we choose among a landscape of models by comparing their marginal likelihoods. However, this computation is numerically fraught and can be sensitive to arbitrary choices in the specification of parameter priors. In Bayesian cross validation, we characterize the fit and predictive power of a model by computing the Bayesian posterior of its parameters in a training dataset, and then use that posterior to compute the averaged likelihood of a different testing dataset. The resulting cross-validation scores are straightforward to compute; they are insensitive to prior tuning; and they penalize unnecessarily complex models that overfit the training data at the expense of predictive performance. In this article, we discuss cross validation in the context of pulsar-timing-array data analysis, and we exemplify its application to simulated pulsar data, where it successfully selects the correct spectral index of a stochastic gravitational-wave background, and to a pulsar dataset from the NANOGrav 11-year release, where it convincingly favors a model that represents a transient feature in the interstellar medium. We argue that cross validation offers a promising alternative to Bayesian model comparison, and we discuss its use for gravitational-wave detection, by selecting or refuting models that include a gravitational-wave component.
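A small sketch of the cross-validation score described above: draw posterior samples from the training data (with any sampler), then average the testing-data likelihood over those samples. The Gaussian likelihood below is only a stand-in for the pulsar-timing noise model:

    import numpy as np

    def log_cv_score(posterior_samples, loglike_test):
        # log of the testing-data likelihood averaged over training-posterior samples,
        # computed stably via the log-sum-exp trick.
        logls = np.array([loglike_test(theta) for theta in posterior_samples])
        m = logls.max()
        return m + np.log(np.mean(np.exp(logls - m)))

    def loglike_gaussian(data, sigma):
        return lambda mu: -0.5 * np.sum((data - mu) ** 2 / sigma ** 2 + np.log(2 * np.pi * sigma ** 2))

    rng = np.random.default_rng(0)
    train, test = rng.normal(1.0, 1.0, 200), rng.normal(1.0, 1.0, 200)
    post = rng.normal(train.mean(), 1.0 / np.sqrt(len(train)), size=5000)   # crude training posterior
    score = log_cv_score(post, loglike_gaussian(test, 1.0))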
Duality in a hyperbolic interaction model integrable even in a strong confinement: Multisoliton solutions and field theory ; Models that remain integrable even in confining potentials are extremely rare and almost nonexistent. Here, we consider a one-dimensional hyperbolic interaction model, which we call the Hyperbolic Calogero (HC) model. This is classically integrable even in confining potentials which have box-like shapes. We present a first-order formulation of the HC model in an external confining potential. Using the rich property of duality, we find multisoliton solutions of this confined integrable model. The absence of solitons corresponds to the equilibrium solution of the model. We demonstrate the dynamics of multisoliton solutions via brute-force numerical simulations, and study the physics of soliton collisions and quenches. We examine the motion of the dual complex variables and find an analytic expression for the time period in a certain limit. We give the field theory description of this model and find the background solution (absence of solitons) analytically in the large-N limit. Analytical expressions for soliton solutions are obtained in the absence of an external confining potential. Our work is of importance to understand the general features of trapped interacting particles that remain classically integrable and can be of relevance to the collective behaviour of trapped cold atomic gases as well.
A Scalable Handwritten Text Recognition System ; Many studies on Offline Handwritten Text Recognition (HTR) systems have focused on building state-of-the-art models for line recognition on small corpora. However, adding HTR capability to a large-scale multilingual OCR system poses new challenges. This paper addresses three problems in building such systems: data, efficiency, and integration. Firstly, one of the biggest challenges is obtaining sufficient amounts of high-quality training data. We address the problem by using online handwriting data collected for a large-scale production online handwriting recognition system. We describe our image data generation pipeline and study how online data can be used to build HTR models. We show that the data improve the models significantly under the condition where only a small number of real images is available, which is usually the case for HTR models. It enables us to support a new script at substantially lower cost. Secondly, we propose a line recognition model based on neural networks without recurrent connections. The model achieves a comparable accuracy with LSTM-based models while allowing for better parallelism in training and inference. Finally, we present a simple way to integrate HTR models into an OCR system. These constitute a solution to bring HTR capability into a large-scale OCR system.
Stochastic Online Metric Matching ; We study the minimum-cost metric perfect matching problem under online i.i.d. arrivals. We are given a fixed metric with a server at each of the points, and then requests arrive online, each drawn independently from a known probability distribution over the points. Each request has to be matched to a free server, with cost equal to the distance. The goal is to minimize the expected total cost of the matching. Such stochastic arrival models have been widely studied for the maximization variants of the online matching problem; however, the only known result for the minimization problem is a tight $O(\log n)$-competitiveness for the random-order arrival model. This is in contrast with the adversarial model, where an optimal competitive ratio of $O(\log n)$ has long been conjectured and remains a tantalizing open question. In this paper, we show improved results in the i.i.d. arrival model. We show how the i.i.d. model can be used to give substantially better algorithms: our main result is an $O((\log\log\log n)^2)$-competitive algorithm in this model. Along the way we give a 9-competitive algorithm for the line and tree metrics. Both results imply a strict separation between the i.i.d. model and the adversarial and random-order models, both for general metrics and these much-studied metrics.
Continuous-Time Birth-Death MCMC for Bayesian Regression Tree Models ; Decision trees are flexible models that are well suited for many statistical regression problems. In a Bayesian framework for regression trees, Markov Chain Monte Carlo (MCMC) search algorithms are required to generate samples of tree models according to their posterior probabilities. The critical component of such an MCMC algorithm is to construct good Metropolis-Hastings steps for updating the tree topology. However, such algorithms frequently suffer from local mode stickiness and poor mixing. As a result, the algorithms are slow to converge. Hitherto, authors have primarily used discrete-time birth-death mechanisms for Bayesian sums of regression tree models to explore the model space. These algorithms are efficient only if the acceptance rate is high, which is not always the case. Here we overcome this issue by developing a new search algorithm which is based on a continuous-time birth-death Markov process. This search algorithm explores the model space by jumping between parameter spaces corresponding to different tree structures. In the proposed algorithm, the moves between models are always accepted, which can dramatically improve the convergence and mixing properties of the MCMC algorithm. We provide theoretical support for the algorithm for Bayesian regression tree models and demonstrate its performance.
Efficient single input-output layer spiking neural classifier with time-varying weight model ; This paper presents a supervised learning algorithm, namely, the Synaptic Efficacy Function with Meta-neuron based learning algorithm (SEFM) for a spiking neural network with a time-varying weight model. For a given pattern, SEFM uses the learning algorithm derived from the meta-neuron based learning algorithm to determine the change in weights corresponding to each presynaptic spike time. The changes in weights modulate the amplitude of a Gaussian function centred at the same presynaptic spike times. The sum of amplitude-modulated Gaussian functions represents the synaptic efficacy function, or time-varying weight model. The performance of SEFM is evaluated against state-of-the-art spiking neural network learning algorithms on 10 benchmark datasets from the UCI machine learning repository. Performance studies show the superior generalization ability of SEFM. An ablation study on the time-varying weight model is conducted using the JAFFE dataset. The results of the ablation study indicate that using a time-varying weight model instead of a single weight model improves the classification accuracy by 14%. Thus, it can be inferred that a single input-output layer spiking neural network with a time-varying weight model is computationally more efficient than a multi-layer spiking neural network with a long-term or short-term weight model.
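A short sketch of the time-varying weight described above: a sum of Gaussians centred at the presynaptic spike times, each scaled by a learned amplitude change (the amplitudes would come from the meta-neuron rule; the values and width below are illustrative):

    import numpy as np

    def efficacy(t, spike_times, delta_w, sigma=2.0):
        # Synaptic efficacy function: sum of amplitude-modulated Gaussians centred at spike times.
        t = np.atleast_1d(t)[:, None]
        return np.sum(delta_w * np.exp(-((t - spike_times) ** 2) / (2 * sigma ** 2)), axis=1)

    spike_times = np.array([5.0, 12.0, 20.0])     # presynaptic spike times (ms)
    delta_w = np.array([0.4, -0.1, 0.7])          # learned amplitude changes (illustrative values)
    w_t = efficacy(np.linspace(0.0, 30.0, 301), spike_times, delta_w)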
An AEC framework for fields with commuting automorphisms ; In this paper, we introduce an AEC framework for studying fields with commuting automorphisms. Fields with commuting automorphisms are closely related to difference fields. Some authors define a difference ring (or field) as a ring (or field) together with several commuting endomorphisms, while others only study one endomorphism. Z. Chatzidakis and E. Hrushovski have studied in depth the model theory of ACFA, the model companion of difference fields with one automorphism. Our fields with commuting automorphisms generalize this setting: we have several automorphisms and they are required to commute. Hrushovski has proved that in the case of fields with two or more commuting automorphisms, the existentially closed models do not necessarily form a first-order model class. In the present paper, we introduce FCA-classes, an AEC framework for studying the existentially closed models of the theory of fields with commuting automorphisms. We prove that an FCA-class has AP and JEP and thus a monster model, that Galois types coincide with existential types in existentially closed models, that the class is homogeneous, and that there is a version of the type amalgamation theorem that allows one to combine three types under certain conditions. Finally, we use these results to show that our monster model is a simple homogeneous structure in the sense of S. Buechler and O. Lessmann (a nonelementary analogue of the classification-theoretic notion of a simple first-order theory).
A Model of Random Industrial SAT ; One of the most studied models of SAT is random SAT. In this model, instances are composed of clauses chosen uniformly at random and independently of each other. This model may be unsatisfactory in that it fails to describe various features of SAT instances arising in real-world applications. Various modifications have been suggested to define models of industrial SAT. Here, we focus mainly on the aspect of community structure. Namely, here the set of variables consists of a number of disjoint communities, and clauses tend to consist of variables from the same community. Thus, we suggest a model of random industrial SAT, in which the central generalization with respect to random SAT is the additional community structure. There has been a lot of work on the satisfiability threshold of random k-SAT, starting with the calculation of the threshold of 2-SAT, up to the recent result that the threshold exists for sufficiently large k. In this paper, we endeavor to study the satisfiability threshold for the proposed model of random industrial SAT. Our main result is that the threshold in this model tends to be smaller than its counterpart for random SAT. Moreover, under some conditions, this threshold even vanishes.
Understanding the transition from paroxysmal to persistent atrial fibrillation from micro-anatomical reentry in a simple model ; Atrial fibrillation (AF) is the most common cardiac arrhythmia, characterised by the chaotic motion of electrical wavefronts in the atria. In clinical practice, AF is classified under two primary categories: paroxysmal AF, short intermittent episodes separated by periods of normal electrical activity, and persistent AF, longer uninterrupted episodes of chaotic electrical activity. However, the precise reasons why AF in a given patient is paroxysmal or persistent are poorly understood. Recently, we have introduced the percolation-based Christensen-Manani-Peters (CMP) model of AF which naturally exhibits both paroxysmal and persistent AF, but precisely how these differences emerge in the model is unclear. In this paper, we dissect the CMP model to identify the cause of these different AF classifications. Starting from a mean-field model where we describe AF as a simple birth-death process, we add layers of complexity to the model and show that persistent AF arises from reentrant circuits which exhibit an asymmetry in their probability of activation relative to deactivation. As a result, different simulations generated at identical model parameters can exhibit fibrillatory episodes spanning several orders of magnitude, from a few seconds to months. These findings demonstrate that diverse, complex fibrillatory dynamics can emerge from very simple dynamics in models of AF.
Progressive Transfer Learning ; Model fine-tuning is a widely used transfer learning approach in person re-identification (ReID) applications, which fine-tunes a pre-trained feature extraction model for the target scenario instead of training a model from scratch. It is challenging due to the significant variations inside the target scenario, e.g., different camera viewpoints, illumination changes, and occlusion. These variations result in a gap between the distribution of each mini-batch and the whole dataset's distribution when using mini-batch training. In this paper, we study model fine-tuning from the perspective of the aggregation and utilization of the global information of the dataset when using mini-batch training. Specifically, we introduce a novel network structure called Batch-related Convolutional Cell (BConvCell), which progressively collects the global information of the dataset into a latent state and uses it to rectify the extracted features. Based on BConvCells, we further propose the Progressive Transfer Learning (PTL) method to facilitate the model fine-tuning process by jointly optimizing the BConvCells and the pre-trained ReID model. Empirical experiments show that our proposal can improve the performance of the ReID model greatly on the MSMT17, Market-1501, CUHK03 and DukeMTMC-reID datasets. Moreover, we extend our proposal to the general image classification task. The experiments on several image classification benchmark datasets demonstrate that our proposal can significantly improve the performance of baseline models. The code has been released at https://github.com/ZJULearning/PTL
Thermodynamics of scalar field models with kinetic corrections ; In the present work, we compare the thermodynamical viability of two types of non-canonical scalar field models with kinetic corrections: the square kinetic and square root kinetic corrections. In modern cosmology, the generalised second law of thermodynamics (GSLT) plays an important role in deciding the thermodynamical compliance of a model, as one cannot consider a model to be viable if it fails to respect the GSLT. Hence, for comparing thermodynamical viability, we examine the validity of the GSLT for these two models. For this purpose, by employing the unified first law (UFL), we calculate the total entropy of these two models on the apparent and event horizons. The validity of the GSLT is then examined from the autonomous systems, as the original expressions of the total entropy are very complicated. Although both models give interesting cosmological dynamics at the background level, thermodynamically we find that the square kinetic correction is more realistic than the square root kinetic correction. More precisely, the GSLT holds for the square kinetic correction throughout the evolutionary history except during the radiation epoch, where the scalar field may not represent a true description of the matter content. On the other hand, the square root kinetic model fails to satisfy the GSLT in major cosmological eras.
Align, Mask and Select: A Simple Method for Incorporating Commonsense Knowledge into Language Representation Models ; The state-of-the-art pre-trained language representation models, such as Bidirectional Encoder Representations from Transformers (BERT), rarely incorporate commonsense knowledge or other knowledge explicitly. We propose a pre-training approach for incorporating commonsense knowledge into language representation models. We construct a commonsense-related multi-choice question answering dataset for pre-training a neural language representation model. The dataset is created automatically by our proposed align, mask, and select (AMS) method. We also investigate different pre-training tasks. Experimental results demonstrate that pre-training models using the proposed approach, followed by fine-tuning, achieve significant improvements over previous state-of-the-art models on two commonsense-related benchmarks, including CommonsenseQA and the Winograd Schema Challenge. We also observe that fine-tuned models after the proposed pre-training approach maintain comparable performance on other NLP tasks, such as sentence classification and natural language inference tasks, compared to the original BERT models. These results verify that the proposed approach, while significantly improving commonsense-related NLP tasks, does not degrade the general language representation capabilities.
FFORMPP: Feature-based forecast model performance prediction ; This paper introduces a novel meta-learning algorithm for time series forecast model performance prediction. We model the forecast error as a function of time series features calculated from the historical time series with an efficient Bayesian multivariate surface regression approach. The minimum predicted forecast error is then used to identify an individual model or a combination of models to produce the final forecasts. It is well known that the performance of most meta-learning models depends on the representativeness of the reference dataset used for training. In such circumstances, we augment the reference dataset with a feature-based time series simulation approach, namely GRATIS, to generate a rich and representative time series collection. The proposed framework is tested using the M4 competition data and is compared against commonly used forecasting approaches. Our approach provides comparable performance to other model selection/combination approaches but at a lower computational cost and a higher degree of interpretability, which is important for supporting decisions. We also provide useful insights regarding which forecasting models are expected to work better for particular types of time series, the intrinsic mechanisms of the meta-learners, and how the forecasting performances are affected by various factors.
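A rough sketch of the meta-learning step: regress each candidate model's historical forecast error on time-series features, then pick the model with the smallest predicted error for a new series. A plain least-squares fit stands in for the Bayesian multivariate surface regression used in the paper:

    import numpy as np

    def fit_error_models(F_train, E_train):
        # F_train: feature vectors of reference series; E_train[:, j]: forecast error of model j.
        X = np.hstack([np.ones((len(F_train), 1)), F_train])
        coefs, *_ = np.linalg.lstsq(X, E_train, rcond=None)
        return coefs

    def select_model(coefs, f_new):
        predicted_errors = np.concatenate([[1.0], f_new]) @ coefs
        return int(np.argmin(predicted_errors))          # index of the model with lowest predicted error

    rng = np.random.default_rng(0)
    F_train = rng.normal(size=(500, 6))                  # e.g. trend strength, seasonality, entropy, ...
    E_train = np.abs(rng.normal(size=(500, 3)))          # errors of three candidate forecasting models
    best = select_model(fit_error_models(F_train, E_train), f_new=rng.normal(size=6))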
War pact model of shrinking networks ; Many real systems can be described by a set of interacting entities forming a complex network. Somewhat surprisingly, these networks have been shown to share a number of structural properties regardless of their type or origin. It is thus of vital importance to design simple and intuitive models that can explain their intrinsic structure and dynamics. These can, for instance, be used to study networks analytically or to construct networks not observed in real life. Most models proposed in the literature are of two types. A model can be either static, where edges are added between a fixed set of nodes according to some predefined rule, or evolving, where the number of nodes or edges increases over time. However, some real networks do not grow but rather shrink, meaning that the number of nodes or edges decreases over time. We here propose a simple model of shrinking networks called the war pact model. We show that networks generated in such a way exhibit common structural properties of real networks. Furthermore, compared to classical models, these resemble international trade, correlates of war, Bitcoin transaction and other networks more closely. Network shrinking may therefore represent a reasonable explanation of the evolution of some networks, and greater emphasis should be put on such models in the future.
Demand Forecasting in the Presence of Systematic Events: Cases in Capturing Sales Promotions ; Reliable demand forecasts are critical for effective supply chain management. Several endogenous and exogenous variables can influence the dynamics of demand, and hence a single statistical model that relies only on historical sales data is often insufficient to produce accurate forecasts. In practice, the forecasts generated by baseline statistical models are often judgmentally adjusted by forecasters to incorporate factors and information that are not captured in the baseline models. There are, however, systematic events whose effect can be effectively quantified and modeled to help minimize human intervention in adjusting the baseline forecasts. In this paper, we develop and test a novel regime-switching approach to quantify systematic information/events and objectively incorporate them into the baseline statistical model. Our simple yet practical and effective model can help limit forecast adjustments to focus only on the impact of less systematic events such as sudden climate change or dynamic market activities. The proposed model and approach are validated empirically using sales and promotional data from two Australian companies. Discussions focus on a thorough analysis of the forecasting and benchmarking results. Our analysis indicates that the proposed model can successfully improve forecast accuracy when compared to the current industry practice, which heavily relies on human judgment to factor in all types of information/events.
End-to-End Bias Mitigation by Modelling Biases in Corpora ; Several recent studies have shown that strong natural language understanding (NLU) models are prone to relying on unwanted dataset biases without learning the underlying task, resulting in models that fail to generalize to out-of-domain datasets and are likely to perform poorly in real-world scenarios. We propose two learning strategies to train neural models, which are more robust to such biases and transfer better to out-of-domain datasets. The biases are specified in terms of one or more bias-only models, which learn to leverage the dataset biases. During training, the bias-only models' predictions are used to adjust the loss of the base model to reduce its reliance on biases by down-weighting the biased examples and focusing the training on the hard examples. We experiment on large-scale natural language inference and fact verification benchmarks, evaluating on out-of-domain datasets that are specifically designed to assess the robustness of models against known biases in the training data. Results show that our debiasing methods greatly improve robustness in all settings and better transfer to other textual entailment datasets. Our code and data are publicly available at https://github.com/rabeehk/robust-nli.
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter ; As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models on the edge and/or under constrained computational training or inference budgets remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation model, called DistilBERT, which can then be fine-tuned with good performance on a wide range of tasks like its larger counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage knowledge distillation during the pre-training phase and show that it is possible to reduce the size of a BERT model by 40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive biases learned by larger models during pre-training, we introduce a triple loss combining language modeling, distillation and cosine-distance losses. Our smaller, faster and lighter model is cheaper to pre-train and we demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device study.
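A hedged sketch of a triple loss of the kind described (masked-LM cross-entropy, soft-target distillation, and a cosine loss aligning student and teacher hidden states); the weights and temperature are illustrative, and the student and teacher hidden sizes are assumed equal:

    import torch
    import torch.nn.functional as F

    def distil_loss(student_logits, teacher_logits, student_hidden, teacher_hidden,
                    labels, T=2.0, alpha=5.0, beta=2.0, gamma=1.0):
        # Masked-LM cross-entropy on the student predictions (ignore index for unmasked tokens).
        ce = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)),
                             labels.view(-1), ignore_index=-100)
        # Soft-target distillation loss against the teacher's temperature-scaled distribution.
        kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                      F.softmax(teacher_logits / T, dim=-1),
                      reduction="batchmean") * T * T
        # Cosine loss aligning student and teacher hidden states (assumes equal hidden size).
        sh = student_hidden.view(-1, student_hidden.size(-1))
        th = teacher_hidden.view(-1, teacher_hidden.size(-1))
        cos = F.cosine_embedding_loss(sh, th, torch.ones(sh.size(0)))
        return alpha * kd + beta * ce + gamma * cos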
Flows Over Periodic Hills of Parameterized Geometries: A Dataset for Data-Driven Turbulence Modeling From Direct Simulations ; Computational fluid dynamics models based on Reynolds-averaged Navier-Stokes equations with turbulence closures still play important roles in engineering design and analysis. However, the development of turbulence models has been stagnant for decades. With recent advances in machine learning, data-driven turbulence models have become attractive alternatives worth further exploration. However, a major obstacle in the development of data-driven turbulence models is the lack of training data. In this work, we survey currently available public turbulent flow databases and conclude that they are inadequate for developing and validating data-driven models. Rather, we need more benchmark data from systematically and continuously varied flow conditions (e.g., Reynolds number and geometry) with maximum coverage in the parameter space for this purpose. To this end, we perform direct numerical simulations of flows over periodic hills with varying slopes, resulting in a family of flows over periodic hills which ranges from incipient to mild and massive separations. We further demonstrate the use of such a dataset by training a machine learning model that predicts Reynolds stress anisotropy based on a set of mean flow features. We expect the generated dataset, along with its design methodology and the example application presented herein, will facilitate the development and comparison of future data-driven turbulence models.
Bubbles in Turbulent Flows: Data-driven, kinematic models with memory terms ; We present data-driven kinematic models for the motion of bubbles in high-Re turbulent fluid flows based on recurrent neural networks with long short-term memory enhancements. The models extend empirical relations, such as Maxey-Riley (MR) and its variants, whose applicability is limited when either the bubble size is large or the flow is very complex. The recurrent neural networks are trained on the trajectories of bubbles obtained by Direct Numerical Simulations (DNS) of the Navier-Stokes equations for a two-component incompressible flow model. Long short-term memory components exploit the time history of the flow field that the bubbles have encountered along their trajectories, and the networks are further augmented by imposing rotational invariance to their structure. We first train and validate the formulated model using DNS data for a turbulent Taylor-Green vortex. Then we examine the model's predictive capabilities and its generalization to Reynolds numbers that are different from those of the training data on benchmark problems, including a steady Hill's spherical vortex and an unsteady Gaussian vortex ring flow field. We find that the predictions of the developed model are significantly improved compared with those obtained by the MR equation. Our results indicate that data-driven models with history terms are well suited to capturing the trajectories of bubbles in turbulent flows.
Partial Separability and Functional Graphical Models for Multivariate Gaussian Processes ; The covariance structure of multivariate functional data can be highly complex, especially if the multivariate dimension is large, making extensions of statistical methods for standard multivariate data to the functional data setting challenging. For example, Gaussian graphical models have recently been extended to the setting of multivariate functional data by applying multivariate methods to the coefficients of truncated basis expansions. However, a key difficulty compared to multivariate data is that the covariance operator is compact, and thus not invertible. The methodology in this paper addresses the general problem of covariance modeling for multivariate functional data, and functional Gaussian graphical models in particular. As a first step, a new notion of separability for the covariance operator of multivariate functional data is proposed, termed partial separability, leading to a novel Karhunen-Loève-type expansion for such data. Next, the partial separability structure is shown to be particularly useful in order to provide a well-defined functional Gaussian graphical model that can be identified with a sequence of finite-dimensional graphical models, each of identical fixed dimension. This motivates a simple and efficient estimation procedure through application of the joint graphical lasso. Empirical performance of the method for graphical model estimation is assessed through simulation and analysis of functional brain connectivity during a motor task.
A preference learning framework for multiple criteria sorting with diverse additive value models and valued assignment examples ; We present a preference learning framework for multiple criteria sorting. We consider sorting procedures applying an additive value model with diverse types of marginal value functions (including linear, piecewise-linear, splined, and general monotone ones) under a unified analytical framework. Differently from the existing sorting methods that infer a preference model from crisp decision examples, where each reference alternative is assigned to a unique class, our framework allows us to consider valued assignment examples in which a reference alternative can be classified into multiple classes with respective credibility degrees. We propose an optimization model for constructing a preference model from such valued examples by maximizing the credible consistency among reference alternatives. To improve the predictive ability of the constructed model on new instances, we employ regularization techniques. Moreover, to enhance the capability of addressing large-scale datasets, we introduce a state-of-the-art algorithm that is widely used in the machine learning community to solve the proposed optimization model in a computationally efficient way. Using the constructed additive value model, we determine both crisp and valued assignments for non-reference alternatives. Moreover, we allow the Decision Maker to prioritize the importance of classes and give the method the flexibility to adjust classification performance across classes according to the specified priorities. The practical usefulness of the analytical framework is demonstrated on a real-world dataset by comparing it to several existing sorting methods.
Speech-Based Parameter Estimation of an Asymmetric Vocal Fold Oscillation Model and Its Application in Discriminating Vocal Fold Pathologies ; So far, several physical models have been proposed for the study of vocal fold oscillations during phonation. The parameters of these models, such as vocal fold elasticity, resistance, etc., are traditionally determined through the observation and measurement of the vocal fold vibrations in the larynx. Since such direct measurements tend to be the most accurate, the traditional practice has been to set the parameter values of these models based on measurements that are averaged across an ensemble of human subjects. However, the direct measurement process is hard to revise outside of clinical settings. In many cases, especially in pathological ones, the properties of the vocal folds often deviate from their generic values, sometimes asymmetrically, wherein the characteristics of the two vocal folds differ for the same individual. In such cases, it is desirable to find a more scalable way to adjust the model parameters on a case-by-case basis. In this paper, we present a novel and alternate way to determine vocal fold model parameters from the speech signal. We focus on an asymmetric model and show that for such models, differences in estimated parameters can be successfully used to discriminate between voices that are characteristic of different underlying vocal fold pathologies.
Flexible Bayesian modelling in dichotomous item response theory using mixtures of skewed item curves ; Most Item Response Theory (IRT) models for dichotomous responses are based on probit or logit link functions which assume a symmetric relationship between the probability of a correct response and the latent traits of individuals submitted to a test. This assumption restricts the use of those models to the case in which all items have a symmetric behaviour. On the other hand, asymmetric models proposed in the literature impose that all the items in a test have an asymmetric behaviour. This assumption is inappropriate for a great part of the tests which are, in general, composed of both symmetric and asymmetric items. Furthermore, a straightforward extension of the existing models in the literature would require a prior selection of the items' symmetry/asymmetry status. This paper proposes a Bayesian IRT model that accounts for symmetric and asymmetric items in a flexible though parsimonious way. That is achieved by assigning a finite mixture prior to the skewness parameter, with one of the mixture components being a point mass at zero. This allows for analyses under both model selection and model averaging approaches. Asymmetric item curves are designed through the centred skew normal distribution, which has a particularly appealing parametrisation in terms of parameter interpretation and computational efficiency. An efficient MCMC algorithm is proposed to perform Bayesian inference and its performance is investigated in some simulated examples. Finally, the proposed methodology is applied to a data set from a large-scale educational exam in Brazil.
An Analytical Lidar Sensor Model Based on Ray Path Information ; Two core competencies of a mobile robot are to build a map of the environment and to estimate its own pose on the basis of this map and incoming sensor readings. To account for the uncertainties in this process, one typically employs probabilistic state estimation approaches combined with a model of the specific sensor. Over the past years, lidar sensors have become a popular choice for mapping and localization. However, many common lidar models perform poorly in unstructured, unpredictable environments, they lack a consistent physical model for both mapping and localization, and they do not exploit all the information the sensor provides, e.g. out-of-range measurements. In this paper, we introduce a consistent physical model that can be applied to mapping as well as to localization. It naturally deals with unstructured environments and makes use of both out-of-range measurements and information about the ray path. The approach can be seen as a generalization of the well-established reflection model, but in addition to counting ray reflections and traversals in a specific map cell, it considers the distances that all rays travel inside this cell. We prove that the resulting map maximizes the data likelihood and demonstrate that our model outperforms state-of-the-art sensor models in extensive real-world experiments.
Comparison of Deep Reinforcement Learning and Model Predictive Control for Adaptive Cruise Control ; This study compares Deep Reinforcement Learning (DRL) and Model Predictive Control (MPC) for Adaptive Cruise Control (ACC) design in car-following scenarios. A first-order system is used as the Control-Oriented Model (COM) to approximate the acceleration command dynamics of a vehicle. Based on the equations of the control system and the multi-objective cost function, we train a DRL policy using Deep Deterministic Policy Gradient (DDPG) and solve the MPC problem via Interior-Point Optimization (IPO). Simulation results for the episode costs show that, when there are no modeling errors and the testing inputs are within the training data range, the DRL solution is equivalent to MPC with a sufficiently long prediction horizon. In particular, the DRL episode cost is only 5.8% higher than the benchmark solution provided by optimizing the entire episode via IPO. The DRL control performance degrades when the testing inputs are outside the training data range, indicating inadequate generalization. When there are modeling errors due to control delays, disturbances, and/or testing with a High-Fidelity Model (HFM) of the vehicle, the DRL-trained policy performs better with large modeling errors, while having similar performance to MPC when the modeling errors are small.
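A minimal sketch of a first-order control-oriented model of the acceleration-command dynamics, together with a simple multi-objective car-following stage cost; the time constant, headway target and weights are illustrative, not the paper's values:

    TAU, DT = 0.5, 0.1   # assumed first-order time constant and step size (s)

    def com_step(gap, v_ego, a_ego, v_lead, a_cmd):
        # First-order lag: the actual acceleration tracks the commanded acceleration.
        a_ego += DT * (a_cmd - a_ego) / TAU
        v_ego += DT * a_ego
        gap += DT * (v_lead - v_ego)
        return gap, v_ego, a_ego

    def stage_cost(gap, v_ego, v_lead, a_cmd, t_headway=1.5, w=(1.0, 0.5, 0.1)):
        gap_err = gap - t_headway * v_ego                 # deviation from a constant-time-headway gap
        return w[0] * gap_err ** 2 + w[1] * (v_lead - v_ego) ** 2 + w[2] * a_cmd ** 2

    state, cost = (30.0, 20.0, 0.0), 0.0                  # (gap [m], ego speed [m/s], ego accel [m/s^2])
    for _ in range(10):
        cost += stage_cost(state[0], state[1], v_lead=22.0, a_cmd=0.3)
        state = com_step(*state, v_lead=22.0, a_cmd=0.3)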
Modelling of the turbulent burning velocity based on Lagrangian statistics of propagating surfaces ; We propose a predictive model of the turbulent burning velocity $S_T$ in homogeneous isotropic turbulence (HIT) based on Lagrangian statistics of propagating surfaces. The propagating surfaces with a constant displacement speed are initially arranged on a plane, and they evolve in non-reacting HIT, behaving like the propagation of a planar premixed flame front. The universal constants in the model of $S_T$ characterize the enhancement of area growth of premixed flames by turbulence, and they are determined by Lagrangian statistics of propagating surfaces. The flame area is then modelled by the area of propagating surfaces at a truncation time. This truncation time signals the statistically stationary state of the evolutionary geometry of propagating surfaces, and it is modelled by an explicit expression using limiting conditions of very weak and strong turbulence. Another parameter in the model of $S_T$ characterizes the effect of fuel chemistry on $S_T$, and it is predetermined by very few available data points of $S_T$ from experiments or direct numerical simulation (DNS) in weak turbulence. The proposed model is validated using three DNS series of turbulent premixed flames with various fuels. The model prediction of $S_T$ generally agrees well with DNS in a wide range of premixed combustion regimes, and it captures the basic trends of $S_T$ in terms of the turbulence intensity, including the linear growth in weak turbulence and the 'bending effect' in strong turbulence.
Hierarchical Bayesian Model for Probabilistic Analysis of Electric Vehicle Battery Degradation ; This paper proposes a hierarchical Bayesian model for probabilistic estimation of electric vehicle battery capacity fade. Since the battery aging factors such as temperature, current, and state of charge are not fixed, and they change at different times, locations and with different users, deterministic models with constant parameters cannot accurately evaluate the battery capacity fade. Therefore, a probabilistic representation of the capacity fade, including uncertainties of the measurements or observations of the variables, can be a proper solution. We have developed a hierarchical Bayesian network model for the electric vehicle battery capacity fade considering multiple external variables. The mathematical expression of the model is derived based on Bayes' theorem, the probability distributions for all variables and their dependencies are carefully chosen, and the Metropolis-Hastings Markov Chain Monte Carlo sampling method is applied to generate the posterior distributions. The model is trained with 85 percent of the experimental data to obtain its unseen parameters and tested with the other 15 percent of the data to prove its accuracy. Also, three case studies for different drivers, different grid service frequencies, and different climates are explored to show the model's flexibility with different input data. The developed model needs training data for parameter tuning in different conditions. However, after training, it has more than 95 percent precision in estimating the battery capacity fade percentage.
Understanding Knowledge Distillation in Non-autoregressive Machine Translation ; Non-autoregressive machine translation (NAT) systems predict a sequence of output tokens in parallel, achieving substantial improvements in generation speed compared to autoregressive models. Existing NAT models usually rely on the technique of knowledge distillation, which creates the training data from a pre-trained autoregressive model for better performance. Knowledge distillation is empirically useful, leading to large gains in accuracy for NAT models, but the reason for this success has, as of yet, been unclear. In this paper, we first design systematic experiments to investigate why knowledge distillation is crucial to NAT training. We find that knowledge distillation can reduce the complexity of data sets and help NAT to model the variations in the output data. Furthermore, a strong correlation is observed between the capacity of an NAT model and the optimal complexity of the distilled data for the best translation quality. Based on these findings, we further propose several approaches that can alter the complexity of data sets to improve the performance of NAT models. We achieve state-of-the-art performance for NAT-based models, and close the gap with the autoregressive baseline on the WMT14 En-De benchmark.
Extending and Calibrating the Velocity-dependent One-Scale model for Cosmic Strings with One Thousand Field Theory Simulations ; Understanding the evolution and cosmological consequences of topological defect networks requires a combination of analytic modeling and numerical simulations. The canonical analytic model for defect network evolution is the Velocity-dependent One-Scale (VOS) model. For the case of cosmic strings, this has so far been calibrated using small numbers of Goto-Nambu and field theory simulations, in the radiation and matter eras, as well as in Minkowski spacetime. But the model is only as good as the available simulations, and it should be extended as further simulations become available. In previous work we presented a General Purpose Graphics Processing Unit implementation of the evolution of cosmological domain wall networks, and used it to obtain an improved VOS model for domain walls. Here we continue this effort, exploiting a more recent analogous code for local Abelian-Higgs string networks. The significant gains in speed afforded by this code enabled us to carry out 1032 field theory simulations of $512^3$ size, with 43 different expansion rates. This detailed exploration of the effects of the expansion rate on the network properties in turn enables a statistical separation of various dynamical processes affecting the evolution of the network. We thus extend and accurately calibrate the VOS model for cosmic strings, including separate terms for energy losses due to loop production and scalar/gauge radiation. By comparing this newly calibrated VOS model with the analogous one for domain walls, we quantitatively show that the energy loss mechanisms are different for the two types of defects.
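For reference, the unextended VOS model in its standard textbook form evolves the string correlation length $L$ and root-mean-square velocity $v$ as
$\frac{dL}{dt} = (1+v^2)HL + \frac{\tilde{c}}{2}v, \qquad \frac{dv}{dt} = (1-v^2)\left[\frac{k(v)}{L} - 2Hv\right],$
where $H$ is the Hubble parameter, $\tilde{c}$ the loop-chopping efficiency and $k(v)$ the momentum parameter; the calibration described above splits the single energy-loss term into separate loop-production and scalar/gauge radiation contributions, with coefficients fitted to the simulations.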
Methods for Stabilizing Models across Large Samples of Projects (with case studies on Predicting Defect and Project Health) ; Despite decades of research, SE lacks widely accepted models that offer precise quantitative stable predictions about what factors most influence software quality. This paper provides a promising result showing that such stable models can be generated using a new transfer learning framework called STABILIZER. Given a tree of recursively clustered projects (built using project metadata), STABILIZER promotes a model upwards if it performs best in the lower clusters, stopping when the promoted model performs worse than the models seen at a lower level. The number of models found by STABILIZER is minimal: one for defect prediction (756 projects) and less than a dozen for project health (1628 projects). Hence, via STABILIZER, it is possible to find a few projects which can be used for transfer learning and make conclusions that hold across hundreds of projects at a time. Further, the models produced in this manner offer predictions that perform as well as or better than the prior state of the art. To the best of our knowledge, STABILIZER is an order of magnitude faster than the prior state-of-the-art transfer learners which seek to find conclusion stability, and these case studies are the largest demonstration of the generalizability of quantitative predictions of project quality yet reported in the SE literature. In order to support open science, all our scripts and data are online at https://github.com/Anonymous633671/STABILIZER.
On the Cosmological Models with Matter Creation ; The matter creation model of Prigogine-Géhéniau-Gunzig-Nardone is revisited in terms of a redefined creation pressure which does not lead to irreversible adiabatic evolution at constant specific entropy. With the resulting freedom to choose a particular gas process, a flat FRWL cosmological model is proposed based on three input characteristics i a perfect fluid comprising an ideal gas, ii a quasiadiabatic polytropic process, and iii a particular rate of particle creation. Such a model leads to the description of the latetime acceleration of the expanding Universe with a natural transition from decelerating to accelerating regime. Only the Friedmann equations and the laws of thermodynamics are used and no assumption of a dark energy component is made. The model also allows the explicit determination as functions of time of all variables, including the entropy, the nonconserved specific entropy and the time the accelerating phase begins. A form of correspondence with dark energy models, quintessence in particular, is established via the Om diagnostics. Parallels with the concordance cosmological LambdaCDM model for the matterdominated epoch and the present epoch of accelerated expansion are also established via slight modifications of both models.
Assessing the global and local uncertainty in scientific evidence in the presence of model misspecification ; Scientists need to compare the support for models based on observed phenomena. The main goal of the evidential paradigm is to quantify the strength of evidence in the data for a reference model relative to an alternative model. This is done via an evidence function, such as Delta SIC, an estimator of the sample size scaled difference of divergences between the generating mechanism and the competing models. To use evidence, either for decision making or as a guide to the accumulation of knowledge, an understanding of the uncertainty in the evidence is needed. This uncertainty is well characterized by the standard statistical theory of estimation. Unfortunately, the standard theory breaks down if the models are misspecified, as is normally the case in scientific studies. We develop nonparametric bootstrap methodologies for estimating the sampling distribution of the evidence estimator under model misspecification. This sampling distribution allows us to determine how secure we are in our evidential statement. We characterize this uncertainty in the strength of evidence with two different types of confidence intervals, which we term global and local. We discuss how evidence uncertainty can be used to improve scientific inference and illustrate this with a reanalysis of the model identification problem in a prominent landscape ecology study (Grace and Keeley, 2006) using structural equations.
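As a concrete illustration of the bootstrap idea described above, the following sketch resamples a toy Delta SIC-style evidence statistic to obtain a percentile confidence interval. The evidence function, the two nested Gaussian models, and the 90% level are illustrative assumptions made here; they are not the estimator or the global and local intervals defined in the paper.

```python
import numpy as np

def sic(loglik, n_params, n_obs):
    # Schwarz information criterion: -2*loglik + n_params*log(n_obs)
    return -2.0 * loglik + n_params * np.log(n_obs)

def delta_sic(x):
    """Toy evidence statistic: reference model N(0, s) vs alternative N(mu, s)."""
    n = len(x)
    s0 = np.sqrt(np.mean(x ** 2))                        # MLE scale with mean fixed at 0
    ll0 = -0.5 * n * np.log(2 * np.pi * s0 ** 2) - n / 2
    s1 = x.std()                                          # MLE scale of the alternative
    ll1 = -0.5 * n * np.log(2 * np.pi * s1 ** 2) - n / 2
    return sic(ll0, 1, n) - sic(ll1, 2, n)                # > 0 favours the alternative

def bootstrap_interval(x, stat, n_boot=2000, level=0.90, seed=0):
    """Nonparametric bootstrap percentile interval for an evidence statistic."""
    rng = np.random.default_rng(seed)
    reps = np.array([stat(rng.choice(x, size=len(x), replace=True))
                     for _ in range(n_boot)])
    lo, hi = np.quantile(reps, [(1 - level) / 2, (1 + level) / 2])
    return stat(x), (lo, hi)

x = np.random.default_rng(1).normal(0.3, 1.0, size=200)   # toy data
est, (lo, hi) = bootstrap_interval(x, delta_sic)
print(f"Delta SIC = {est:.2f}, 90% bootstrap interval = ({lo:.2f}, {hi:.2f})")
```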
Adaptive Frequencylimited H2Model Order Reduction ; In this paper, we present an adaptive framework for constructing a pseudooptimal reduced model for the frequencylimited H2optimal model order reduction problem. We show that the frequencylimited pseudooptimal reducedorder model has an inherent property of monotonic decay in error if the interpolation points and tangential directions are selected appropriately. We also show that this property can be used to make an automatic selection of the order of the reduced model for an allowable tolerance in error. The proposed algorithm adaptively increases the order of the reduced model such that the frequencylimited H2norm error decays monotonically irrespective of the choice of interpolation points and tangential directions. The stability of the reducedorder model is also guaranteed. Additionally, it also generates the approximations of the frequencylimited system Gramians that monotonically approach the original solution. Further, we show that the lowrank alternating direction implicit iteration method for solving largescale frequencylimited Lyapunov equations implicitly performs frequencylimited pseudooptimal model order reduction. We consider two numerical examples to validate the theory presented in the paper.
Macross Urban Dynamics Modeling based on Metapath Guided CrossModal Embedding ; As the ongoing rapid urbanization takes place with an everincreasing speed, fully modeling urban dynamics becomes more and more challenging, but also a necessity for socioeconomic development. It is challenging because human activities and constructions are ubiquitous; urban landscape and life content change anywhere and anytime. It's crucial due to the fact that only uptodate urban dynamics can enable governors to optimize their city planning strategy and help individuals organize their daily lives in a more efficient way. Previous geographic topic model based methods attempt to solve this problem but suffer from high computational cost and memory consumption, limiting their scalability to city level applications. Also, strong prior assumptions make such models fail to capture certain patterns by nature. To bridge the gap, we propose Macross, a metapath guided embedding approach to jointly model location, time and text information. Given a dataset of geotagged social media posts, we extract and aggregate location and time and construct a heterogeneous information network using the aggregated space and time. Metapath2vec based approach is used to construct vector representations for times, locations and frequent words such that cooccurrence pairs of nodes are closer in latent space. The vector representations will be used to infer related time, locations or keywords for a user query. Experiments done on enormous datasets show our model can generate comparable if not better quality query results compared to state of the art models and outperform some cuttingedge models for activity recovery and classification.
Topology optimization of heat sinks for instantaneous chip cooling using a transient pseudo3D thermofluid model ; With the increasing power density of electronics components, the heat dissipation capacity of heat sinks gradually becomes a bottleneck. Many structural optimization methods, including topology optimization, have been widely used for heat sinks. Due to its high design freedom, topology optimization is suggested for the design of heat sinks using a transient pseudo3D thermofluid model to acquire better instantaneous thermal performance. The pseudo3D model is designed to reduce the computational cost and maintain an acceptable accuracy. The model relies on an artificial heat convection coefficient to couple two layers and establish the approximate relationship with the corresponding 3D model. In the model, a constant pressure drop and heat generation rate are considered. The material distribution is optimized to reduce the average temperature of the base plate at the prescribed terminal time. Furthermore, to reduce the intermediate density regions during the densitybased topology optimization procedure, a detailed analysis of interpolation functions is made and the penalty factors are chosen on this basis. Finally, considering the engineering application of the model, a practical model with a more powerful cooling medium and higher inlet pressure is built. The optimized design shows a better instantaneous thermal performance and provides a 66.7% pumping power reduction compared with the reference design.
A Multigrid Method for Efficiently Training Video Models ; Training competitive deep video models is an order of magnitude slower than training their counterpart image models. Slow training causes long research cycles, which hinders progress in video understanding research. Following standard practice for training image models, video model training assumes a fixed minibatch shape: a specific number of clips, frames, and spatial size. However, what is the optimal shape? High resolution models perform well, but train slowly. Low resolution models train faster, but they are inaccurate. Inspired by multigrid methods in numerical optimization, we propose to use variable minibatch shapes with different spatialtemporal resolutions that are varied according to a schedule. The different shapes arise from resampling the training data on multiple sampling grids. Training is accelerated by scaling up the minibatch size and learning rate when shrinking the other dimensions. We empirically demonstrate a general and robust grid schedule that yields a significant outofthebox training speedup without a loss in accuracy for different models I3D, nonlocal, SlowFast, datasets Kinetics, SomethingSomething, Charades, and training settings with and without pretraining, 128 GPUs or 1 GPU. As an illustrative example, the proposed multigrid method trains a ResNet50 SlowFast network 4.5x faster (wallclock time, same hardware) while also improving accuracy by 0.8% absolute on Kinetics400 compared to the baseline training method. Code is available online.
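A rough sketch of the kind of schedule the abstract describes, under assumptions made purely for illustration (the specific grids, cycle length, and linear learning-rate scaling are not taken from the paper): cycle through coarse-to-fine minibatch shapes and enlarge the batch and learning rate whenever the clip shape shrinks, so that per-step compute stays roughly constant.

```python
# Illustrative multigrid-style schedule for video training (toy values).
BASE_SHAPE = (16, 224, 224)   # frames, height, width at full resolution
BASE_BATCH = 8
BASE_LR = 0.01

GRIDS = [            # (frame_scale, spatial_scale), coarsest grid first
    (0.25, 0.5),
    (0.5, 0.5),
    (0.5, 1.0),
    (1.0, 1.0),
]

def grid_config(step, steps_per_grid=1000):
    """Return the (frames, H, W) clip shape, batch size and learning rate for a step."""
    f_scale, s_scale = GRIDS[(step // steps_per_grid) % len(GRIDS)]
    frames = max(1, int(BASE_SHAPE[0] * f_scale))
    size = int(BASE_SHAPE[1] * s_scale)
    # Keep frames*H*W*batch roughly constant: scale the batch up when the
    # clip gets smaller, and scale the learning rate linearly with the batch.
    shrink = (BASE_SHAPE[0] * BASE_SHAPE[1] ** 2) / (frames * size ** 2)
    batch = int(BASE_BATCH * shrink)
    lr = BASE_LR * batch / BASE_BATCH
    return (frames, size, size), batch, lr

for step in (0, 1000, 2000, 3000):
    print(step, grid_config(step))
```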
A biintegrated model for coupling lotsizing and cuttingstock problems ; In this paper, a framework that addresses the core of the papermaking process is proposed, starting from the production of jumbos and ending with the paper sheets used in daily life. The first phase of the process is modelled according to a lotsizing problem, where the quantities of jumbos are determined in order to meet the demand of the entire chain. The second phase follows a onedimensional cuttingstock formulation, where these jumbos are cut into smaller reels of predetermined lengths. Some of these are intended to fulfil a portfolio of orders, while others are used as raw material for the third phase of the process, when the reels are cut into sheets with specific dimensions and demands, following a twodimensional cuttingstock problem. The model is called the BiIntegrated Model, since it is composed of two integrated models. The heuristic method developed uses the Simplex Method with column generation for the two cuttingstock phases and applies the RelaxandFix technique to obtain the roundedinteger solution. Computational experiments comparing the solutions of the BiIntegrated Model to other strategies of modelling the production process indicate average cost gains reaching 26.63%. Additional analyses of the model behaviour under several situations resulted in remarkable findings.
Daily Data Assimilation of a Hydrologic Model Using the Ensemble Kalman Filter ; Accurate runoff forecasting is crucial for reservoir operators as it allows optimized water management, flood control and hydropower generation. Land surface models in mountainous regions depend on climatic inputs such as precipitation, temperature and solar radiation to model the water and energy dynamics and produce runoff as output. With the rapid development of cheap electronics applied in various systems, such as Wireless Sensor Networks WSNs, satellite and airborne technologies, the prospect of practically measuring spatial Snow Water Equivalent in a dense temporal scale is increasing. We present a framework for updating the Precipitation Runoff Modeling System PRMS with Snow Water Equivalent SWE maps and runoff measurements on a daily timescale based on the Ensemble Kalman Filter ENKF. Results show that by assimilating SWE daily, the modeled SWE gets updated accordingly; however, no improvement is observed at the runoff model output. Instead, a deterioration consistently occurs. Augmenting the state space with model parameters and runoff model output allows for filter update with previous day measured runoff using the joint stateparameter method, and showed a considerable improvement in the daily runoff output of up to a 60% reduction in RMSE for the wet water year 2011 relative to the no assimilation scenario, and an improvement of up to 28% compared to a naive autoregressive AR1 filter. Additional simulation years showed consistent improvement compared to no assimilation, but varied relative to the previous day autoregressive forecast during the dry year 2014.
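A minimal sketch of the joint state-parameter EnKF analysis step referred to above, assuming a single scalar runoff observation and an ensemble whose members stack SWE, one model parameter, and the simulated runoff; the toy numbers and the trivial observation operator are illustrative and are not taken from PRMS.

```python
import numpy as np

def enkf_update(ensemble, obs_of, obs_value, obs_err, rng):
    """One stochastic EnKF analysis step on an augmented ensemble.

    ensemble : (n_members, n_state) array; the state vector is augmented with
               model parameters and the simulated runoff (joint state-parameter).
    obs_of   : callable mapping one member to its predicted observation.
    """
    n = ensemble.shape[0]
    predicted = np.array([obs_of(m) for m in ensemble])       # (n,)
    A = ensemble - ensemble.mean(axis=0)                      # state anomalies
    d = predicted - predicted.mean()                          # observation anomalies
    cov_xy = A.T @ d / (n - 1)                                # cross-covariance
    var_y = d @ d / (n - 1) + obs_err ** 2
    gain = cov_xy / var_y                                     # Kalman gain, shape (n_state,)
    perturbed = obs_value + obs_err * rng.standard_normal(n)  # perturbed observations
    return ensemble + np.outer(perturbed - predicted, gain)

# Toy augmented state: [SWE (mm), melt-rate parameter, runoff (mm/day)]
rng = np.random.default_rng(42)
ens = np.column_stack([rng.normal(100.0, 20.0, 50),
                       rng.normal(2.0, 0.5, 50),
                       rng.normal(5.0, 2.0, 50)])
updated = enkf_update(ens, obs_of=lambda m: m[2], obs_value=7.0, obs_err=0.5, rng=rng)
print("prior mean:", ens.mean(axis=0), "posterior mean:", updated.mean(axis=0))
```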
Think Locally, Act Globally Federated Learning with Local and Global Representations ; Federated learning is a method of training models on private data distributed over multiple devices. To keep device data private, the global model is trained by only communicating parameters and updates which poses scalability challenges for large models. To this end, we propose a new federated learning algorithm that jointly learns compact local representations on each device and a global model across all devices. As a result, the global model can be smaller since it only operates on local representations, reducing the number of communicated parameters. Theoretically, we provide a generalization analysis which shows that a combination of local and global models reduces both variance in the data as well as variance across device distributions. Empirically, we demonstrate that local models enable communicationefficient training while retaining performance. We also evaluate on the task of personalized mood prediction from realworld mobile data where privacy is key. Finally, local models handle heterogeneous data from new devices, and learn fair representations that obfuscate protected attributes such as race, age, and gender.
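A toy NumPy sketch of the communication pattern described above, under assumptions made purely for illustration (linear models, a squared loss, naive gradient steps): each device keeps a private projection that plays the role of the local representation, and only the small global head is sent to and averaged by the server. This is not the paper's algorithm, objective, or fairness mechanism.

```python
import numpy as np

def device_round(W_local, w_global, X, y, lr=0.05, steps=50):
    """One round on one device: the private projection W_local never leaves
    the device; only the small global head w_global is communicated."""
    w_g = w_global.copy()
    for _ in range(steps):
        H = X @ W_local                        # local representation (n, k)
        err = H @ w_g - y                      # residuals for a squared loss
        w_g -= lr * H.T @ err / len(y)
        W_local -= lr * X.T @ np.outer(err, w_g) / len(y)
    return W_local, w_g

rng = np.random.default_rng(0)
d, k, n_devices = 10, 3, 4
w_global = np.zeros(k)
local_proj = [0.1 * rng.normal(size=(d, k)) for _ in range(n_devices)]
data = [(rng.normal(size=(100, d)), rng.normal(size=100)) for _ in range(n_devices)]

for _ in range(5):                             # communication rounds
    heads = []
    for i, (X, y) in enumerate(data):
        local_proj[i], head = device_round(local_proj[i], w_global, X, y)
        heads.append(head)
    w_global = np.mean(heads, axis=0)          # server averages only the global head
print("global head after 5 rounds:", w_global)
```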
Unifying Deep Local and Global Features for Image Search ; Image retrieval is the problem of searching an image database for items that are similar to a query image. To address this task, two main types of image representations have been studied: global and local image features. In this work, our key contribution is to unify global and local features into a single deep model, enabling accurate retrieval with efficient feature extraction. We refer to the new model as DELG, standing for DEep Local and Global features. We leverage lessons from recent feature learning work and propose a model that combines generalized mean pooling for global features and attentive selection for local features. The entire network can be learned endtoend by carefully balancing the gradient flow between two heads, requiring only imagelevel labels. We also introduce an autoencoderbased dimensionality reduction technique for local features, which is integrated into the model, improving training efficiency and matching performance. Comprehensive experiments show that our model achieves stateoftheart image retrieval on the Revisited Oxford and Paris datasets, and stateoftheart singlemodel instancelevel recognition on the Google Landmarks dataset v2. Code and models are available at https://github.com/tensorflow/models/tree/master/research/delf.
From constant to variable density inverse extended Born modelling ; For quantitative seismic imaging, iterative leastsquares reverse time migration is the recommended approach. The existence of an inverse of the forward modelling operator would considerably reduce the number of required iterations. In the context of the extended model, such a pseudoinverse exists, built as a weighted version of the adjoint and accounting for the deconvolution, geometrical spreading and uneven illumination. The application of the pseudoinverse Born modelling is based on constant density acoustic media, which is a limiting factor for practical applications. To consider density perturbation, we propose and investigate two approaches. The first one is a generalization of a recent study proposing to recover acoustic perturbations from the angledependent response of the pseudoinverse Born modelling operator. The new version is based on a weighted leastsquares objective function. The method not only provides more robust results, but also offers the flexibility to include constraints in the objective function in order to reduce the parameter crosstalk. We also propose an alternative approach based on Taylor expansion that does not require any Radon transform. Numerical examples, based on simple models and the Marmousi2 model and using correct and incorrect background models for the variable density Born modelling, verify the effectiveness of the weighted leastsquares method when compared with the other two approaches. The Taylor expansion approach appears to contain too many artifacts for successful applicability.
Expected Information Maximization Using the IProjection for Mixture Density Estimation ; Modelling highly multimodal data is a challenging problem in machine learning. Most algorithms are based on maximizing the likelihood, which corresponds to the Momentprojection of the data distribution to the model distribution. The Mprojection forces the model to average over modes it cannot represent. In contrast, the I (information) projection ignores such modes in the data and concentrates on the modes the model can represent. Such behavior is appealing whenever we deal with highly multimodal data where modelling single modes correctly is more important than covering all the modes. Despite this advantage, the Iprojection is rarely used in practice due to the lack of algorithms that can efficiently optimize it based on data. In this work, we present a new algorithm called Expected Information Maximization EIM for computing the Iprojection solely based on samples for general latent variable models, where we focus on Gaussian mixture models and Gaussian mixtures of experts. Our approach applies a variational upper bound to the Iprojection objective which decomposes the original objective into single objectives for each mixture component as well as for the coefficients, allowing an efficient optimization. Similar to GANs, our approach employs discriminators but uses a more stable optimization procedure, using a tight upper bound. We show that our algorithm is much more effective in computing the Iprojection than recent GAN approaches and we illustrate the effectiveness of our approach for modelling multimodal behavior on two pedestrian and traffic prediction datasets.
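For reference, the two projections contrasted above can be written in standard KL-divergence notation (this is textbook notation, not necessarily the paper's):

```latex
% Moment (M-) projection, as in maximum likelihood, versus the information
% (I-) projection targeted by EIM, for data distribution p and model q_theta.
\begin{align}
  \text{M-projection:}\quad
  \theta^{*} &= \arg\min_{\theta}\, \mathrm{KL}\!\left(p \,\|\, q_{\theta}\right)
             = \arg\min_{\theta}\, \mathbb{E}_{x\sim p}\!\left[\log \tfrac{p(x)}{q_{\theta}(x)}\right],\\
  \text{I-projection:}\quad
  \theta^{*} &= \arg\min_{\theta}\, \mathrm{KL}\!\left(q_{\theta} \,\|\, p\right)
             = \arg\min_{\theta}\, \mathbb{E}_{x\sim q_{\theta}}\!\left[\log \tfrac{q_{\theta}(x)}{p(x)}\right],
\end{align}
% so the M-projection must spread mass over every mode of p (mode averaging),
% while the I-projection may ignore modes the model cannot represent (mode seeking).
```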
Distinguishing freezing and thawing dark energy models through measurements of the finestructure constant ; Mapping the behaviour of dark energy is a pressing task for observational cosmology. Phenomenological classification divides dynamical dark energy models into freezing and thawing, depending on whether the dark energy equation of state is approaching or moving away from $w \equiv p/\rho = -1$. Moreover, in realistic dynamical dark energy models the dynamical degree of freedom is expected to couple to the electromagnetic sector, leading to variations of the finestructure constant alpha. We discuss the feasibility of distinguishing between the freezing and thawing classes of models with current and forthcoming observational facilities and using a parametrisation of the dark energy equation of state, which can have either behaviour, introduced by Mukhanov as a fiducial paradigm. We illustrate how freezing and thawing models lead to different redshift dependencies of alpha, and use a combination of current astrophysical observations and local experiments to constrain this class of models, improving the constraints on the key coupling parameter by more than a factor of two, despite considering a more extended parameter space than the one used in previous studies. We also briefly discuss the improvements expected from future facilities and comment on the practical limitations of this class of parametrisations. In particular, we show that sufficiently sensitive data can distinguish between freezing and thawing models, at least if one assumes that the relevant parameter space does not include phantom dark energy models.
Predicting the Impact of Electric Field Stimulation in a Detailed Computational Model of Cortical Tissue ; Neurostimulation using weak electric fields has generated excitement in recent years due to its potential as a medical intervention. However, study of this stimulation modality has been hampered by inconsistent results and large variability within and between studies. In order to begin addressing this variability, we need to properly characterise the impact of the current on the underlying neuron populations. Our aim is to develop and test a computational model capable of capturing the impact of electric field stimulation on networks of neurons. We construct a cortical tissue model with distinct layers and explicit neuron morphologies. We then apply a model of electrical stimulation and carry out multiple test case simulations. The cortical slice model is compared to experimental literature and shown to capture the main features of the electrophysiological response to stimulation. Namely, the model showed 1 a similar level of depolarisation in individual pyramidal neurons, 2 acceleration of intrinsic oscillations, and 3 retention of the spatial profile of oscillations in different layers. We then apply alternative electric fields to demonstrate how the model can capture differences in neuronal responses to the electric field. We demonstrate that the tissue response is dependent on layer depth, the angle of the apical dendrite relative to the field, and stimulation strength. We present publicly available computational modelling software that predicts the neuron network population response to electric field stimulation.
Unsupervised Gaze Prediction in Egocentric Videos by Energybased Surprise Modeling ; Egocentric perception has grown rapidly with the advent of immersive computing devices. Human gaze prediction is an important problem in analyzing egocentric videos and has primarily been tackled through either saliencybased modeling or highly supervised learning. We quantitatively analyze the generalization capabilities of supervised, deep learning models on the egocentric gaze prediction task on unseen, outofdomain data. We find that their performance is highly dependent on the training data and is restricted to the domains specified in the training annotations. In this work, we tackle the problem of jointly predicting human gaze points and temporal segmentation of egocentric videos without using any training data. We introduce an unsupervised computational model that draws inspiration from cognitive psychology models of event perception. We use Grenander's pattern theory formalism to represent spatialtemporal features and model surprise as a mechanism to predict gaze fixation points. Extensive evaluation on two publicly available datasets, GTEA and GTEA+, shows that the proposed model can significantly outperform all unsupervised baselines and some supervised gaze prediction baselines. Finally, we show that the model can also temporally segment egocentric videos with a performance comparable to more complex, fully supervised deep learning baselines.
Random Partition Models for Microclustering Tasks ; Traditional Bayesian random partition models assume that the size of each cluster grows linearly with the number of data points. While this is appealing for some applications, this assumption is not appropriate for other tasks such as entity resolution, modeling of sparse networks, and DNA sequencing tasks. Such applications require models that yield clusters whose sizes grow sublinearly with the total number of data points the microclustering property. Motivated by these issues, we propose a general class of random partition models that satisfy the microclustering property with wellcharacterized theoretical properties. Our proposed models overcome major limitations in the existing literature on microclustering models, namely a lack of interpretability, identifiability, and full characterization of model asymptotic properties. Crucially, we drop the classical assumption of having an exchangeable sequence of data points, and instead assume an exchangeable sequence of clusters. In addition, our framework provides flexibility in terms of the prior distribution of cluster sizes, computational tractability, and applicability to a large number of microclustering tasks. We establish theoretical properties of the resulting class of priors, where we characterize the asymptotic behavior of the number of clusters and of the proportion of clusters of a given size. Our framework allows a simple and efficient Markov chain Monte Carlo algorithm to perform statistical inference. We illustrate our proposed methodology on the microclustering task of entity resolution, where we provide a simulation study and real experiments on survey panel data.
COVID19 Analytics Of Contagion On Inhomogeneous Random Social Networks ; Motivated by the need for novel robust approaches to modelling the Covid19 epidemic, this paper treats a population of N individuals as an inhomogeneous random social network IRSN. The nodes of the network represent different types of individuals and the edges represent significant social relationships. An epidemic is pictured as a contagion process that changes daily, triggered on day 0 by a seed infection introduced into the population. Individuals' social behaviour and health status are assumed to be random, with probability distributions that vary with their type. First a formulation and analysis is given for the basic SI susceptibleinfective network contagion model, which focusses on the cumulative number of people that have been infected. The main result is an analytical formula valid in the large N limit for the state of the system on day t in terms of the initial conditions. The formula involves only onedimensional integration. Next, more realistic SIR and SEIR network models, including removed R and exposed E classes, are formulated. These models also lead to analytical formulas that generalize the results for the SI network model. The framework can be easily adapted for analysis of different kinds of public health interventions, including vaccination, social distancing and quarantine. The formulas can be implemented numerically by an algorithm that efficiently incorporates the fast Fourier transform. Finally a number of open questions and avenues of investigation are suggested, such as the framework's relation to ordinary differential equation SIR models and agent based contagion models that are more commonly used in real world epidemic modelling.
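The analytical large-N formulas themselves are not reproduced here, but the toy simulation below illustrates the daily SI cascade on an inhomogeneous random network that those formulas describe; the two node types, edge probabilities, and transmission probability are assumptions invented for this sketch, not parameters from the paper.

```python
import numpy as np

def si_cascade(adj, p_transmit, seed_node, days, rng):
    """Toy daily SI contagion: each day, every infected node independently
    infects each susceptible neighbour with probability p_transmit.
    Returns the cumulative number of infected individuals per day."""
    n = adj.shape[0]
    infected = np.zeros(n, dtype=bool)
    infected[seed_node] = True
    counts = [int(infected.sum())]
    for _ in range(days):
        exposure = adj[:, infected].sum(axis=1)            # infected neighbours per node
        p_today = 1.0 - (1.0 - p_transmit) ** exposure     # chance of being infected today
        newly = (~infected) & (rng.random(n) < p_today)
        infected |= newly
        counts.append(int(infected.sum()))
    return counts

# Inhomogeneous toy network: two node types with different contact rates
rng = np.random.default_rng(3)
n = 500
node_type = rng.integers(0, 2, n)                          # 0 = low contact, 1 = high contact
p_edge = np.where(node_type[:, None] + node_type[None, :] == 2, 0.02, 0.005)
adj = rng.random((n, n)) < p_edge
adj = np.triu(adj, 1)
adj = adj | adj.T                                          # undirected, no self-loops
print(si_cascade(adj, p_transmit=0.05, seed_node=0, days=30, rng=rng))
```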
One Model to Recognize Them All Marginal Distillation from NER Models with Different Tag Sets ; Named entity recognition NER is a fundamental component in the modern language understanding pipeline. Public NER resources such as annotated data and model services are available in many domains. However, given a particular downstream application, there is often no single NER resource that supports all the desired entity types, so users must leverage multiple resources with different tag sets. This paper presents a marginal distillation MARDI approach for training a unified NER model from resources with disjoint or heterogeneous tag sets. In contrast to recent works, MARDI merely requires access to pretrained models rather than the original training datasets. This flexibility makes it easier to work with sensitive domains like healthcare and finance. Furthermore, our approach is general enough to integrate with different NER architectures, including local models e.g., BiLSTM and global models e.g., CRF. Experiments on two benchmark datasets show that MARDI performs on par with a strong marginal CRF baseline, while being more flexible in the form of required NER resources. MARDI also sets a new state of the art on the progressive NER task, significantly outperforming the previous stateoftheart model.
Dynamical modelling of disc vertical structure in superthin galaxy UGC 7321 in braneworld gravity An MCMC study ; Low surface brightness LSBs superthins constitute classic examples of very latetype galaxies, with their disc dynamics strongly regulated by their dark matter halos. In this work we consider a gravitational origin of dark matter in the brane world scenario, where the higher dimensional Weyl stress term projected onto the 3brane acts as the source of dark matter. In the context of the braneworld model, this dark matter is referred to as the 'dark mass'. This model has been successful in reproducing the rotation curves of several low surface brightness and high surface brightness galaxies. Therefore it is interesting to study the prospect of this model in explaining the vertical structure of galaxies which has not been explored in the literature so far. Using our 2component model of gravitationallycoupled stars and gas in the external force field of this 'dark mass', we fit the observed scale heights of stellar and atomic hydrogen HI gas of superthin galaxy UGC 7321 using the Markov Chain Monte Carlo approach. We find that the observed scaleheights of UGC 7321 can be successfully modelled in the context of the braneworld scenario. In addition, the model predicted rotation curve also matches the observed one. The implications on the model parameters are discussed.
The Tajima heterochronous ncoalescent inference from heterochronously sampled molecular data ; The observed sequence variation at a locus informs about the evolutionary history of the sample and past population size dynamics. The Kingman coalescent is used in a generative model of molecular sequence variation to infer evolutionary parameters. However, it is well understood that inference under this model does not scale well with sample size. Here, we build on recent work based on a lower resolution coalescent process, the Tajima coalescent, to model longitudinal samples. While the Kingman coalescent models the ancestry of labeled individuals, the heterochronous Tajima coalescent models the ancestry of individuals labeled by their sampling time. We propose a new inference scheme for the reconstruction of effective population size trajectories based on this model with the potential to improve computational efficiency. Modeling of longitudinal samples is necessary for applications, e.g. ancient DNA and RNA from rapidly evolving pathogens like viruses, and statistically desirable, offering variance reduction and parameter identifiability. We propose an efficient algorithm to calculate the likelihood and employ a Bayesian nonparametric procedure to infer the population size trajectory. We provide a new MCMC sampler to explore the space of heterochronous Tajima's genealogies and model parameters. We compare our procedure with stateoftheart methodologies in simulations and applications.
Improving Robot DualSystem Motor Learning with Intrinsically Motivated MetaControl and LatentSpace Experience Imagination ; Combining modelbased and modelfree learning systems has been shown to improve the sample efficiency of learning to perform complex robotic tasks. However, dualsystem approaches fail to consider the reliability of the learned model when it is applied to make multiplestep predictions, resulting in a compounding of prediction errors and performance degradation. In this paper, we present a novel dualsystem motor learning approach where a metacontroller arbitrates online between modelbased and modelfree decisions based on an estimate of the local reliability of the learned model. The reliability estimate is used in computing an intrinsic feedback signal, encouraging actions that lead to data that improves the model. Our approach also integrates arbitration with imagination where a learned latentspace model generates imagined experiences, based on its local reliability, to be used as additional training data. We evaluate our approach against baseline and stateoftheart methods on learning visionbased robotic grasping in simulation and real world. The results show that our approach outperforms the compared methods and learns nearoptimal grasping policies in dense and sparsereward environments.
A noncooperative metamodeling game for automated thirdparty calibrating, validating, and falsifying constitutive laws with parallelized adversarial attacks ; The evaluation of constitutive models, especially for highrisk and highregret engineering applications, requires efficient and rigorous thirdparty calibration, validation and falsification. While there are numerous efforts to develop paradigms and standard procedures to validate models, difficulties may arise due to the sequential, manual and often biased nature of the commonly adopted calibration and validation processes, thus slowing down data collections, hampering the progress towards discovering new physics, increasing expenses and possibly leading to misinterpretations of the credibility and application ranges of proposed models. This work attempts to introduce concepts from game theory and machine learning techniques to overcome many of these existing difficulties. We introduce an automated metamodeling game where two competing AI agents systematically generate experimental data to calibrate a given constitutive model and to explore its weakness, in order to improve experiment design and model robustness through competition. The two agents automatically search for the Nash equilibrium of the metamodeling game in an adversarial reinforcement learning framework without human intervention. By capturing all possible design options of the laboratory experiments into a single decision tree, we recast the design of experiments as a game of combinatorial moves that can be resolved through deep reinforcement learning by the two competing players. Our adversarial framework emulates idealized scientific collaborations and competitions among researchers to achieve a better understanding of the application range of the learned material laws and prevent misinterpretations caused by conventional AIbased thirdparty validation.
Learning a Formula of Interpretability to Learn Interpretable Formulas ; Many risksensitive applications require Machine Learning ML models to be interpretable. Attempts to obtain interpretable models typically rely on tuning, by trialanderror, hyperparameters of model complexity that are only loosely related to interpretability. We show that it is instead possible to take a metalearning approach: an ML model of nontrivial Proxies of Human Interpretability PHIs can be learned from human feedback, then this model can be incorporated within an ML training process to directly optimize for interpretability. We show this for evolutionary symbolic regression. We first design and distribute a survey aimed at finding a link between features of mathematical formulas and two established PHIs, simulatability and decomposability. Next, we use the resulting dataset to learn an ML model of interpretability. Lastly, we query this model to estimate the interpretability of evolving solutions within biobjective genetic programming. We perform experiments on five synthetic and eight realworld symbolic regression problems, comparing to the traditional use of solution size minimization. The results show that the use of our model leads to formulas that are, for the same level of accuracyinterpretability tradeoff, either significantly more or equally accurate. Moreover, the formulas are also arguably more interpretable. Given the very positive results, we believe that our approach represents an important stepping stone for the design of nextgeneration interpretable evolutionary ML algorithms.
A Phase Transition in Arrow's Theorem ; Arrow's Theorem concerns a fundamental problem in social choice theory: given the individual preferences of members of a group, how can they be aggregated to form rational group preferences? Arrow showed that in an election between three or more candidates, there are situations where any voting rule satisfying a small list of natural fairness axioms must produce an apparently irrational intransitive outcome. Furthermore, quantitative versions of Arrow's Theorem in the literature show that when voters choose rankings in an i.i.d. fashion, the outcome is intransitive with nonnegligible probability. It is natural to ask if such a quantitative version of Arrow's Theorem holds for noni.i.d. models. To answer this question, we study Arrow's Theorem under a natural noni.i.d. model of voters inspired by canonical models in statistical physics; indeed, a version of this model was previously introduced by Raffaelli and Marsili in the physics literature. This model has a parameter, temperature, that prescribes the correlation between different voters. We show that the behavior of Arrow's Theorem in this model undergoes a striking phase transition: in the entire high temperature regime of the model, a Quantitative Arrow's Theorem holds showing that the probability of paradox for any voting rule satisfying the axioms is nonnegligible; this is tight because the probability of paradox under pairwise majority goes to zero when approaching the critical temperature, and becomes exponentially small in the number of voters beyond it. We prove this occurs in another natural model of correlated voters and conjecture this phenomenon is quite general.
Robust Question Answering Through Subpart Alignment ; Current textual question answering models achieve strong performance on indomain test sets, but often do so by fitting surfacelevel patterns in the data, so they fail to generalize to outofdistribution settings. To make a more robust and understandable QA system, we model question answering as an alignment problem. We decompose both the question and context into smaller units based on offtheshelf semantic representations here, semantic roles, and align the question to a subgraph of the context in order to find the answer. We formulate our model as a structured SVM, with alignment scores computed via BERT, and we can train endtoend despite using beam search for approximate inference. Our explicit use of alignments allows us to explore a set of constraints with which we can prohibit certain types of bad model behavior arising in crossdomain settings. Furthermore, by investigating differences in scores across different potential answers, we can seek to understand what particular aspects of the input lead the model to choose the answer without relying on posthoc explanation techniques. We train our model on SQuAD v1.1 and test it on several adversarial and outofdomain datasets. The results show that our model is more robust crossdomain than the standard BERT QA model, and constraints derived from alignment scores allow us to effectively trade off coverage and accuracy.
Localized active learning of Gaussian process state space models ; The performance of learningbased control techniques crucially depends on how effectively the system is explored. While most exploration techniques aim to achieve a globally accurate model, such approaches are generally unsuited for systems with unbounded state spaces. Furthermore, a globally accurate model is not required to achieve good performance in many common control applications, e.g., local stabilization tasks. In this paper, we propose an active learning strategy for Gaussian process state space models that aims to obtain an accurate model on a bounded subset of the stateaction space. Our approach aims to maximize the mutual information of the exploration trajectories with respect to a discretization of the region of interest. By employing model predictive control, the proposed technique integrates information collected during exploration and adaptively improves its exploration strategy. To enable computational tractability, we decouple the choice of most informative data points from the model predictive control optimization step. This yields two optimization problems that can be solved in parallel. We apply the proposed method to explore the state space of various dynamical systems and compare our approach to a commonly used entropybased exploration strategy. In all experiments, our method yields a better model within the region of interest than the entropybased method.
Introducing PyCross PyCloudy Rendering Of Shape Software for pseudo 3D ionisation modelling of nebulae ; Research into the processes of photoionised nebulae plays a significant part in our understanding of stellar evolution. It is extremely difficult to visually represent or model ionised nebulae, requiring astronomers to employ sophisticated modelling code to derive temperature, density and chemical composition. Existing codes are available, but they often require steep learning curves and produce models derived from mathematical functions. In this article we will introduce PyCross PyCloudy Rendering Of Shape Software. This is a pseudo 3D modelling application that generates photoionisation models of optically thin nebulae, created using the Shape software. Currently PyCross has been used for novae and planetary nebulae, and it can be extended to Active Galactic Nuclei or any other type of photoionised axisymmetric nebulae. Functionality, an operational overview, and a scientific pipeline will be described with scenarios where PyCross has been adopted for the novae V5668 Sagittarii 2015 and V4362 Sagittarii 1994, and the planetary nebula LoTr1. Unlike the aforementioned photoionisation codes, this application does not require any coding experience, nor the need to derive complex mathematical models, instead utilising select features from Cloudy/PyCloudy and Shape. The software was developed using a formal software development lifecycle, written in Python and will work without the need to install any development environments or additional python packages. This application, Shape models and PyCross archive examples are freely available to students, academics and the research community on GitHub for download: https://github.com/karolfitzgerald/PyCrossOSXApp.
Impacts of dark energy on constraining neutrino mass after Planck 2018 ; Considering the mass splittings of three active neutrinos, we investigate how the nature of dark energy affects the cosmological constraints on the total neutrino mass $\sum m_\nu$ using the latest cosmological observations. In this paper, some typical dark energy models, including $\Lambda$CDM, wCDM, CPL, and HDE models, are discussed. In the analysis, we also consider the effects from the neutrino mass hierarchies, i.e., the degenerate hierarchy DH, the normal hierarchy NH, and the inverted hierarchy IH. We employ the current cosmological observations to do the analysis, including the Planck 2018 temperature and polarization power spectra, the baryon acoustic oscillations BAO, the type Ia supernovae SNe, and the Hubble constant H0 measurement. In the $\Lambda$CDM+$\sum m_\nu$ model, we obtain the upper limits of the neutrino mass: $\sum m_\nu < 0.123$ eV (DH), $\sum m_\nu < 0.156$ eV (NH), and $\sum m_\nu < 0.185$ eV (IH) at the 95% C.L., using the Planck+BAO+SNe data combination. For the wCDM+$\sum m_\nu$ model and the CPL+$\sum m_\nu$ model, larger upper limits of $\sum m_\nu$ are obtained compared to those of the $\Lambda$CDM+$\sum m_\nu$ model. The most stringent constraint on the neutrino mass, $\sum m_\nu < 0.080$ eV (DH), is derived in the HDE+$\sum m_\nu$ model. In addition, we find that the inclusion of the local measurement of the Hubble constant in the data combination leads to tighter constraints on the total neutrino mass in all these dark energy models.
Process Knowledge Driven Change Point Detection for Automated Calibration of Discrete Event Simulation Models Using Machine Learning ; Initial development and subsequent calibration of discrete event simulation models for complex systems require accurate identification of dynamically changing process characteristics. Existing data driven change point methods DDCPD assume changes are extraneous to the system, thus cannot utilize available process knowledge. This work proposes a unified framework for processdriven multivariate change point detection PDCPD by combining change point detection models with machine learning and processdriven simulation modeling. The PDCPD, after initializing with DDCPD's change points, uses simulation models to generate system level outputs as timeseries data streams which are then used to train neural network models to predict system characteristics and change points. The accuracy of the predictive models measures the likelihood that the actual process data conforms to the simulated change points in system characteristics. PDCPD iteratively optimizes change points by repeating simulation and predictive model building steps until the set of change points with the maximum likelihood is identified. Using an emergency department case study, we show that PDCPD significantly improves change point detection accuracy over DDCPD estimates and is able to detect actual change points.
Tropical and Extratropical Cyclone Detection Using Deep Learning ; Extracting valuable information from large sets of diverse meteorological data is a timeintensive process. Machine learning methods can help improve both speed and accuracy of this process. Specifically, deep learning image segmentation models using the UNet structure perform faster and can identify areas missed by more restrictive approaches, such as expert handlabeling and a priori heuristic methods. This paper discusses four different stateoftheart UNet models designed for detection of tropical and extratropical cyclone Regions Of Interest ROI from two separate input sources: total precipitable water output from the Global Forecasting System GFS model and water vapor radiance images from the Geostationary Operational Environmental Satellite GOES. These models are referred to as IBTrACSGFS, HeuristicGFS, IBTrACSGOES, and HeuristicGOES. All four UNets are fast information extraction tools and perform with a ROI detection accuracy ranging from 80% to 99%. These are additionally evaluated with the Dice and Tversky Intersection over Union IoU metrics, having Dice coefficient scores ranging from 0.51 to 0.76 and Tversky coefficients ranging from 0.56 to 0.74. The extratropical cyclone UNet model performed 3 times faster than the comparable heuristic model used to detect the same ROI. The UNets were specifically selected for their capabilities in detecting cyclone ROI beyond the scope of the training labels. These machine learning models identified more ambiguous and active ROI missed by the heuristic model and handlabeling methods commonly used in generating realtime weather alerts, having a potentially direct impact on public safety.
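For reference, the Dice and Tversky overlap scores quoted above have the standard definitions below, where P is the predicted ROI mask and G the labelled mask; the weights alpha and beta are the usual Tversky false-positive and false-negative penalties, and the particular values used in the paper are not assumed here.

```latex
\begin{align}
  \mathrm{Dice}(P,G) &= \frac{2\,\lvert P \cap G\rvert}{\lvert P\rvert + \lvert G\rvert},\\
  \mathrm{Tversky}_{\alpha,\beta}(P,G) &=
      \frac{\lvert P \cap G\rvert}
           {\lvert P \cap G\rvert + \alpha\,\lvert P \setminus G\rvert + \beta\,\lvert G \setminus P\rvert},
\end{align}
% with alpha = beta = 0.5 recovering Dice and alpha = beta = 1 recovering IoU.
```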
Scalable PrivacyPreserving Distributed Learning ; In this paper, we address the problem of privacypreserving distributed learning and the evaluation of machinelearning models by analyzing it in the widespread MapReduce abstraction that we extend with privacy constraints. We design SPINDLE Scalable PrivacypreservINg Distributed LEarning, the first distributed and privacypreserving system that covers the complete ML workflow by enabling the execution of a cooperative gradientdescent and the evaluation of the obtained model and by preserving data and model confidentiality in a passiveadversary model with up to N-1 colluding parties. SPINDLE uses multiparty homomorphic encryption to execute parallel highdepth computations on encrypted data without significant overhead. We instantiate SPINDLE for the training and evaluation of generalized linear models on distributed datasets and show that it is able to accurately (on par with nonsecure centrallytrained models) and efficiently (due to a multilevel parallelization of the computations) train models that require a high number of iterations on large input data with thousands of features, distributed among hundreds of data providers. For instance, it trains a logisticregression model on a dataset of one million samples with 32 features distributed among 160 data providers in less than three minutes.
Noise Robust TTS for Low Resource Speakers using Pretrained Model and Speech Enhancement ; With the popularity of deep neural network, speech synthesis task has achieved significant improvements based on the endtoend encoderdecoder framework in the recent days. More and more applications relying on speech synthesis technology have been widely used in our daily life. Robust speech synthesis model depends on high quality and customized data which needs lots of collecting efforts. It is worth investigating how to take advantage of lowquality and low resource voice data which can be easily obtained from the Internet for usage of synthesizing personalized voice. In this paper, the proposed endtoend speech synthesis model uses both speaker embedding and noise representation as conditional inputs to model speaker and noise information respectively. Firstly, the speech synthesis model is pretrained with both multispeaker clean data and noisy augmented data; then the pretrained model is adapted on noisy lowresource new speaker data; finally, by setting the clean speech condition, the model can synthesize the new speaker's clean voice. Experimental results show that the speech generated by the proposed approach has better subjective evaluation results than the method directly finetuning pretrained multispeaker speech synthesis model with denoised new speaker data.
Defense for Blackbox Attacks on Antispoofing Models by SelfSupervised Learning ; Highperformance antispoofing models for automatic speaker verification ASV have been widely used to protect ASV by identifying and filtering spoofing audio that is deliberately generated by texttospeech, voice conversion, audio replay, etc. However, it has been shown that highperformance antispoofing models are vulnerable to adversarial attacks. Adversarial attacks, which are indistinguishable from the original data but result in incorrect predictions, are dangerous for antispoofing models, and it is not in dispute that we should detect them at any cost. To explore this issue, we propose to employ Mockingjay, a selfsupervised learning based model, to protect antispoofing models against adversarial attacks in the blackbox scenario. Selfsupervised learning models are effective in improving downstream task performance like phone classification or ASR. However, their effect in defending against adversarial attacks has not been explored yet. In this work, we explore the robustness of selfsupervised learned highlevel representations by using them in the defense against adversarial attacks. A layerwise noise to signal ratio LNSR is proposed to quantify and measure the effectiveness of deep models in countering adversarial noise. Experimental results on the ASVspoof 2019 dataset demonstrate that highlevel representations extracted by Mockingjay can prevent the transferability of adversarial examples, and successfully counter blackbox attacks.
Semantic Loss Application to Entity Relation Recognition ; Usually, entity relation recognition systems either use a pipelined model that treats the entity tagging and relation identification as separate tasks or a joint model that simultaneously identifies the relation and entities. This paper compares these two general approaches for entity relation recognition. Stateoftheart entity relation recognition systems are built using deep recurrent neural networks which often do not capture the symbolic knowledge or the logical constraints in the problem. The main contribution of this paper is an endtoend neural model for joint entity relation extraction which incorporates a novel loss function. This novel loss function encodes the constraint information in the problem to guide the model training effectively. We show that the addition of this loss function to the existing typical loss functions has a positive impact on the performance of the models. This model is truly endtoend, requires no feature engineering and is easily extensible. Extensive experimentation has been conducted to evaluate the significance of capturing symbolic knowledge for natural language understanding. Models using this loss function are observed to outperform their counterparts and converge faster. Experimental results in this work suggest the use of this methodology for other language understanding applications.
Roses Are Red, Violets Are Blue... but Should VQA Expect Them To ; Models for Visual Question Answering VQA are notorious for their tendency to rely on dataset biases, as the large and unbalanced diversity of questions and concepts involved tends to prevent models from learning to reason, leading them to perform educated guesses instead. In this paper, we claim that the standard evaluation metric, which consists in measuring the overall indomain accuracy, is misleading. Since questions and concepts are unbalanced, this tends to favor models which exploit subtle training set statistics. Alternatively, naively introducing artificial distribution shifts between train and test splits is also not completely satisfying. First, the shifts do not reflect realworld tendencies, resulting in unsuitable models; second, since the shifts are handcrafted, trained models are specifically designed for this particular setting, and do not generalize to other configurations. We propose the GQAOOD benchmark designed to overcome these concerns: we measure and compare accuracy over both rare and frequent questionanswer pairs, and argue that the former is better suited to the evaluation of reasoning abilities, which we experimentally validate with models trained to more or less exploit biases. In a largescale study involving 7 VQA models and 3 bias reduction techniques, we also experimentally demonstrate that these models fail to address questions involving infrequent concepts and provide recommendations for future directions of research.
Data Augmentation for Training Dialog Models Robust to Speech Recognition Errors ; Speechbased virtual assistants, such as Amazon Alexa, Google assistant, and Apple Siri, typically convert users' audio signals to text data through automatic speech recognition ASR and feed the text to downstream dialog models for natural language understanding and response generation. The ASR output is errorprone; however, the downstream dialog models are often trained on errorfree text data, making them sensitive to ASR errors during inference time. To bridge the gap and make dialog models more robust to ASR errors, we leverage an ASR error simulator to inject noise into the errorfree text data, and subsequently train the dialog models with the augmented data. Compared to other approaches for handling ASR errors, such as using ASR lattice or endtoend methods, our data augmentation approach does not require any modification to the ASR or downstream dialog models; our approach also does not introduce any additional latency during inference time. We perform extensive experiments on benchmark data and show that our approach improves the performance of downstream dialog models in the presence of ASR errors, and it is particularly effective in the lowresource situations where there are constraints on model size or the training data is scarce.
Constraints on Multicomponent Dark Energy from Cosmological Observations ; Dark energy DE plays an important role in the expansion history of our universe. But we have only gained limited knowledge about its nature and properties after decades of study. In most numerical studies, DE is usually considered as a dynamical whole. Actually, multicomponent DE models can also explain the accelerating expansion of our universe, which is accepted theoretically but lacks numerical study. We try to study the multicomponent DE models from observation by constructing wnCDM models. The total energy density of DE is separated equally into n (n = 2, 3, 5) parts and every part has a constant EOS $w_i$ (i = 1, 2, ..., n). We modify the Friedmann equation and the parameterized postFriedmann description of DE, then put constraints on the $w_i$ from Planck 2018 TT,TE,EE+lowE+lensing, BAO data and PANTHEON samples. The multicomponent DE models are favoured if any wnCDM model is preferred by observational data and there is no overlap between the highest and lowest values of the $w_i$. We find the data combination supports the wnCDM model when n is small and the w2CDM model is slightly preferred, by $\Delta\chi^2_\mathrm{min} = \Delta\mathrm{AIC} = \Delta\mathrm{BIC} = 2.48$, over the CPL model, but the largest value of $w_i$ overlaps the smallest one. With larger n, the maximum and minimum of the $w_i$ do not overlap with each other, but $\chi^2_\mathrm{min}$ and AIC also increase. In brief, we find no obvious evidence that DE is composed of different components.
Hidden Markov Models Applied To Intraday Momentum Trading With Side Information ; A Hidden Markov Model for intraday momentum trading is presented which specifies a latent momentum state responsible for generating the observed securities' noisy returns. Existing momentum trading models suffer from timelagging caused by the delayed frequency response of digital filters. Timelagging results in a momentum signal of the wrong sign, when the market changes trend direction. A key feature of this state space formulation is that no such lagging occurs, allowing for accurate shifts in signal sign at market change points. The number of latent states in the model is estimated using three techniques: cross validation, penalized likelihood criteria and simulationbased model selection for the marginal likelihood. All three techniques suggest either 2 or 3 hidden states. Model parameters are then found using BaumWelch and Markov Chain Monte Carlo, whilst assuming a single discretized univariate Gaussian distribution for the emission matrix. Often a momentum trader will want to condition their trading signals on additional information. To reflect this, learning is also carried out in the presence of side information. Two sets of side information are considered, namely a ratio of realized volatilities and intraday seasonality. It is shown that splines can be used to capture statistically significant relationships from this information, allowing returns to be predicted. An Input Output Hidden Markov Model is used to incorporate these univariate predictive signals into the transition matrix, presenting a possible solution for dealing with the signal combination problem. Bayesian inference is then carried out to predict the securities' t+1 return using the forward algorithm. Simple modifications to the current framework allow for a fully nonparametric model with asynchronous prediction.
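A minimal sketch of the forward-algorithm prediction step described above, for a discrete-emission HMM with two latent momentum states and three discretised return symbols; the transition matrix, emission matrix, and return discretisation are toy assumptions, not fitted values from the paper.

```python
import numpy as np

def forward_filter(obs, pi, A, B):
    """Normalised forward algorithm: returns P(state_T | obs_1..T).

    pi : (K,) initial state distribution
    A  : (K, K) transition matrix, A[i, j] = P(state j at t+1 | state i at t)
    B  : (K, M) emission matrix, B[k, m] = P(symbol m | state k)
    """
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()          # normalise each step to avoid underflow
    return alpha

def predict_next(alpha, A, B):
    """One-step-ahead distribution over the next return symbol."""
    return (alpha @ A) @ B

# Toy two-state (down-momentum / up-momentum) model over 3 return buckets
pi = np.array([0.5, 0.5])
A = np.array([[0.95, 0.05],
              [0.05, 0.95]])
B = np.array([[0.6, 0.3, 0.1],        # down state favours negative returns
              [0.1, 0.3, 0.6]])       # up state favours positive returns
returns = [0, 0, 1, 2, 2, 2, 1, 2]    # discretised intraday returns
alpha = forward_filter(returns, pi, A, B)
print("filtered state probabilities:", alpha)
print("P(next return symbol):", predict_next(alpha, A, B))
```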
Modeling and Controlling the Spread of Epidemic with Various Social and Economic Scenarios ; We propose a dynamical model for describing the spread of epidemics. This model is an extension of the SIQR (susceptible-infected-quarantined-recovered) and SIRP (susceptible-infected-recovered-pathogen) models used earlier to describe various scenarios of epidemic spreading. As compared to the basic SIR model, our model takes into account two possible routes of contagion transmission: direct, from the infected compartment to the susceptible compartment, and indirect, via some intermediate medium or fomites. Transmission rates are estimated in terms of average distances between the individuals in selected social environments and characteristic time spans for which the individuals stay in each of these environments. We also introduce a collective economic resource, associated with the average amount of money or income per individual, to describe the socio-economic interplay between the spreading process and the resource available to infected individuals. The epidemic-resource coupling is assumed to be of activation type, with the recovery rate governed by an Arrhenius-like law. Our model brings the advantage of building various control strategies to mitigate the effect of the epidemic and can be applied, in particular, to modeling the spread of COVID-19.
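As a hedged illustration of the kind of compartmental dynamics described above (not the paper's exact equations), the sketch below integrates a toy SIQR-style system in which the recovery rate depends on a resource variable through an Arrhenius-like factor; all functional forms and constants are assumptions for demonstration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

def siqr_resource(t, y, beta=0.3, q=0.05, gamma0=0.1, E_a=1.0):
    """Toy SIQR dynamics with a resource-dependent (Arrhenius-like) recovery rate."""
    S, I, Q, R, M = y                                # M: economic resource per individual
    gamma = gamma0 * np.exp(-E_a / max(M, 1e-6))     # activation-type coupling
    dS = -beta * S * I
    dI =  beta * S * I - q * I - gamma * I
    dQ =  q * I - gamma * Q
    dR =  gamma * (I + Q)
    dM =  0.02 * (1 - I - Q) - 0.05 * M              # produced by the healthy, dissipated
    return [dS, dI, dQ, dR, dM]

sol = solve_ivp(siqr_resource, (0, 200), [0.99, 0.01, 0.0, 0.0, 1.0], dense_output=True)
```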
Efficient Execution of Quantized Deep Learning Models: A Compiler Approach ; A growing number of applications implement predictive functions using deep learning models, which require heavy use of compute and memory. One popular technique for increasing resource efficiency is 8-bit integer quantization, in which 32-bit floating point numbers (fp32) are represented using shorter 8-bit integer numbers. Although deep learning frameworks such as TensorFlow, TFLite, MXNet, and PyTorch enable developers to quantize models with only a small drop in accuracy, they are not well suited to executing quantized models on a variety of hardware platforms. For example, TFLite is optimized to run inference on ARM CPU edge devices but does not have efficient support for Intel CPUs and Nvidia GPUs. In this paper, we address the challenges of executing quantized deep learning models on diverse hardware platforms by proposing an augmented compiler approach. A deep learning compiler such as Apache TVM can enable the efficient execution of models from various frameworks on various targets. Many deep learning compilers today, however, are designed primarily for fp32 computation and cannot optimize a pre-quantized INT8 model. To address this issue, we created a new dialect called Quantized Neural Network (QNN) that extends the compiler's internal representation with a quantization context. With this quantization context, the compiler can generate efficient code for pre-quantized models on various hardware platforms. As implemented in Apache TVM, we observe that the QNN-augmented deep learning compiler achieves speedups of 2.35x, 2.15x, 1.35x and 1.40x on Intel Xeon Cascade Lake CPUs, Nvidia Tesla T4 GPUs, and ARM Raspberry Pi 3 and Pi 4, respectively, against well-optimized fp32 execution, and comparable performance to the state-of-the-art framework-specific solutions.
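To make the 8-bit quantization mentioned above concrete, here is a minimal sketch of affine (scale and zero-point) int8 quantization and dequantization of a tensor; this illustrates the arithmetic that pre-quantized models encode, not the QNN dialect or TVM code generation itself.

```python
import numpy as np

def quantize_int8(x):
    """Affine (asymmetric) quantization of an fp32 tensor to int8."""
    span = float(x.max() - x.min())
    scale = span / 255.0 if span > 0 else 1.0
    zero_point = int(round(-float(x.min()) / scale)) - 128
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Recover an fp32 approximation of the original tensor."""
    return (q.astype(np.float32) - zero_point) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, s, zp = quantize_int8(x)
x_hat = dequantize_int8(q, s, zp)    # close to x, up to quantization error
```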
Towards the Classification of Tachyon-Free Models From Tachyonic Ten-Dimensional Heterotic String Vacua ; Recently it was proposed that ten-dimensional tachyonic string vacua may serve as starting points for the construction of viable four-dimensional phenomenological string models which are tachyon free. This is achieved by projecting out the tachyons in the four-dimensional models using projectors other than the projector which is utilised in the supersymmetric models and those of the $SO(16)\times SO(16)$ heterotic string. We continue the exploration of this class of models by developing systematic computerised tools for their classification, the analysis of their tachyonic and massless spectra, as well as analysis of their partition functions and vacuum energy. We explore a randomly generated space of $2\times 10^9$ string vacua in this class and find that tachyon-free models occur with probability $\sim 5\times 10^{-3}$, and of those, phenomenologically inclined $SO(10)$ vacua with $a_{00} = N_b^0 - N_f^0 = 0$, i.e. an equal number of fermionic and bosonic massless states, occur with frequency $\sim 2\times 10^{-6}$. Extracting larger numbers of phenomenological vacua therefore requires the adaptation of fertility conditions that we discuss, which significantly increase the frequency of tachyon-free models. Our results suggest that spacetime supersymmetry may not be a necessary ingredient in phenomenological string models, even at the Planck scale.
Webs of integrable theories ; We present an intuitive diagrammatic representation of a new class of integrable $\sigma$-models. It is shown that to any given diagram corresponds an integrable theory that couples $N$ WZW models with a certain number of each of the following four fundamental integrable models: the PCM and the YB model, both based on a group $G$, the isotropic $\sigma$-model on the symmetric space $G/H$, and the YB model on the symmetric space $G/H$. To each vertex of a diagram we assign the matrix of one of the aforementioned fundamental integrable theories. Any two vertices may be connected by a number of lines having an orientation and carrying an integer level $k_i$. Each of these lines is associated with an asymmetrically gauged WZW model at an arbitrary level $k_i$. Gauge invariance of the full action is translated to level conservation at the vertices. We also show how to read off the corresponding $\sigma$-model actions directly from the diagrams. The most generic of these models depends on at least $n^2+1$ parameters, where $n$ is the total number of vertices/fundamental integrable models. Finally, we discuss the case where the level conservation at the vertices is relaxed and the case where the deformation matrix is not diagonal in the space of integrable models.
Learning dynamics for improving control of overactuated flying systems ; Overactuated omnidirectional flying vehicles are capable of generating force and torque in any direction, which is important for applications such as contact-based industrial inspection. This comes at the price of an increase in model complexity. These vehicles usually have non-negligible, repetitive dynamics that are hard to model, such as the aerodynamic interference between the propellers. This makes high-performance trajectory tracking with a model-based controller difficult. This paper presents an approach that combines a data-driven and a first-principle model of the system actuation and uses it to improve the controller. In a first step, the first-principle model errors are learned offline using a Gaussian Process (GP) regressor. At runtime, the first-principle model and the GP regressor are used jointly to obtain control commands. This is formulated as an optimization problem, which avoids the ambiguous solutions present in a standard inverse model of overactuated systems by using only forward models. The approach is validated using a tilt-arm overactuated omnidirectional flying vehicle performing attitude trajectory tracking. The results show that with our proposed method, the attitude trajectory error is reduced by 32% on average compared to a nominal PID controller.
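A minimal sketch of the two steps described above, under simplifying assumptions: offline, a GP is fit to the residual between measured wrench and a hypothetical first-principle actuation model; online, commands are found by optimizing over the combined forward model rather than inverting it. The allocation matrix, residual shape, and dimensions are all illustrative, not the vehicle's actual model.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor

def nominal_wrench(u):
    """Hypothetical first-principle actuation model: actuator commands -> wrench."""
    B = np.eye(6)                                   # placeholder allocation matrix
    return B @ u

# Offline: learn the residual between measured wrench and the nominal model.
rng = np.random.default_rng(0)
U_train = rng.normal(size=(200, 6))
W_meas = np.array([nominal_wrench(u) + 0.1 * np.sin(u) for u in U_train])
residuals = W_meas - np.array([nominal_wrench(u) for u in U_train])
gp = GaussianProcessRegressor().fit(U_train, residuals)

# Online: choose commands whose *forward* prediction matches the desired wrench,
# instead of inverting an ambiguous overactuated allocation.
def command_for(w_des):
    cost = lambda u: np.sum((nominal_wrench(u) + gp.predict(u[None, :])[0] - w_des) ** 2)
    return minimize(cost, np.zeros(6)).x

u_cmd = command_for(np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.2]))
```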
Incremental Calibration of Architectural Performance Models with Parametric Dependencies ; Architecture-based Performance Prediction (AbPP) allows the performance of systems to be evaluated and what-if questions to be answered without measurements for all alternatives. A difficulty when creating models is that Performance Model Parameters (PMPs), such as resource demands, loop iteration numbers and branch probabilities, depend on various influencing factors like input data, the hardware used and the applied workload. To enable a broad range of what-if questions, Performance Models (PMs) need predictive power beyond what has been measured to calibrate them. Thus, PMPs need to be parametrized over the influencing factors that may vary. Existing approaches allow the estimation of parametrized PMPs by measuring the complete system. They are therefore too costly to be applied frequently, up to after each code change, and they do not preserve manual changes to the model when recalibrating. In this work, we present the Continuous Integration of Performance Models (CIPM), which incrementally extracts and calibrates the performance model, including parametric dependencies. CIPM responds to source code changes by updating the PM and adaptively instrumenting the changed parts. To allow AbPP, CIPM estimates the parametrized PMPs using measurements generated by performance tests or by executing the system in production, together with statistical analysis, e.g., regression analysis and decision trees. Additionally, our approach responds to production changes (e.g., load or deployment changes) and calibrates the usage and deployment parts of the PMs accordingly. For the evaluation, we used two case studies. The evaluation results show that we were able to calibrate the PM incrementally and accurately.
Weighted hypersoft configuration model ; Maximum entropy null models of networks come in different flavors that depend on the type of constraints under which entropy is maximized. If the constraints are on degree sequences or distributions, we are dealing with configuration models. If the degree sequence is constrained exactly, the corresponding microcanonical ensemble of random graphs with a given degree sequence is the configuration model per se. If the degree sequence is constrained only on average, the corresponding grand-canonical ensemble of random graphs with a given expected degree sequence is the soft configuration model. If the degree sequence is not fixed at all but randomly drawn from a fixed distribution, the corresponding hypercanonical ensemble of random graphs with a given degree distribution is the hypersoft configuration model, a more adequate description of dynamic real-world networks in which degree sequences are never fixed but degree distributions often stay stable. Here, we introduce the hypersoft configuration model of weighted networks. The main contribution is a particular version of the model with power-law degree and strength distributions, and superlinear scaling of strengths with degrees, mimicking the properties of some real-world networks. As a by-product, we generalize the notions of sparse graphons and their entropy to weighted networks.
Non-Homogeneous Poisson Process Intensity Modeling and Estimation using Measure Transport ; Non-homogeneous Poisson processes are used in a wide range of scientific disciplines, ranging from the environmental sciences to the health sciences. Often, the central object of interest in a point process is the underlying intensity function. Here, we present a general model for the intensity function of a non-homogeneous Poisson process using measure transport. The model is built from a flexible bijective mapping that maps from the underlying intensity function of interest to a simpler reference intensity function. We enforce bijectivity by modeling the map as a composition of multiple simple bijective maps, and show that the model exhibits an important approximation property. Estimation of the flexible mapping is accomplished within an optimization framework, wherein computations are efficiently done using recent technological advances in deep learning and a graphics processing unit. Although we find that intensity function estimates obtained with our method are not necessarily superior to those obtained using conventional methods, the modeling representation brings with it other advantages such as facilitated point process simulation and uncertainty quantification. Modeling point processes in higher dimensions is also facilitated using our approach. We illustrate the use of our model on both simulated data, and a real data set containing the locations of seismic events near Fiji since 1964.
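As a hedged one-dimensional illustration of the idea above, the sketch below expresses an intensity as a simple reference intensity pushed through a bijective map (a change of variables), and simulates the resulting non-homogeneous Poisson process by thinning. The map here is a fixed toy function standing in for the learned, composed transport map.

```python
import numpy as np

def reference_intensity(x):
    return np.full_like(x, 2.0)                     # simple homogeneous reference

def transport_map(x):
    """Hypothetical bijective map on [0, 10]; a learned map would replace this."""
    return 10.0 * (x / 10.0) ** 2

def intensity(x):
    """Push the reference intensity through the map (change of variables)."""
    dTdx = 2.0 * x / 10.0                           # derivative of the toy map
    return reference_intensity(transport_map(x)) * dTdx

def simulate_nhpp(lmax=4.0, T=10.0, rng=np.random.default_rng(0)):
    """Simulate the NHPP on [0, T] by thinning; lmax must dominate the intensity."""
    n = rng.poisson(lmax * T)
    candidates = rng.uniform(0, T, size=n)
    keep = rng.uniform(0, lmax, size=n) < intensity(candidates)
    return np.sort(candidates[keep])

events = simulate_nhpp()
```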
Constructed measures and causal inference: towards a new model of measurement for psychosocial constructs ; Psychosocial constructs can only be assessed indirectly, and measures are typically formed by a combination of indicators that are thought to relate to the construct. Reflective and formative measurement models offer different conceptualizations of the relation between the indicators and what is sometimes conceived of as a univariate latent variable, supposed to correspond in some way to the construct. It is argued that the empirical implications of reflective and formative models will often be violated by data, since the causally relevant constituents will generally be multivariate, not univariate. These empirical implications can be formally tested, but factor analysis is not adequate to do so. It is argued that formative models misconstrue the relationship between the constructed measures and the underlying reality by which causal processes operate, but that reflective models misconstrue the nature of the underlying reality itself by typically presuming that the constituents of it that are causally efficacious are unidimensional. The ensuing problems arising from these misconstruals are discussed. A causal interpretation is proposed of associations between constructed measures and various outcomes that is applicable to both reflective and formative models and is applicable even if the usual assumptions of these models are violated. An outline for a new model of the process of measure construction is put forward. Discussion is given to the practical implications of these observations and proposals for the provision of definitions, the selection of items, item-by-item analyses, the construction of measures, and the interpretation of the associations of these measures with subsequent outcomes.
Multicharged TeV scale scalars and fermions in the framework of a radiative seesaw model ; Explaining the tiny neutrino masses and nonzero mixings has been one of the key motivations for going beyond the framework of the Standard Model (SM). We discuss a collider-testable model for generating neutrino masses and mixings via a radiative seesaw mechanism. The fact that the model does not require any additional symmetry to forbid tree-level seesaws makes its collider phenomenology interesting. The model includes multicharged fermions/scalars at the TeV scale to realize the Weinberg operator at the 1-loop level. After deriving the constraints on the model parameters resulting from the neutrino oscillation data as well as from the upper bound on the absolute neutrino mass scale, we discuss the production, decay and resulting collider signatures of these TeV scale fermions/scalars at the Large Hadron Collider (LHC). We consider both Drell-Yan and photoproduction. The bounds from the neutrino data indicate the possible presence of a long-lived multicharged particle (MCP) in this model. We obtain bounds on these long-lived MCP masses from the ATLAS search for an abnormally large ionization signature. When the TeV scale fermions/scalars undergo prompt decay, we focus on the 4-lepton final states and obtain bounds from different ATLAS 4-lepton searches. We also propose a 4-lepton event selection criterion designed to enhance the signal-to-background ratio in the context of this model.
Deep Neural Networks with Koopman Operators for Modeling and Control of Autonomous Vehicles ; Autonomous driving technologies have received notable attention in the past decades. In autonomous driving systems, identifying a precise dynamical model for motion control is nontrivial due to the strong nonlinearity and uncertainty in vehicle dynamics. Recent efforts have resorted to machine learning techniques for building vehicle dynamical models, but the generalization ability and interpretability of existing methods still need to be improved. In this paper, we propose a data-driven vehicle modeling approach based on deep neural networks with an interpretable Koopman operator. The main advantage of using the Koopman operator is to represent the nonlinear dynamics in a linear lifted feature space. In the proposed approach, a deep learning-based extended dynamic mode decomposition algorithm is presented to learn a finite-dimensional approximation of the Koopman operator. Furthermore, a data-driven model predictive controller with the learned Koopman model is designed for path tracking control of autonomous vehicles. Simulation results in a high-fidelity CarSim environment show that our approach exhibits high modeling precision over a wide operating range and outperforms previously developed methods in terms of modeling performance. Path tracking tests of the autonomous vehicle are also performed in the CarSim environment and the results show the effectiveness of the proposed approach.
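A minimal sketch of (non-deep) extended dynamic mode decomposition with a fixed, hand-picked dictionary of observables: a least-squares fit of linear dynamics in the lifted space. In the paper the dictionary is learned by a neural network, which this toy illustration omits; the data and lift below are placeholders.

```python
import numpy as np

def lift(x):
    """Hand-picked dictionary of observables (a learned network would replace this)."""
    return np.array([x[0], x[1], x[0] * x[1], x[0] ** 2, x[1] ** 2, 1.0])

def edmd(X, U, X_next):
    """Least-squares fit of lifted linear dynamics  z' = A z + B u."""
    Z      = np.array([lift(x) for x in X]).T            # (n_lift, N)
    Z_next = np.array([lift(x) for x in X_next]).T
    ZU     = np.vstack([Z, U.T])                          # stack lifted state and input
    AB     = Z_next @ np.linalg.pinv(ZU)
    return AB[:, :Z.shape[0]], AB[:, Z.shape[0]:]          # A, B

# Toy trajectory data: states X, inputs U, successor states X_next.
X      = np.random.randn(500, 2)
U      = np.random.randn(500, 1)
X_next = X + 0.1 * np.hstack([X[:, 1:2], U])
A, B   = edmd(X, U, X_next)
z_pred = A @ lift(X[0]) + B @ U[0]                         # one-step lifted prediction
```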
Bridging the COVID-19 Data and the Epidemiological Model using a Time-Varying Parameter SIRD Model ; This paper extends the canonical SIRD model of epidemiology to allow for time-varying parameters, enabling real-time measurement of the stance of the COVID-19 pandemic. Time variation in the model parameters is captured using the generalized autoregressive score modelling structure designed for the typically daily count data related to the pandemic. The resulting specification permits a flexible yet parsimonious model structure with a very low computational cost. This is especially crucial at the onset of the pandemic when data are scarce and uncertainty is abundant. Full-sample results show that countries including the US, Brazil and Russia are still not able to contain the pandemic, with the US having the worst performance. Furthermore, Iran and South Korea are likely to experience a second wave of the pandemic. A real-time exercise shows that the proposed structure delivers timely and precise information on the current stance of the pandemic ahead of competitors that use a rolling window. This, in turn, translates into accurate short-term predictions of the active cases. We further modify the model to allow for unreported cases. The results suggest that the effects of the presence of these cases on the estimation results diminish towards the end of the sample with the increasing number of tests.
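As a rough illustration of a discrete-time SIRD recursion with a time-varying infection rate, the sketch below replaces the paper's score-driven (GAS) updating with a simple exogenous drift of the rate; all rates and population numbers are illustrative assumptions.

```python
import numpy as np

def sird_path(T=100, N=1e7, I0=100, gamma=0.1, nu=0.01):
    """Discrete SIRD recursion with a (toy) time-varying infection rate beta_t."""
    S, I, R, D = N - I0, float(I0), 0.0, 0.0
    beta = 0.4
    out = []
    for t in range(T):
        beta = 0.95 * beta + 0.05 * 0.15          # toy drift toward a lower rate
        new_inf  = beta * S * I / N
        new_rec  = gamma * I
        new_dead = nu * I
        S -= new_inf
        I += new_inf - new_rec - new_dead
        R += new_rec
        D += new_dead
        out.append((S, I, R, D, beta))
    return np.array(out)

traj = sird_path()    # columns: S, I, R, D and the evolving infection rate
```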
Probabilistic Prediction of Geomagnetic Storms and the $K_{\rm p}$ Index ; Geomagnetic activity is often described using summary indices to summarize the likelihood of space weather impacts, as well as when parameterizing space weather models. The geomagnetic index $K_{\rm p}$ in particular is widely used for these purposes. Current state-of-the-art forecast models provide deterministic $K_{\rm p}$ predictions using a variety of methods, including empirically-derived functions, physics-based models, and neural networks, but do not provide uncertainty estimates associated with the forecast. This paper provides a sample methodology to generate a 3-hour-ahead $K_{\rm p}$ prediction with uncertainty bounds and, from this, a probabilistic geomagnetic storm forecast. Specifically, we have used a two-layered architecture to separately predict storm ($K_{\rm p}\geq 5$) and non-storm cases. As solar wind-driven models are limited in their ability to predict the onset of transient-driven activity, we also introduce a model variant using solar X-ray flux to assess whether simple models including proxies for solar activity can improve the predictions of geomagnetic storm activity with lead times longer than the L1-to-Earth propagation time. By comparing the performance of these models, we show that including operationally-available information about solar irradiance enhances the ability of predictive models to capture the onset of geomagnetic storms, and that this can be achieved while also enabling probabilistic forecasts.
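To make the probabilistic-forecast idea concrete, here is a minimal sketch of turning a predicted mean and standard deviation for $K_{\rm p}$ into a storm probability under a Gaussian assumption; the distributional form is an illustrative choice, not the paper's exact procedure.

```python
from scipy.stats import norm

def storm_probability(kp_mean, kp_std, threshold=5.0):
    """P(Kp >= threshold) for a Gaussian predictive distribution."""
    return 1.0 - norm.cdf(threshold, loc=kp_mean, scale=kp_std)

# Example: a 3-hour-ahead prediction of Kp = 4.2 +/- 0.8 yields
print(storm_probability(4.2, 0.8))   # about a 16% storm probability
```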
Stellar Characterization of Keck HIRES Spectra with The Cannon ; To accurately interpret the observed properties of exoplanets, it is necessary to first obtain a detailed understanding of host star properties. However, physical models that analyze stellar properties on a per-star basis can become computationally intractable for sufficiently large samples. Furthermore, these models are limited by the wavelength coverage of available spectra. We combine previously derived spectral properties from the Spectroscopic Properties of Cool Stars (SPOCS) catalog (Brewer et al. 2016) with generative modeling using The Cannon to produce a model capable of deriving stellar parameters ($\log g$, $T_{\rm eff}$, and $v\sin i$) and 15 elemental abundances (C, N, O, Na, Mg, Al, Si, Ca, Ti, V, Cr, Mn, Fe, Ni, and Y) for stellar spectra observed with Keck Observatory's High Resolution Echelle Spectrometer (HIRES). We demonstrate the high accuracy and precision of our model, which takes just $\sim$3 seconds to classify each star, through cross-validation with pre-labeled spectra from the SPOCS sample. Our trained model, which takes continuum-normalized template spectra as its inputs, is publicly available at https://github.com/malenarice/keckspec. Finally, we interpolate our spectra and employ the same modeling scheme to recover labels for 477 stars using archival stellar spectra obtained prior to Keck's 2004 detector upgrade, demonstrating that our interpolated model can successfully predict stellar labels for different spectrographs that have (1) sufficiently similar systematics and (2) a wavelength range that substantially overlaps with that of the post-2004 HIRES spectra.
Numerical evidence for many-body localization in two and three dimensions ; Disorder and interactions can lead to the breakdown of statistical mechanics in certain quantum systems, a phenomenon known as many-body localization (MBL). Much of the phenomenology of MBL emerges from the existence of $\ell$-bits, a set of conserved quantities that are quasi-local and binary (i.e., possess only $\pm 1$ eigenvalues). While MBL and $\ell$-bits are known to exist in one-dimensional systems, their existence in dimensions greater than one is a key open question. To tackle this question, we develop an algorithm that can find approximate binary $\ell$-bits in arbitrary dimensions by adaptively generating a basis of operators in which to represent the $\ell$-bit. We use the algorithm to study four models: the one-, two-, and three-dimensional disordered Heisenberg models and the two-dimensional disordered hard-core Bose-Hubbard model. For all four of the models studied, our algorithm finds high-quality $\ell$-bits at large disorder strength, and rapid qualitative changes in the distributions of $\ell$-bits in particular ranges of disorder strengths, suggesting the existence of MBL transitions. These transitions in the one-dimensional Heisenberg model and two-dimensional Bose-Hubbard model coincide well with past estimates of the critical disorder strengths in these models, which further validates the evidence of MBL phenomenology in the other two- and three-dimensional models we examine. In addition to finding MBL behavior in higher dimensions, our algorithm can be used to probe MBL in various geometries and dimensionalities.
PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction ; Modeling 3D humans accurately and robustly from a single image is very challenging, and the key to such an ill-posed problem is the 3D representation of the human models. To overcome the limitations of regular 3D representations, we propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit function. In our PaMIR-based reconstruction framework, a novel deep neural network is proposed to regularize the free-form deep implicit function using the semantic features of the parametric model, which improves the generalization ability under the scenarios of challenging poses and various clothing topologies. Moreover, a novel depth-ambiguity-aware training loss is further integrated to resolve depth ambiguities and enable successful surface detail reconstruction with imperfect body reference. Finally, we propose a body reference optimization method to improve the parametric model estimation accuracy and to enhance the consistency between the parametric model and the implicit function. With the PaMIR representation, our framework can be easily extended to multi-image input scenarios without the need for multi-camera calibration and pose synchronization. Experimental results demonstrate that our method achieves state-of-the-art performance for image-based 3D human reconstruction in the cases of challenging poses and clothing types.
Informational properties for Einstein-Maxwell-Dilaton Gravity ; We study information quantities, including the holographic entanglement entropy (HEE), mutual information (MI) and entanglement of purification (EoP), in the Gubser-Rocha model. The remarkable property of this model is the zero entropy density of the ground state, in terms of which we expect to extract novel, even singular, informational properties in the zero-temperature limit. Surprisingly, we do not observe any singular behavior of entanglement-related physical quantities in the zero-temperature limit. Nevertheless, we find a peculiar property of the Gubser-Rocha model: in the low-temperature region, the HEE decreases with increasing temperature, which is contrary to the behavior in most holographic models. We argue that this novel phenomenon is brought about by the singular nature of the zero-temperature limit, and an analytical verification is presented. In addition, we also compare the features of the information quantities in the Gubser-Rocha model with those in the Reissner-Nordström Anti-de Sitter (RN-AdS) black hole model. It is shown that the HEE and MI of the Gubser-Rocha model are always larger than those of the RN-AdS model, while the EoP behaves in the opposite way. Our results indicate that MI and EoP could have different abilities in describing mixed state entanglement.
On Predicting Personal Values of Social Media Users using Community-Specific Language Features and Personal Value Correlation ; Personal values have a significant influence on individuals' behaviors, preferences, and decision making. It is therefore not a surprise that the personal values of a person could influence his or her social media content and activities. Instead of getting users to complete a personal value questionnaire, researchers have looked into a non-intrusive and highly scalable approach to predict personal values using user-generated social media data. Nevertheless, geographical differences in word usage and profile information are issues to be addressed when designing such prediction models. In this work, we focus on analyzing Singapore users' personal values and developing effective models to predict their personal values using their Facebook data. These models leverage word categories in Linguistic Inquiry and Word Count (LIWC) and correlations among personal values. The LIWC word categories are adapted to non-English word use in Singapore. We incorporate the correlations among personal values into our proposed Stack Model, consisting of a task-specific layer of base models and a cross-stitch layer model. Through experiments, we show that our proposed model predicts personal values with a considerable improvement in accuracy over previous works. Moreover, we use the Stack Model to predict the personal values of a large community of Twitter users using their public tweet content, and empirically derive several interesting findings about their online behavior that are consistent with earlier findings in the social science and social media literature.
Inference for partially observed epidemic dynamics guided by Kalman filtering techniques ; Despite the recent development of methods dealing with partially observed epidemic dynamics (unobserved model coordinates, discrete and noisy outbreak data), limitations remain in practice, mainly related to the quantity of augmented data and the calibration of numerous tuning parameters. In particular, as the coordinates of dynamic epidemic models are coupled, the presence of unobserved coordinates leads to a statistically difficult problem. Our aim is to propose an easy-to-use and general inference method that is able to tackle these issues. First, using the properties of epidemics in large populations, a two-layer model is constructed. Via a diffusion-based approach, a Gaussian approximation of the epidemic density-dependent Markovian jump process is obtained, representing the state model. The observational model, consisting of noisy observations of certain model coordinates, is approximated by Gaussian distributions. Then, an inference method based on an approximate likelihood using the Kalman filtering recursion is developed to estimate the parameters of both the state and observational models. The performance of the estimators of key model parameters is assessed on simulated data of SIR epidemic dynamics for different scenarios with respect to the population size and the number of observations. This performance is compared with that obtained using the well-known maximum iterated filtering method. Finally, the inference method is applied to a real data set on an influenza outbreak in a British boarding school in 1978.
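A minimal sketch of the Kalman filtering recursion that underlies the approximate likelihood: it filters a linear-Gaussian state model observed partially and noisily, and accumulates the Gaussian log-likelihood of the observations. How the transition matrix F and noise covariance Q are built from the epidemic rates (the diffusion approximation) is left abstract here.

```python
import numpy as np

def kalman_loglik(y, F, Q, H, R, x0, P0):
    """Kalman filter recursion; returns the Gaussian log-likelihood of the observations y."""
    x, P, ll = x0, P0, 0.0
    for yt in y:
        # prediction step
        x = F @ x
        P = F @ P @ F.T + Q
        # update step with the observed coordinate(s)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        innov = yt - H @ x
        ll += -0.5 * (np.log(np.linalg.det(2 * np.pi * S)) + innov @ np.linalg.solve(S, innov))
        x = x + K @ innov
        P = (np.eye(len(x)) - K @ H) @ P
    return ll

# The log-likelihood can then be maximized over the epidemic parameters that
# determine F and Q (e.g., transmission and recovery rates) with any optimizer.
```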
Optimal Bayesian estimation of Gaussian mixtures with growing number of components ; We study Bayesian estimation of finite mixture models in a general setup where the number of components is unknown and allowed to grow with the sample size. An assumption of a growing number of components is a natural one, as the degree of heterogeneity present in the sample can grow and new components can arise as the sample size increases, allowing full flexibility in modeling the complexity of the data. This, however, leads to a high-dimensional model which poses great challenges for estimation. We employ the novel idea of a sample-size-dependent prior in a Bayesian model and establish a number of important theoretical results. We first show that, under mild conditions on the prior, the posterior distribution concentrates around the true mixing distribution at a near optimal rate with respect to the Wasserstein distance. Under a separation condition on the true mixing distribution, we further show that a better and adaptive convergence rate can be achieved, and that the number of components can be consistently estimated. Furthermore, we derive optimal convergence rates for higher-order mixture models in which the number of components diverges arbitrarily fast. In addition, we suggest a simple recipe for using a Dirichlet process (DP) mixture prior for estimating finite mixture models and provide theoretical guarantees. In particular, we provide a novel solution for adopting the number of clusters in a DP mixture model as an estimate of the number of components in a finite mixture model. Simulation studies and real data applications are carried out demonstrating the utility of our method.
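A minimal illustration of the DP-mixture recipe, using scikit-learn's truncated Dirichlet-process variational Gaussian mixture and reading off the number of effectively occupied clusters as a rough estimate of the number of components; the weight threshold below is an illustrative choice, not the paper's calibrated rule.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Simulated data from a 3-component univariate Gaussian mixture.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-4, 1, 300), rng.normal(0, 1, 300), rng.normal(5, 1, 400)])
x = x.reshape(-1, 1)

dpgmm = BayesianGaussianMixture(
    n_components=20,                                     # truncation level
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500,
).fit(x)

# Count clusters that carry non-negligible posterior weight.
k_hat = int(np.sum(dpgmm.weights_ > 0.01))
print("estimated number of components:", k_hat)
```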
Testing goodness-of-fit and conditional independence with approximate co-sufficient sampling ; Goodness-of-fit (GoF) testing is ubiquitous in statistics, with direct ties to model selection, confidence interval construction, conditional independence testing, and multiple testing, just to name a few applications. While testing the GoF of a simple (point) null hypothesis provides an analyst great flexibility in the choice of test statistic while still ensuring validity, most GoF tests for composite null hypotheses are far more constrained, as the test statistic must have a tractable distribution over the entire null model space. A notable exception is co-sufficient sampling (CSS): resampling the data conditional on a sufficient statistic for the null model guarantees valid GoF testing using any test statistic the analyst chooses. But CSS testing requires the null model to have a compact (in an information-theoretic sense) sufficient statistic, which only holds for a very limited class of models; even for a null model as simple as logistic regression, CSS testing is powerless. In this paper, we leverage the concept of approximate sufficiency to generalize CSS testing to essentially any parametric model with an asymptotically-efficient estimator; we call our extension approximate CSS (aCSS) testing. We quantify the finite-sample Type I error inflation of aCSS testing and show that it is vanishing under standard maximum likelihood asymptotics, for any choice of test statistic. We apply our proposed procedure both theoretically and in simulation to a number of models of interest to demonstrate its finite-sample Type I error and power.
Inner Models from Extended Logics: Part 1 ; If we replace first order logic by second order logic in the original definition of Gödel's inner model L, we obtain HOD. In this paper we consider inner models that arise if we replace first order logic by a logic that has some, but not all, of the strength of second order logic. Typical examples are the extensions of first order logic by generalized quantifiers, such as the Magidor-Malitz quantifier, the cofinality quantifier, or stationary logic. Our first set of results shows that both L and HOD manifest some amount of formalism freeness in the sense that they are not very sensitive to the choice of the underlying logic. Our second set of results shows that the cofinality quantifier gives rise to a new robust inner model between L and HOD. We show, among other things, that, assuming a proper class of Woodin cardinals, the regular cardinals $\geq\aleph_1$ of $V$ are weakly compact in the inner model arising from the cofinality quantifier, and the theory of that model is set-forcing absolute and independent of the cofinality in question. We do not know whether this model satisfies the Continuum Hypothesis, assuming large cardinals, but we can show, assuming three Woodin cardinals and a measurable above them, that if the construction is relativized to a real, then on a cone of reals the Continuum Hypothesis is true in the relativized model.
Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction ; Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces. However, they can only produce static surfaces that are not controllable, which provides limited ability to modify the resulting model by editing its pose or shape parameters. Nevertheless, such features are essential in building flexible models for both computer graphics and computer vision. In this work, we present methodology that combines detail-rich implicit functions and parametric representations in order to reconstruct 3D models of people that remain controllable and accurate even in the presence of clothing. Given sparse 3D point clouds sampled on the surface of a dressed person, we use an Implicit Part Network (IP-Net) to jointly predict the outer 3D surface of the dressed person, the inner body surface, and the semantic correspondences to a parametric body model. We subsequently use the correspondences to fit the body model to our inner surface and then non-rigidly deform it, under a parametric body plus displacement model, to the outer surface in order to capture garment, face and hair detail. In quantitative and qualitative experiments with both full body data and hand scans, we show that the proposed methodology generalizes and is effective even given incomplete point clouds collected from single-view depth images. Our models and code can be downloaded from http://virtualhumans.mpi-inf.mpg.de/ipnet.