Video and action classification have evolved rapidly with deep neural networks, especially two-stream CNNs that take RGB frames and optical flow as inputs and deliver outstanding performance in video analysis. One shortcoming of these methods is that motion information is extracted outside the CNN, which is relatively time-consuming even on GPUs. End-to-end methods that learn a motion representation inside the network, such as 3D CNNs, can therefore be both faster and more accurate. We present novel deep CNNs with 3D architectures that model actions and motion representations efficiently, achieving high accuracy at real-time speed. Our new networks learn distinctive models that combine deep motion features with the appearance model by learning optical flow features inside the network.
Temporal differentiation is an extremely important cue for motion representation. First-order differentiation of positional information gives velocity, and we believe that second-order differentiation, i.e., acceleration, is also a significant feature for motion representation. However, an acceleration image computed from a typical optical flow contains motion noise, and it has therefore not been employed so far because the noise is too strong to capture an effective motion feature from an image sequence. On the other hand, recent convolutional neural networks (CNNs) are robust to input noise. In this paper, we employ an acceleration stream in addition to the spatial and temporal streams of the two-stream CNN. We clearly show the effectiveness of adding the acceleration stream to the two-stream CNN.
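As an illustration of the acceleration idea above, the following minimal sketch (an assumption about one possible pipeline, not the authors' code) builds an acceleration field as the temporal difference of two consecutive dense optical flow fields computed with OpenCV; the frame arguments and Farneback parameters are placeholders.

```python
import cv2

def acceleration_field(frame0, frame1, frame2):
    """frames: three consecutive grayscale images (uint8, same size)."""
    # first-order motion (velocity): dense flow between consecutive frames
    flow01 = cv2.calcOpticalFlowFarneback(frame0, frame1, None,
                                          0.5, 3, 15, 3, 5, 1.2, 0)  # (H, W, 2)
    flow12 = cv2.calcOpticalFlowFarneback(frame1, frame2, None,
                                          0.5, 3, 15, 3, 5, 1.2, 0)
    # second-order motion (acceleration): difference of consecutive flow fields
    return flow12 - flow01
```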
In this paper we present a simple yet effective approach to extend, without supervision, any object proposal from static images to videos. Unlike previous methods, these spatio-temporal proposals, to which we refer as tracks, are generated relying on little or no visual content, by exploiting only the spatial correlations of bounding boxes through time. The tracks that we obtain are likely to represent objects and are a general-purpose tool to represent meaningful video content for a wide variety of tasks. For unannotated videos, tracks can be used to discover content without any supervision. As a further contribution we also propose a novel and dataset-independent method to evaluate a generic object proposal based on the entropy of a classifier output response. We experiment on two competitive datasets, namely YouTube Objects and ILSVRC-2015 VID.
The objective of this paper is to evaluate “human action recognition without human”. Motion representation is frequently discussed in human action recognition. We have examined several sophisticated options, such as dense trajectories (DT) and the two-stream convolutional neural network (CNN). However, some features from the background could be too strong, as shown in some recent studies on human action recognition. Therefore, we considered whether a background sequence alone can classify human actions in current large-scale action datasets (e.g., UCF101). In this paper, we propose a novel concept for human action analysis that is named “human action recognition without human”. An experiment clearly shows the effect of a background sequence for understanding an action label.
This work advocates Eulerian motion representation learning over the current standard Lagrangian optical flow model. Eulerian motion is well captured by using phase, as obtained by decomposing the image through a complex-steerable pyramid. We discuss the gain of Eulerian motion in a set of practical use cases: (i) action recognition, (ii) motion prediction in static images, (iii) motion transfer in static images and, (iv) motion transfer in video. For each task we motivate the phase-based direction and provide a possible approach.
This paper explores the capabilities of convolutional neural networks to deal with a task that is easily manageable for humans: perceiving the 3D pose of a human body from varying angles. However, in our approach, we are restricted to using a monocular vision system. For this purpose, we apply the convolutional neural networks approach on RGB videos and extend it to three-dimensional convolutions. This is done by encoding the time dimension in videos as the third dimension in convolutional space and directly regressing to human body joint positions in 3D coordinate space. This research shows the ability of such a network to achieve state-of-the-art performance on the selected Human3.6M dataset, thus demonstrating the possibility of successfully representing temporal data with an additional dimension in the convolutional operation.
Recently, convolutional networks (convnets) have proven useful for predicting optical flow. Much of this success is predicated on the availability of large datasets that require expensive and involved data acquisition and laborious labeling. To bypass these challenges, we propose an unsupervised approach (i.e., without leveraging groundtruth flow) to train a convnet end-to-end for predicting optical flow between two images. We use a loss function that combines a data term that measures photometric constancy over time with a spatial term that models the expected variation of flow across the image. Together these losses form a proxy measure for losses based on the groundtruth flow. Empirically, we show that a strong convnet baseline trained with the proposed unsupervised approach outperforms the same network trained with supervision on the KITTI dataset.
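A minimal sketch of such a proxy loss (an illustrative PyTorch reconstruction under our own assumptions, not the paper's implementation): the data term penalizes the photometric difference between the first image and the second image warped by the predicted flow, and the spatial term penalizes flow gradients.

```python
import torch
import torch.nn.functional as F

def warp(img, flow):
    """img: (N, C, H, W); flow: (N, 2, H, W) in pixel units."""
    n, _, h, w = flow.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).float().to(img.device)   # (H, W, 2) pixel grid
    grid = base + flow.permute(0, 2, 3, 1)                        # displaced coordinates
    gx = 2.0 * grid[..., 0] / (w - 1) - 1.0                       # normalize x to [-1, 1]
    gy = 2.0 * grid[..., 1] / (h - 1) - 1.0                       # normalize y to [-1, 1]
    return F.grid_sample(img, torch.stack((gx, gy), dim=-1), align_corners=True)

def unsupervised_flow_loss(img1, img2, flow, smooth_weight=0.1):
    photometric = (img1 - warp(img2, flow)).abs().mean()          # brightness constancy
    smooth = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean() + \
             (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().mean()  # flow smoothness
    return photometric + smooth_weight * smooth
```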
In this paper, we propose an approach to classify action sequences. We observe that in action sequences the critical features for discriminating between actions occur only within sub-regions of the image. Hence deep network approaches that process the entire image are at a disadvantage. This motivates our strategy, which uses static and spatio-temporal visual cues to isolate static and spatio-temporal regions of interest (ROIs). We then use weakly supervised learning to train deep network classifiers using the ROIs as input. More specifically, we combine multiple instance learning (MIL) with convolutional neural networks (CNNs) to select discriminative action cues. This yields classifiers for static images, using the static ROIs, as well as classifiers for short image sequences (16 frames), using spatio-temporal ROIs. Extensive experiments performed on the UCF101 and HMDB51 benchmarks show that both these types of classifiers perform well individually and achieve state-of-the-art performance when combined together.
In autonomous driving applications a critical challenge is to identify the action to take to avoid an obstacle on a collision course. For example, when a heavy object is suddenly encountered it is critical to stop the vehicle or change the lane even if it causes other traffic disruptions. However, there are situations when it is preferable to collide with the object rather than take an action that would result in a much more serious accident than collision with the object. For example, a heavy object which falls from a truck should be avoided, whereas a bouncing ball or a soft target such as a foam box need not be. We present a novel method to discriminate between the motion characteristics of these types of objects based on their physical properties such as bounciness, elasticity, etc. In this preliminary work, we use a recurrent neural network with LSTM cells to train a classifier to classify objects based on their motion trajectories. We test the algorithm on synthetic data, and, as a proof of concept, demonstrate its effectiveness on a limited set of real-world data.
Precise localization is crucial to many computer vision tasks. Optical flow can help by providing motion boundaries which can serve as proxy for object boundaries. This paper investigates how useful these motion boundaries are in improving semantic segmentation. As there is no dataset readily available for this task, we compute the motion boundary maps with a pre-trained model from Weinzaepfel et al. (CVPR 2015) on the CamVid dataset. With these motion boundary maps and the corresponding RGB images, we train a convolutional neural network end-to-end, for the task of semantic segmentation. The experimental results show that the network has learned to incorporate the motion boundaries and that these improve the object localization.
The dominant paradigm for video-based action segmentation is composed of two steps: first, for each frame, compute low-level features using Dense Trajectories or a Convolutional Neural Network that encode spatiotemporal information locally, and second, input these features into a classifier that captures high-level temporal relationships, such as a Recurrent Neural Network (RNN). While often effective, this decoupling requires specifying two separate models, each with their own complexities, and prevents capturing more nuanced long-range spatiotemporal relationships. We propose a unified approach, as demonstrated by our Temporal Convolutional Network (TCN), that hierarchically captures relationships at low-, intermediate-, and high-level time-scales. Our model achieves superior or competitive performance using video or sensor data on three public action segmentation datasets and can be trained in a fraction of the time it takes to train an RNN.
The attention mechanism advanced state-of-the-art neural machine translation (NMT) by jointly learning to align and translate. However, attentional NMT ignores past alignment information, which leads to over-translation and under-translation problems. In response to this problem, we maintain a coverage vector to keep track of the attention history. The coverage vector is fed to the attention model to help adjust future attention, which guides NMT to pay more attention to the untranslated source words. Experiments show that coverage-based NMT significantly improves both translation and alignment quality over NMT without coverage.
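One possible shape of such a coverage-augmented attention step, written as a small NumPy sketch under our own assumptions about the scoring function (the paper's exact parametrization may differ):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def coverage_attention(enc_states, dec_state, coverage, W_h, W_s, w_c, v):
    """enc_states: (T, H); dec_state: (H,); coverage: (T,) accumulated attention."""
    # score_t = v^T tanh(W_h h_t + W_s s + w_c * coverage_t)
    scores = np.array([v @ np.tanh(W_h @ h + W_s @ dec_state + w_c * c)
                       for h, c in zip(enc_states, coverage)])
    alpha = softmax(scores)            # attention weights over source positions
    context = alpha @ enc_states       # weighted sum of encoder states
    coverage = coverage + alpha        # accumulate attention history for the next step
    return context, alpha, coverage

T, H, d = 6, 8, 5
enc, s, cov = np.random.randn(T, H), np.random.randn(H), np.zeros(T)
Wh, Ws, wc, v = (np.random.randn(d, H), np.random.randn(d, H),
                 np.random.randn(d), np.random.randn(d))
ctx, alpha, cov = coverage_attention(enc, s, cov, Wh, Ws, wc, v)
```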
Hyperparameters of deep neural networks are often optimized by grid search, random search or Bayesian optimization. As an alternative, we propose to use the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), which is known for its state-of-the-art performance in derivative-free optimization. CMA-ES has some useful invariance properties and is friendly to parallel evaluations of solutions. We provide a toy usage example using CMA-ES to tune hyperparameters of a convolutional neural network for the MNIST dataset on 30 GPUs in parallel.
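A toy sketch along these lines, assuming the open-source `cma` package (pycma) and a placeholder `validation_error` routine that would train and evaluate the network for a candidate hyperparameter vector; the two log-scaled hyperparameters are illustrative:

```python
import cma

def validation_error(params):
    lr, weight_decay = 10 ** params[0], 10 ** params[1]     # decode from log space
    # ... train the CNN with (lr, weight_decay) and return its validation error ...
    return (params[0] + 3) ** 2 + (params[1] + 4) ** 2       # dummy objective so the sketch runs

es = cma.CMAEvolutionStrategy([-2.0, -3.0], 1.0)             # initial point and step size
while not es.stop():
    candidates = es.ask()                                    # a population; could be evaluated in parallel
    es.tell(candidates, [validation_error(x) for x in candidates])
es.result_pretty()
```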
It is a fundamental problem to construct accurate dense correspondences between two images. Despite many efforts and promising methods that handle relatively small motion, one remaining challenge arises from large and complex non-rigid motion. Aiming at this challenge, the proposed method exploits the mutual boosting between the variational flow and the nearest-neighbor field (NNF). The proposed method, “IVANN”, gives a very effective solution under rather complex motion and currently achieves state-of-the-art performance on both the Middlebury [3] and MPI-Sintel [7] benchmarks.
This paper shows how one can directly apply natural language processing (NLP) methods to classification problems in cheminformatics. The connection between these seemingly separate fields is shown by considering the standard textual representation of compounds, SMILES. We consider the problem of activity prediction against a target protein, which is a crucial part of the computer-aided drug design process. The conducted experiments show that in this way one can not only outperform state-of-the-art results obtained with hand-crafted representations but also gain direct structural insights into the way decisions are made.
We designed a contextual filtering algorithm for improving the quality of image segmentation. The algorithm was applied to the task of building Membrane Detection Probability Maps (MDPM) for segmenting electron microscopy (EM) images of brain tissue. To achieve this, we performed supervised training of a convolutional neural network to recover the ground-truth label of the masked-out center pixel from patches sampled from an unrefined MDPM. Through this training process the model learns the distribution of the segmentation ground-truth map. By applying this trained network over MDPMs we are able to integrate contextual information and obtain maps with better spatial consistency in the high-level representation space. By iteratively applying this network over the MDPMs for multiple rounds, we were able to significantly improve the EM image segmentation results.
Convolutional Neural Networks (CNNs) have led to great advances in computer vision. Various customized CNN accelerators on embedded FPGA or ASIC platforms have been designed to accelerate CNNs and improve energy efficiency. However, the odd-number filter sizes in existing CNN models prevent hardware accelerators from achieving optimal efficiency. In this paper, we analyze the influence of filter size on CNN accelerator performance and show that even-number filter sizes are much more hardware-friendly, ensuring high bandwidth and resource utilization. Experimental results on MNIST and CIFAR-10 demonstrate that hardware-friendly even-kernel CNNs can reduce FLOPs by 1.4x to 2x with comparable accuracy; at the same FLOPs, even kernels can achieve even higher accuracy than odd-size kernels.
Recurrent neural networks (RNNs) have been shown to be very effective for many sequential prediction problems such as speech recognition, machine translation, part-of-speech tagging, and others. The best variant is typically a bidirectional RNN that learns representation for a sequence by performing a forward and a backward pass through the entire sequence. However, unlike unidirectional RNNs, bidirectional RNNs are challenging to deploy in an online and low-latency setting (e.g., in a speech recognition system), because they need to see an entire sequence before making a prediction. We introduce a lookahead convolution layer that incorporates information from future subsequences in a computationally efficient manner to improve unidirectional recurrent neural networks. We evaluate our method on speech recognition tasks for two languages---English and Chinese. Our experiments show that the proposed method outperforms vanilla unidirectional RNNs and is competitive with bidirectional RNNs in terms of character and word error rates.
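A minimal sketch of the general idea (our own assumption about the layer's form, not the exact implementation): a depthwise 1D convolution over the time axis that mixes each hidden feature with a small fixed number of future timesteps from a unidirectional RNN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LookaheadConv(nn.Module):
    def __init__(self, hidden_size, lookahead=2):
        super().__init__()
        self.lookahead = lookahead
        # depthwise conv over time: each feature channel mixes its own future values
        self.conv = nn.Conv1d(hidden_size, hidden_size, kernel_size=lookahead + 1,
                              groups=hidden_size, bias=False)

    def forward(self, h):                  # h: (batch, time, hidden)
        x = h.transpose(1, 2)              # (batch, hidden, time)
        x = F.pad(x, (0, self.lookahead))  # pad only the *future* side of the time axis
        return self.conv(x).transpose(1, 2)

rnn = nn.GRU(input_size=80, hidden_size=256, batch_first=True)
look = LookaheadConv(256, lookahead=2)
out, _ = rnn(torch.randn(4, 100, 80))
out = look(out)                            # same shape, now uses 2 future steps per frame
```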
In the Bayesian approach to probabilistic modeling we select a model for the probability of the data that depends on a continuous vector of parameters. For a given data set, Bayes' theorem gives a probability distribution over the model parameters. Inference of outcomes and probabilities for new data is then obtained by averaging over the parameter distribution of the model, which is an intractable problem. In this paper we propose to use Variational Bayes (VB) to estimate a Gaussian posterior of the model parameters for a given Gaussian prior, with Bayesian updates in a form that resembles SGD rules. We show that, with incremental updates of the posterior for a selected sequence of data points and a given number of iterations, the variational approximations are defined by a trajectory in the space of Gaussian parameters that depends on a starting point given by the priors of the parameter distribution, which are true hyper-parameters. The same priors provide a weight decay or L2 regularization for training. A choice of L2 regularization parameters and a number of iterations therefore completely defines a learning rule for VB SGD optimization, unlike other methods with momentum (Duchi et al., 2011; Kingma & Ba, 2014; Zeiler, 2012) that require selecting learning and regularization rates, etc., separately. We consider the application of VB SGD to the practically important case of fast training of neural networks on very large data. While the speedup is achieved by partitioning the data and training in parallel, the resulting set of solutions obtained with VB SGD forms a Gaussian mixture. By applying VB SGD optimization to the Gaussian mixture we can merge multiple neural networks of the same dimensions into a single new neural network that has almost the same performance as the original Gaussian mixture.
In this work we present two examples of how a manifold learning model can represent the complexity of shape variation in images. Manifold learning techniques for image manifolds can be used to model data in sparse manifold regions. Additionally, they can be used as generative models, as they can often better represent or learn structure in the data. We propose a method of estimating the underlying manifold using the ridges of a kernel density estimate, as well as tangent space operations that allow interpolation between images along the manifold, offering a novel approach to analyzing the image manifold.
Object detection with deep neural networks is often performed by passing a few thousand candidate bounding boxes through a deep neural network for each image. These bounding boxes are highly correlated since they originate from the same image. In this paper we investigate how to exploit feature occurrence at the image scale to prune the neural network which is subsequently applied to all bounding boxes. We show that removing units which have near-zero activation in the image allows us to significantly reduce the number of parameters in the network. Results on the PASCAL 2007 Object Detection Challenge demonstrate that up to 40% of units in some fully-connected layers can be entirely eliminated with little change in the detection result.
The soundness of training data is important to the performance of a learning model. However, in recommender systems the training data are usually noisy, because of the random nature of users' behaviors and the sparseness of users' feedback towards the recommendations. In this work, we propose a noise elimination model to preprocess the training data in recommender systems. We define noise as the abnormal patterns in the users' feedback. The proposed deep dictionary learning model tries to find the common patterns through dictionary learning. We define a dictionary through the output layer of a stacked autoencoder, so that the dictionary is represented by a deep structure and the noise in the dictionary is further filtered out.
Energy-based probabilistic models are confronted with intractable computations during learning, which requires appropriate samples drawn from the estimated probability distribution. This can be achieved approximately by a Markov chain Monte Carlo sampling process, but mixing problems remain, especially with deep models, and they slow down learning. We introduce an auxiliary deep model that deterministically generates samples based on the estimated distribution, which makes learning easier without any costly sampling process. As a result, we propose a new framework to train energy-based probabilistic models with two separate deep feed-forward models: one estimates the energy function, and the other deterministically generates samples based on it. Consequently, we can estimate the probability distribution and its corresponding deterministic generator with deep models.
In machine learning, there is a fundamental trade-off between ease of optimization and expressive power. Neural Networks, in particular, have enormous expressive power and yet are notoriously challenging to train. The nature of that optimization challenge changes over the course of learning. Traditionally in deep learning, one makes a static trade-off between the needs of early and late optimization. In this paper, we investigate a novel framework, GradNets, for dynamically adapting architectures during training to get the benefits of both. For example, we can gradually transition from linear to non-linear networks, deterministic to stochastic computation, shallow to deep architectures, or even simple downsampling to fully differentiable attention mechanisms. Benefits include increased accuracy, easier convergence with more complex architectures, solutions to test-time execution of batch normalization, and the ability to train networks of up to 200 layers.
This paper introduces a new class of neural networks that we refer to as input-convex neural networks, networks that are convex in their inputs (as opposed to their parameters). We discuss the nature and representational power of these networks, illustrate how the prediction (inference) problem can be solved via convex optimization, and discuss their application to structured prediction problems. We highlight a few simple examples of these networks applied to classification tasks, where we illustrate that the networks perform substantially better than any other approximator we are aware of that is convex in its inputs.
Yes, apparently they do. Previous research by Ba and Caruana (2014) demonstrated that shallow feed-forward nets sometimes can learn the complex functions previously learned by deep nets while using a similar number of parameters as the deep models they mimic. In this paper we investigate if shallow models can learn to mimic the functions learned by deep convolutional models. We experiment with shallow models and models with a varying number of convolutional layers, all trained to mimic a state-of-the-art ensemble of CIFAR-10 models. We demonstrate that we are unable to train shallow models to be of comparable accuracy to deep convolutional models. Although the student models do not have to be as deep as the teacher models they mimic, the student models apparently need multiple convolutional layers to learn functions of comparable accuracy.
In recent years increasingly complex architectures for deep convolution networks (DCNs) have been proposed to boost the performance on image recognition tasks. However, the gains in performance have come at a cost of substantial increase in computation and model storage resources. Fixed point implementation of DCNs has the potential to alleviate some of these complexities and facilitate potential deployment on embedded hardware. In this paper, we formulate and solve an optimization problem to identify the optimal fixed point bit-width allocation across layers to enable efficient fixed point implementation of DCNs. Our experiments show that in comparison to equal bit-width settings, optimized bit-width allocation offers >20% reduction in model size without any loss in accuracy on CIFAR-10 benchmark. We also demonstrate that fine-tuning can further enhance the accuracy of fixed point DCNs beyond that of the original floating point model. In doing so, we report a new state-of-the-art fixed point performance of 6.78% error-rate on CIFAR-10 benchmark.
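For illustration, a small sketch (not the paper's optimization procedure) of the fixed-point quantization that such a per-layer bit-width allocation would control; `total_bits` and `frac_bits` are the per-layer quantities being chosen:

```python
import numpy as np

def to_fixed_point(w, total_bits=8, frac_bits=6):
    """Quantize a weight tensor to signed fixed point with the given bit widths."""
    scale = 2.0 ** frac_bits
    qmin = -(2 ** (total_bits - 1))           # e.g. -128 for 8 bits
    qmax = 2 ** (total_bits - 1) - 1          # e.g.  127 for 8 bits
    q = np.clip(np.round(w * scale), qmin, qmax)
    return q / scale                          # dequantized values used at inference time

w = np.random.randn(64, 3, 3, 3) * 0.1
w_q = to_fixed_point(w, total_bits=8, frac_bits=6)
print("max abs quantization error:", np.abs(w - w_q).max())
```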
Educational Data Mining is an area of growing interest, given the increase of available data and generalization of online learning environments. In this paper we present a first approach to integrating Representation Learning techniques in Educational Data Mining by adding autoencoders as a preprocessing step in a standard performance prediction problem. Preliminary results do not show an improvement in performance by using autoencoders, but we expect that a fine tuning of parameters will provide an improvement. Also, we expect that autoencoders will be more useful combined with different kinds of classifiers, like multilayer perceptrons.
Bag-of-ngram based methods still achieve state-of-the-art results for tasks such as sentiment classification of long movie reviews, though semantic information is partially lost for these methods. Many document embeddings methods have been proposed to capture semantics, but they still can't outperform bag-of-ngram based methods on this task. In this paper, we modify the architecture of the recently proposed Paragraph Vector, allowing it to learn document vectors by predicting not only words, but n-gram features as well. Our model is able to capture both semantics and word order in documents while keeping the expressive power of learned vectors. Experimental results on IMDB movie review dataset show that our model outperforms previous deep learning models and bag-of-ngram based models due to the above advantages.
We propose a regularization framework where we feed an original clean data point and a nearby point through a mapping, which is then penalized by the Euclidean distance between the corresponding outputs. The nearby point may be chosen randomly or adversarially. We relate this framework to many existing regularization methods: it is a stochastic estimate of penalizing the Frobenius norm of the Jacobian of the mapping as in Poggio & Girosi (1990), it generalizes noise regularization (Sietsma & Dow, 1991), and it is a simplification of the canonical regularization term of the ladder networks in Rasmus et al. (2015). We also study the connection to virtual adversarial training (VAT) (Miyato et al., 2016) and show how VAT can be interpreted as penalizing the largest eigenvalue of a Fisher information matrix. Our main contribution is discovering connections between the proposed and existing regularization methods.
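A minimal PyTorch sketch of the penalty described above, under our own assumptions about the setup (the perturbation is drawn randomly here; an adversarial choice would replace the random draw):

```python
import torch
import torch.nn as nn

def proximity_penalty(model, x, eps=0.1):
    d = eps * torch.randn_like(x)                    # random nearby point
    y_clean = model(x)
    y_perturbed = model(x + d)
    return ((y_clean - y_perturbed) ** 2).sum(dim=1).mean()  # squared Euclidean distance

model = nn.Sequential(nn.Linear(20, 50), nn.ReLU(), nn.Linear(50, 10))
x = torch.randn(32, 20)
task_loss = model(x).logsumexp(dim=1).mean()         # placeholder for the actual task loss
loss = task_loss + 1.0 * proximity_penalty(model, x)
loss.backward()
```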
Despite progress, model learning and posterior inference remain a common challenge for using deep generative models, especially for handling discrete hidden variables. This paper is mainly concerned with algorithms for learning Helmholtz machines, which are characterized by pairing a generative model with an auxiliary inference model. A common drawback of previous learning algorithms is that they indirectly optimize some bound of the targeted marginal log-likelihood. In contrast, we develop a new class of algorithms, based on stochastic approximation (SA) theory of the Robbins-Monro type, to directly optimize the marginal log-likelihood and simultaneously minimize the inclusive KL-divergence. The resulting learning algorithm is thus called joint SA (JSA). Moreover, we construct an effective MCMC operator for JSA. Our results on the MNIST dataset demonstrate that JSA's performance is consistently superior to that of competing algorithms like RWS, for learning a range of difficult models.
We propose variational bounds on the log-likelihood of an undirected probabilistic graphical model p that are parametrized by flexible approximating distributions q. These bounds are tight when q = p, are convex in the parameters of q for interesting classes of q, and may be further parametrized by an arbitrarily complex neural network. When optimized jointly over q and p, our bounds enable us to accurately track the partition function during learning.
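As one concrete illustration (a standard reconstruction rather than the paper's full family of bounds), writing the unnormalized model as $p(x) = \tilde p(x)/Z$, Jensen's inequality gives a lower bound on the log partition function that is tight when $q = p$:

```latex
\log Z \;=\; \log \mathbb{E}_{x \sim q}\!\left[\frac{\tilde p(x)}{q(x)}\right]
\;\ge\; \mathbb{E}_{x \sim q}\!\left[\log \tilde p(x) - \log q(x)\right]
```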
In this work we study variance in the results of neural network training on a wide variety of configurations in automatic speech recognition. Although this variance itself is well known, this is, to the best of our knowledge, the first paper that performs an extensive empirical study of its effects in speech recognition. We view training as sampling from a distribution and show that these distributions can have substantial variance. These observations have important implications for the way results in the literature are reported and interpreted.
Approximate variational inference has been shown to be a powerful tool for modeling unknown, complex probability distributions. Recent advances in the field allow us to learn probabilistic sequence models. We apply a Stochastic Recurrent Network (STORN) to learn robot time series data. Our evaluation demonstrates that we can robustly detect anomalies both off- and on-line.
We propose a simple and practical method for improving the flexibility of the approximate posterior in variational auto-encoders (VAEs) through a transformation with autoregressive networks. Autoregressive networks, such as RNNs and RNADE networks, are very powerful models. However, their sequential nature makes them impractical for direct use with VAEs, as sequentially sampling the latent variables is slow when implemented on a GPU. Fortunately, we find that by inverting autoregressive networks we can obtain equally powerful data transformations that can be computed in parallel. We call these data transformations inverse autoregressive flows (IAF), and we show that they can be used to transform a simple distribution over the latent variables into a much more flexible distribution, while still allowing us to compute the resulting variables' probability density function. The method is computationally cheap, can be made arbitrarily flexible, and (in contrast with previous work) is naturally applicable to latent variables that are organized in multidimensional tensors, such as 2D grids or time series. The method is applied to a novel deep architecture of variational auto-encoders. In experiments we demonstrate that autoregressive flow leads to significant performance gains when applied to variational autoencoders for natural images.
Unsupervised learning on imbalanced data is challenging because, when given imbalanced data, current models are often dominated by the major category and ignore the categories with a small amount of data. We develop a latent variable model that can cope with imbalanced data by dividing the latent space into a shared space and a private space. Based on Gaussian Process Latent Variable Models, we propose a new kernel formulation that enables the separation of latent space and derive an efficient variational inference method. The performance of our model is demonstrated with an imbalanced medical image dataset.
Two large-scale cloze-style context-question-answer datasets have been introduced recently: i) the CNN and Daily Mail news data and ii) the Children's Book Test. Thanks to the size of these datasets, the associated task is well suited for deep-learning techniques that seem to outperform all alternative approaches. We present a new, simple model that is tailor made for such question-answering problems. Our model directly sums attention over candidate answer words in the document instead of using it to compute weighted sum of word embeddings. Our model outperforms models previously proposed for these tasks by a large margin.
The recent success of deep learning approaches for domains like speech recognition (Hinton et al., 2012) and computer vision (Ioffe & Szegedy, 2015) stems from many algorithmic improvements but also from the fact that the size of available training data has grown significantly over the years, together with the computing power, in terms of both CPUs and GPUs. While a single GPU often provides algorithmic simplicity and speed up to a given scale of data and model, there exists an operating point where a distributed implementation of training algorithms for deep architectures becomes necessary. Previous work has focused on asynchronous SGD training, which works well up to a few dozen workers for some models. In this work, we show that synchronous SGD training, with the help of backup workers, can not only achieve better accuracy, but also reach convergence faster with respect to wall time, i.e. use more workers more efficiently.
ResNets have recently achieved state-of-the-art results on challenging computer vision tasks. In this paper, we create a novel architecture that improves ResNets by adding the ability to forget and by making the residuals more expressive, yielding excellent results. ResNet in ResNet outperforms architectures with similar amounts of augmentation on CIFAR-10 and establishes a new state-of-the-art on CIFAR-100.
In this work, we cast text summarization as a sequence-to-sequence problem and apply the attentional encoder-decoder RNN that has been shown to be successful for Machine Translation. Our experiments show that the proposed architecture significantly outperforms the state-of-the-art model of Rush et al. (2015) on the Gigaword dataset without any additional tuning. We also propose additional extensions to the standard architecture, which we show contribute to further improvement in performance.
We introduce a neural network architecture and a learning algorithm to produce factorized symbolic representations. We propose to learn these concepts by observing consecutive frames, letting all the components of the hidden representation except a small discrete set (gating units) be predicted from the previous frame, and letting the factors of variation in the next frame be represented entirely by these discrete gated units (corresponding to symbolic representations). We demonstrate the efficacy of our approach on datasets of faces undergoing 3D transformations and Atari 2600 games.
Video object detection is challenging because objects that are easily detected in one frame may be difficult to detect in another frame within the same clip. Recently, there have been major advances for doing object detection in a single image. These methods typically contain three phases: (i) object proposal generation (ii) object classification and (iii) post-processing. We propose a modification of the post-processing phase that uses high-scoring object detections from nearby frames to boost scores of weaker detections within the same clip. We show that our method obtains superior results to state-of-the-art single image object detection techniques. Our method placed $3^{rd}$ in the video object detection (VID) task of the ImageNet Large Scale Visual Recognition Challenge 2015 (ILSVRC2015).
We introduced a new metric for comparing adversarial networks quantitatively.
One of the difficulties of training deep neural networks is caused by improper scaling between layers. These scaling issues introduce exploding / vanishing gradient problems and have typically been addressed by careful variance-preserving initialization. We consider this problem as one of preserving scale, rather than preserving variance. This leads to a simple method of scale-normalizing weight layers, which ensures that scale is approximately maintained between layers. Our method of scale preservation ensures that forward propagation is impacted minimally, while backward passes maintain gradient scales. Preliminary experiments show that scale normalization effectively speeds up learning, without introducing additional hyperparameters or parameters.
Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture, which has been shown to achieve good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there is any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly; however, when fully trained, the final quality of the non-residual Inception variants seems to be close to that of the residual versions. We present several new streamlined architectures for both residual and non-residual Inception networks. With an ensemble of three residual and one pure Inception-v4, we achieve 3.08\% top-5 error on the test set of the ImageNet classification (CLS) challenge.
Many real-world time series involve repeated patterns that evolve gradually by following slow underlying trends. The evolution of relevant features prevents conventional learning methods from extracting representations that separate differing patterns while being consistent over the whole time series. Here, we present an unsupervised learning method for finding representations that are consistent over time and that separate patterns in non-stationary time series. We developed an online version of t-Distributed Stochastic Neighbor Embedding (t-SNE). We apply t-SNE to the time series iteratively on a running window, and for each displacement of the window, we choose as the seed of the next embedding the final positions of the points obtained in the previous embedding. This process ensures consistency of the representation of slowly evolving patterns, while ensuring that the embedding at each step is optimally adapted to the current window. We apply this method to the song of the developing zebra finch, and we show that we are able to track multiple distinct syllables that are slowly emerging over multiple days, from babbling to the adult song stage.
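A minimal sketch of the seeding procedure (our own reconstruction using scikit-learn's TSNE, with placeholder window sizes; how points entering the window are initialized is an assumption):

```python
import numpy as np
from sklearn.manifold import TSNE

def sliding_tsne(frames, window=500, step=100):
    """frames: (n_frames, n_features) array of per-frame features."""
    prev_embedding = None
    embeddings = []
    for start in range(0, len(frames) - window + 1, step):
        X = frames[start:start + window]
        if prev_embedding is None:
            init = "pca"
        else:
            # keep the final positions of overlapping points, place new points near the origin
            init = np.vstack([prev_embedding[step:],
                              1e-4 * np.random.randn(step, 2)])
        emb = TSNE(n_components=2, init=init).fit_transform(X)
        embeddings.append(emb)
        prev_embedding = emb
    return embeddings
```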
This paper introduces a framework for mapping Recurrent Neural Network (RNN) architectures efficiently onto parallel processors such as GPUs. Key to our approach is the use of persistent computational kernels that exploit the processor’s memory hierarchy to reuse network weights over multiple timesteps. Using our framework, we show how it is possible to achieve substantially higher computational throughput at lower mini-batch sizes than direct implementations of RNNs based on matrix multiplications. Our initial implementation achieves 2.8 TFLOP/s at a mini-batch size of 4 on an NVIDIA TitanX GPU, which is about 45% of theoretical peak throughput, and is 30X faster than a standard RNN implementation based on optimized GEMM kernels at this batch size. Reducing the batch size from 64 to 4 per processor provides a 16x reduction in activation memory footprint, enables strong scaling to 16x more GPUs using data-parallelism, and allows us to efficiently explore end-to-end speech recognition models with up to 108 residual RNN layers.
Materials that exhibit high strength-to-weight ratio, a desirable property for aerospace applications, often present unique inspection challenges. Nondestructive evaluation (NDE) addresses these challenges by utilizing methods, such as x-ray computed tomography (CT), that can capture the internal structure of a material without causing changes to the material. Analyzing the data captured by these methods requires a significant amount of expertise and is costly. Since the data captured by NDE techniques often is structured as images, deep learning can be used to automate initial analysis. This work looks to automate part of this initial analysis by applying the efficient encoder-decoder convolutional network at multiple scales to perform identification and segmentation of defects for NDE.
Convolutional neural networks are sensitive to the random initialization of filters. We call this The Filter Lottery (TFL) because the random numbers used to initialize the network determine if you will ``win'' and converge to a satisfactory local minimum. This issue forces networks to contain more filters (be wider) to achieve higher accuracy because they have better odds of being transformed into highly discriminative features at the risk of introducing redundant features. To deal with this, we propose to evaluate and replace specific convolutional filters that have little impact on the prediction. We use the gradient norm to evaluate the impact of a filter on error, and re-initialize filters when the gradient norm of its weights falls below a specific threshold. This consistently improves accuracy across two datasets by up to 1.8%. Our scheme RandomOut allows us to increase the number of filters explored without increasing the size of the network. This yields more compact networks which can train and predict with less computation, thus allowing more powerful CNNs to run on mobile devices.
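A minimal sketch of a RandomOut-style step as described above (the threshold, initializer, and schedule are our own assumptions): after a backward pass, filters whose weight-gradient norm falls below the threshold are re-initialized.

```python
import torch
import torch.nn as nn

def randomout_step(conv: nn.Conv2d, threshold=1e-3):
    # conv.weight: (out_channels, in_channels, kH, kW); one gradient norm per filter
    grad_norms = conv.weight.grad.flatten(1).norm(dim=1)
    with torch.no_grad():
        for i, g in enumerate(grad_norms):
            if g < threshold:
                nn.init.kaiming_normal_(conv.weight[i:i + 1])  # replace the "losing" filter
                if conv.bias is not None:
                    conv.bias[i].zero_()

# usage: after loss.backward() and before optimizer.step()
conv = nn.Conv2d(3, 16, 3)
loss = conv(torch.randn(8, 3, 32, 32)).pow(2).mean()
loss.backward()
randomout_step(conv, threshold=1e-3)
```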
Distance weighted discrimination (DWD) was originally proposed to handle the data piling issue in the support vector machine. In this paper, we consider the sparse penalized DWD for high-dimensional classification. The state-of-the-art algorithm for solving the standard DWD is based on second-order cone programming, however such an algorithm does not work well for the sparse penalized DWD with high-dimensional data. In order to overcome the challenging computation difficulty, we develop a very efficient algorithm to compute the solution path of the sparse DWD at a given fine grid of regularization parameters. We implement the algorithm in a publicly available R package sdwd. We conduct extensive numerical experiments to demonstrate the computational efficiency and classification performance of our method.
We present a method for unsupervised open-domain relation discovery. In contrast to previous (mostly generative and agglomerative clustering) approaches, our model relies on rich contextual features and makes minimal independence assumptions. The model is composed of two parts: a feature-rich relation extractor, which predicts a semantic relation between two entities, and a factorization model, which reconstructs arguments (i.e., the entities) relying on the predicted relation. We use a variational autoencoding objective and estimate the two components jointly so as to minimize errors in recovering arguments. We study factorization models inspired by previous work in relation factorization. Our models substantially outperform the generative and agglomerative-clustering counterparts and achieve state-of-the-art performance.
Learning efficient representations for concepts has been proven to be an important basis for many applications such as machine translation or document classification. Proper representations of medical concepts such as diagnosis, medication, procedure codes and visits will have broad applications in healthcare analytics. However, in Electronic Health Records (EHR) the visit sequences of patients include multiple concepts (diagnosis, procedure, and medication codes) per visit. This structure provides two types of relational information, namely sequential order of visits and co-occurrence of the codes within each visit. In this work, we propose Med2Vec, which not only learns distributed representations for both medical codes and visits from a large EHR dataset with over 3 million visits, but also allows us to interpret the learned representations confirmed positively by clinical experts. In the experiments, Med2Vec displays significant improvement in key medical applications compared to popular baselines such as Skip-gram, GloVe and stacked autoencoder, while providing clinically meaningful interpretation.
Deep neural networks (DNNs) have achieved remarkable success on complex data processing tasks. In contrast to biological neural systems, capable of learning continuously, DNNs have a limited ability to incorporate new information in a trained network. Therefore, methods for continuous learning are potentially highly impactful in enabling the application of DNNs to dynamic data sets. Inspired by adult neurogenesis in the hippocampus, we explore the potential for adding new nodes to layers of artificial neural networks to facilitate their acquisition of novel information while preserving previously trained data representations. Our results demonstrate that neurogenesis is well suited for addressing the stability-plasticity dilemma that has long challenged adaptive machine learning algorithms.
In this paper, we revise two commonly used saturated functions, the logistic sigmoid and the hyperbolic tangent (tanh). We point out that, besides the well-known non-zero-centered property, the slope of the activation function near the origin is another possible reason that makes training deep networks with the logistic function difficult. We demonstrate that, with proper rescaling, the logistic sigmoid achieves results comparable with tanh. Then, following the same argument, we improve tanh by penalizing its negative part. We show that the ``penalized tanh'' is comparable with and even outperforms state-of-the-art non-saturated functions, including ReLU and leaky ReLU, on deep convolutional neural networks. Our results contradict the conclusion of previous works that the saturation property causes slow convergence, and suggest that further investigation is necessary to better understand activation functions in deep architectures.
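A minimal sketch of a penalized tanh of the kind described (the slope value is an illustrative assumption): the negative part of tanh is scaled down by a factor a < 1, analogous to how leaky ReLU scales the negative part of ReLU.

```python
import numpy as np

def penalized_tanh(x, a=0.25):
    """tanh(x) for x > 0, a * tanh(x) otherwise (a < 1 penalizes the negative part)."""
    t = np.tanh(x)
    return np.where(t > 0, t, a * t)

x = np.linspace(-3, 3, 7)
print(penalized_tanh(x))
```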
We describe a neural network model in which the tiling of the input array is learned by performing a joint localization and classification task. After training, the optimal tiling that emerges resembles the eccentricity dependent tiling of the human retina.
We show that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech–two vastly different languages. Because it replaces entire pipelines of hand-engineered components with neural networks, end-to-end learning allows us to handle a diverse variety of speech including noisy environments, accents and different languages. Key to our approach is our application of HPC techniques, enabling experiments that previously took weeks to now run in days. This allows us to iterate more quickly to identify superior architectures and algorithms. As a result, in several cases, our system is competitive with the transcription of human workers when benchmarked on standard datasets. Finally, using a technique called Batch Dispatch with GPUs in the data center, we show that our system can be inexpensively deployed in an online setting, delivering low latency when serving users at scale.
In this work we present a novel approach for the utilization of observed relations between entity pairs in the task of triple argument prediction. The approach is based on representing observations in a shared, continuous vector space of structured relations and text. Results on a recent benchmark dataset demonstrate that the new model is superior to existing sparse feature models. In combination with state-of-the-art models, we achieve substantial improvements when observed relations are available.
Genomics is rapidly transforming medical practice and basic biomedical research, providing insights into disease mechanisms and improving therapeutic strategies, particularly in cancer. The ability to predict the future course of a patient's disease from high-dimensional genomic profiling will be essential in realizing the promise of genomic medicine, but presents significant challenges for state-of-the-art survival analysis methods. In this abstract we present an investigation into learning genomic representations with neural networks to predict patient survival in cancer. We demonstrate the advantages of this approach over existing survival analysis methods using brain tumor data.
Deep learning researchers commonly suggest that converged models are stuck in local minima. More recently, some researchers observed that under reasonable assumptions, the vast majority of critical points are saddle points, not true minima. Both descriptions suggest that weights converge around a point in weight space, be it a local optimum or merely a critical point. However, it's possible that neither interpretation is accurate. As neural networks are typically over-complete, it's easy to show the existence of vast continuous regions through weight space with equal loss. In this paper, we build on recent work empirically characterizing the error surfaces of neural networks. We analyze training paths through weight space, presenting evidence that apparent convergence of loss does not correspond to weights arriving at critical points, but instead to large movements through flat regions of weight space. While it's trivial to show that neural network error surfaces are globally non-convex, we show that error surfaces are also locally non-convex, even after breaking symmetry with a random initialization and also after partial training.
Biclustering is evolving into one of the major tools for analyzing large datasets given as matrix of samples times features. Biclustering has been successfully applied in life sciences, e.g. for drug design, in e-commerce, e.g. for internet retailing or recommender systems. FABIA is one of the most successful biclustering methods which excelled in different projects and is used by companies like Janssen, Bayer, or Zalando. FABIA is a generative model that represents each bicluster by two sparse membership vectors: one for the samples and one for the features. However, FABIA is restricted to about 20 code units because of the high computational complexity of computing the posterior. Furthermore, code units are sometimes insufficiently decorrelated. Sample membership is difficult to determine because vectors do not have exact zero entries and can have both large positive and large negative values. We propose to use the recently introduced unsupervised Deep Learning approach Rectified Factor Networks (RFNs) to overcome the drawbacks of FABIA. RFNs efficiently construct very sparse, non-linear, high-dimensional representations of the input via their posterior means. RFN learning is a generalized alternating minimization algorithm based on the posterior regularization method which enforces non-negative and normalized posterior means. Each code unit represents a bicluster, where samples for which the code unit is active belong to the bicluster and features that have activating weights to the code unit belong to the bicluster. On 400 benchmark datasets with artificially implanted biclusters, RFN significantly outperformed 13 other biclustering competitors including FABIA. In biclustering experiments on three gene expression datasets with known clusters that were determined by separate measurements, RFN biclustering was two times significantly better than the other 13 methods and once on second place.
We explore a new architecture for representing spatial information in neural networks. The method binds object information to position via element-wise multiplication of complex-valued vectors. This approach extends Holographic Reduced Representations by providing additional tools for processing and manipulating spatial information. In many cases these computations can be performed very efficiently through application of the convolution theorem. Experiments demonstrate excellent performance on a visuo-spatial reasoning task as well as on a 2D maze navigation task.
Recent work in learning vector-space embeddings for multi-relational data has focused on combining relational information derived from knowledge bases with distributional information derived from large text corpora. We propose a simple trick that leverages the descriptions of entities or phrases available in lexical resources, in conjunction with distributional semantics, in order to derive a better initialization for training relational models. Applying this trick to the TransE model results in faster convergence of the entity representations, and achieves small improvements on Freebase for raw mean rank. More surprisingly, it results in significant new state-of-the-art performances on the WordNet dataset, decreasing the mean rank from the previous best 212 to 51. We find that there is a trade-off between improving the mean rank and the hits@10 with this approach. This illustrates that much remains to be understood regarding performance improvements in relational models.
Recurrent neural networks are convenient and efficient models for learning patterns in sequential data. However, when applied to signals with very low cardinality, such as character-level language modeling, they suffer from several problems. In order to successfully model longer-term dependencies, the hidden layer needs to be large, which in turn implies high computational cost. Moreover, the accuracy of these models is significantly lower than that of baseline word-level models. We propose two structural modifications of the classic RNN LM architecture. The first one consists of conditioning the RNN on both character-level and word-level information. The other one uses the recent history to condition the computation of the output probability. We evaluate the performance of the two proposed modifications on multi-lingual data. The experiments show that both modifications can improve upon the basic RNN architecture, and the gains are even more visible in cases when the input and output signals are represented by single bits. These findings suggest that more research needs to be done to develop a general RNN architecture that would perform optimally across a wide range of tasks.
Sum-Product Networks (SPNs) are a class of expressive yet tractable hierarchical graphical models. LearnSPN is a structure learning algorithm for SPNs that uses hierarchical co-clustering to simultaneously identify similar entities and similar features. The original LearnSPN algorithm assumes that all the variables are discrete and there is no missing data. We introduce a practical, simplified version of LearnSPN, MiniSPN, that runs faster and can handle missing data and heterogeneous features common in real applications. We demonstrate the performance of MiniSPN on standard benchmark datasets and on two datasets from Google's Knowledge Graph exhibiting high missingness rates and a mix of discrete and continuous features.
Despite the success of very deep convolutional neural networks, they currently operate at very low resolutions relative to modern cameras. Visual attention mechanisms address this by allowing models to access higher resolutions only when necessary. However, in certain cases, this higher resolution isn’t available. We show that autoresolution networks, which learn correspondences between low-resolution and high-resolution images, learn representations that improve low-resolution classification - without needing labeled high-resolution images.
This work aims to improve upon the recently proposed and rapidly popularized optimization algorithm Adam (Kingma & Ba, 2014). Adam has two main components—a momentum component and an adaptive learning rate component. However, regular momentum can be shown conceptually and empirically to be inferior to a similar algorithm known as Nesterov’s accelerated gradient (NAG). We show how to modify Adam’s momentum component to take advantage of insights from NAG, and then we present preliminary evidence suggesting that making this substitution improves the speed of convergence and the quality of the learned models.
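For reference, one common formulation of Adam with a Nesterov-style momentum correction (a generic sketch, not necessarily the exact update proposed in this work):

```python
import numpy as np

def nadam_step(theta, grad, m, v, t, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad             # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2        # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                   # bias corrections
    v_hat = v / (1 - beta2 ** t)
    # Nesterov-style lookahead: blend the corrected momentum with the current gradient
    m_bar = beta1 * m_hat + (1 - beta1) * grad / (1 - beta1 ** t)
    theta = theta - lr * m_bar / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
for t in range(1, 501):
    grad = 2 * (theta - np.array([1.0, -2.0, 3.0]))    # toy quadratic objective
    theta, m, v = nadam_step(theta, grad, m, v, t)
print(theta)                                           # moves toward the minimizer [1, -2, 3]
```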
We show that by employing a distribution over random matrices, the matrix variate Gaussian~\cite{gupta1999matrix}, for the neural network parameters we can obtain a non-parametric interpretation for the hidden units after the application of the ``local reparametrization trick''~\citep{kingma2015variational}. This provides a nice duality between Bayesian neural networks and deep Gaussian Processes~\cite{damianou2012deep}, a property that was also shown by~\cite{gal2015dropout}. We show that we can borrow ideas from the Gaussian Process literature so as to exploit the non-parametric properties of such a model. We empirically verified this model on a regression task.
The natural gradient is a powerful method to improve the transient dynamics of learning by considering the geometric structure of the parameter space. Many natural gradient methods have been developed with regards to Kullback-Leibler (KL) divergence and its Fisher metric, but the framework of natural gradient can be essentially extended to other divergences. In this study, we focus on score matching, which is an alternative to maximum likelihood learning for unnormalized statistical models, and introduce its Riemannian metric. By using the score matching metric, we derive an adaptive natural gradient algorithm that does not require computationally demanding inversion of the metric. Experimental results in a multi-layer neural network model demonstrate that the proposed method avoids the plateau phenomenon and accelerates the convergence of learning compared to the conventional stochastic gradient descent method.
We propose Neural Enquirer — a neural network architecture for answering natural language (NL) questions given a knowledge base (KB) table. Unlike previous work on end-to-end training of semantic parsers, Neural Enquirer is fully “neuralized”: it gives distributed representations of queries and KB tables, and executes queries through a series of differentiable operations. The model can be trained with gradient descent using both end-to-end and step-by-step supervision. During training the representations of queries and the KB table are jointly optimized with the query execution logic. Our experiments show that the model can learn to execute complex NL queries on KB tables with rich structures.
In this work, we present a Monte Carlo tree search-based program for playing Go which uses convolutional rollouts. Our method performs MCTS in batches, explores the Monte Carlo tree using Thompson sampling and a convolutional policy network, and evaluates convnet-based rollouts on the GPU. We achieve strong win rates against an open source Go program and attain competitive results against state of the art convolutional net-based Go-playing programs.
This paper presents an end-to-end neural network model, named Neural Generative Question Answering (genQA), that can generate answers to simple factoid questions, with both the questions and the answers expressed in natural language. More specifically, the model is built on the encoder-decoder framework for sequence-to-sequence learning, while equipped with the ability to access an embedded knowledge-base through an attention-like mechanism. The model is trained on a corpus of question-answer pairs, with their associated triples in the given knowledge-base. An empirical study shows that the proposed model can effectively deal with linguistic variation in the questions and generate the right answer by referring to the facts in the knowledge-base. The experiment on question answering demonstrates that the proposed model can outperform the embedding-based QA model as well as the neural dialogue models trained on the same data.
This paper applies a deep convolutional/highway MLP framework to classify genomic sequences on the transcription factor binding site task. To make the model understandable, we propose an optimization-driven strategy to extract “motifs”, or symbolic patterns which visualize the positive class learned by the network. We show that our system, Deep Motif (DeMo), extracts motifs that are similar to, and in some cases outperform, the current well-known motifs. In addition, we find that a deeper model consisting of multiple convolutional and highway layers can outperform the single convolutional and fully connected layer used in the previous state of the art.
Understanding the generalization properties of deep learning models is critical for successful applications, especially in regimes where the number of training samples is limited. We study the generalization properties of deep neural networks via the empirical Rademacher complexity and show that it is easier to control the complexity of convolutional networks compared to general fully connected networks. In particular, we justify the usage of small convolutional kernels in deep networks, as they lead to a better generalization error. Moreover, we propose a representation-based regularization method that allows us to decrease the generalization error by controlling the coherence of the representation. Experiments on the MNIST dataset support these findings.
We propose a simplified model of attention which is applicable to feed-forward neural networks and demonstrate that the resulting model can solve the synthetic "addition" and "multiplication" long-term memory problems for sequence lengths which are both longer and more widely varying than the best published results for these tasks.
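A minimal sketch of the kind of simplified attention meant above (the scoring function and its parameterization are illustrative assumptions, not necessarily the paper's exact choice): the network scores every timestep, normalizes the scores with a softmax, and feeds the resulting fixed-size weighted average to an ordinary feed-forward model.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def feedforward_attention(H, w, b=0.0):
        # H: (T, d) per-timestep hidden states; w: (d,) learnable scoring vector.
        scores = H @ w + b        # e_t = a(h_t), one scalar per timestep
        alphas = softmax(scores)  # attention weights over the T timesteps
        return alphas @ H         # c = sum_t alpha_t h_t, a fixed-size summary

    # A length-50 sequence of 16-dim states collapses to one 16-dim vector
    # regardless of T, which is what lets a feed-forward model handle long and
    # widely varying sequence lengths in the "addition"/"multiplication" tasks.
    H = np.random.randn(50, 16)
    c = feedforward_attention(H, np.random.randn(16))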
Existing approaches to combine both additive and multiplicative neural units either use a fixed assignment of operations or require discrete optimization to determine what function a neuron should perform. The latter, however, leads to a substantial increase in the computational complexity of the training procedure. We present a novel, parameterizable transfer function based on the mathematical concept of non-integer functional iteration that allows the operation each neuron performs to be smoothly and, most importantly, differentiably adjusted between addition and multiplication. This allows the decision between addition and multiplication to be integrated into the standard backpropagation training procedure.
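One way to see the underlying connection (an illustrative reading; the paper's actual transfer function may be parameterized differently): addition and multiplication are conjugate through the exponential, $x \cdot y = \exp(\log x + \log y)$, so writing $\exp^{(n)}$ for the $n$-fold functional iterate of $\exp$,
\[
f_n(x, y) = \exp^{(n)}\!\big(\exp^{(-n)}(x) + \exp^{(-n)}(y)\big)
\]
recovers addition at $n = 0$ and multiplication at $n = 1$; extending $n$ to non-integer values via fractional functional iteration turns the add-versus-multiply decision into a continuous, differentiable parameter that backpropagation can adjust.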
High computational complexity hinders the widespread usage of Convolutional Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are arguably the most promising approach for reducing both execution time and power consumption. One of the most important steps in accelerator development is hardware-oriented model approximation. In this paper we present Ristretto, a model approximation framework that analyzes a given CNN with respect to numerical resolution used in representing weights and outputs of convolutional and fully connected layers. Ristretto can condense models by using fixed point arithmetic and representation instead of floating point. Moreover, Ristretto fine-tunes the resulting fixed point network. Given a maximum error tolerance of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit. The code for Ristretto is available.
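An illustrative sketch of the kind of fixed-point condensation such a framework analyzes (the rounding policy and per-layer bit allocation here are assumptions for illustration, not Ristretto's actual implementation):

    import numpy as np

    def to_fixed_point(x, total_bits=8, frac_bits=4):
        # Signed fixed-point quantization: round to the nearest representable
        # value with `frac_bits` fractional bits and saturate to the range
        # expressible in `total_bits` bits.
        scale = 2.0 ** frac_bits
        qmin, qmax = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
        q = np.clip(np.round(x * scale), qmin, qmax)
        return q / scale  # dequantized value the network would actually use

    # Sweeping frac_bits per layer trades dynamic range against precision; a
    # framework can keep the smallest width whose accuracy drop stays within
    # the tolerated error (e.g., 1%), then fine-tune the quantized network.
    w = np.random.randn(64, 3, 3, 3).astype(np.float32)
    w8 = to_fixed_point(w, total_bits=8, frac_bits=5)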
Recently, very deep neural networks have set new records across many application domains, like Residual Networks at the ImageNet challenge and Highway Networks at language processing tasks. We expect further excellent performance improvements in different fields from these very deep networks. However, these networks are still poorly understood, especially since they rely on non-standard architectures. In this contribution we analyze the learning dynamics which are required for successfully training very deep neural networks. For the analysis we use a symplectic network architecture which inherently conserves volume when mapping a representation from one layer to the next. It therefore avoids the vanishing gradient problem, which in turn allows thousands of layers to be trained effectively. We consider highway and residual networks as well as the LSTM model, all of which have approximately volume-conserving mappings. We identify two important factors for making deep architectures work: (1) (near) volume-conserving mappings through $x = x + f(x)$ or similar (cf.\ avoiding the vanishing gradient); (2) controlling the drift effect, which increases/decreases $x$ during propagation toward the output (cf.\ avoiding bias shifts).
External memory has been proven to be essential for the success of neural network-based systems on many tasks, including Question-Answering, classification, machine translation and reasoning. In all those models the memory is used to store instance representations at multiple levels, analogous to the “data” in the Von Neumann architecture of a computer, while the “instructions” are stored in the weights. In this paper, however, we propose to use memory for storing part of the instructions, more specifically the transformation rules in sequence-to-sequence learning tasks, in an external memory attached to a neural system. This memory can be accessed both by the neural network and by human experts, hence serving as an interface for a novel learning paradigm where not only the instances but also the rules can be taught to the neural network. Our empirical study on a synthetic but challenging dataset verifies that our model is effective.
Recent studies have shown that Convolutional Neural Networks (CNNs) are vulnerable to small perturbations of the input known as "adversarial examples". In this work, we propose a new feedforward CNN that improves robustness in the presence of adversarial noise. Our model applies stochastic additive noise to the input image and within the CNN model. The proposed model operates in conjunction with a CNN trained with either a standard or an adversarial objective function. In particular, the convolution, max-pooling, and ReLU layers are modified to benefit from the noise model. Our feedforward model is parameterized by only a mean and variance per pixel, which simplifies computations and makes our method scalable to deep architectures. In tests on CIFAR-10 and ImageNet, the proposed model outperforms other methods, and the improvement is more evident for difficult classification tasks or stronger adversarial noise.
Large amounts of Electronic Health Record (EHR) data have been collected from millions of patients over multiple years. These rich longitudinal EHR data document the collective experiences of physicians, including diagnoses, medication prescriptions and procedures. We argue that it is now possible to leverage EHR data to model how physicians behave, and we call our model Doctor AI. Toward this goal of modeling the clinical behavior of physicians, we develop a successful application of Recurrent Neural Networks (RNN) to jointly forecast future disease diagnoses and medication prescriptions along with their timing. Unlike traditional classification models where a single target is of interest, our model can assess the entire history of a patient and make continuous and multilabel predictions based on the patient's historical data. We evaluate the performance of the proposed method on a large real-world EHR dataset covering 260K patients over 8 years. We observe that Doctor AI can perform differential diagnosis with accuracy similar to physicians. In particular, Doctor AI achieves up to 79% recall@30, significantly higher than several baselines. Moreover, we demonstrate the strong generalizability of Doctor AI by applying the resulting models to data from a completely different medical institution, achieving comparable performance.
The flood of multi-context measurement data from many scientific domains has created an urgent need to reconstruct context-specific variable networks, which could significantly simplify network-driven studies. Computationally, this problem can be formulated as jointly estimating multiple different, but related, sparse Undirected Graphical Models (UGM) from samples aggregated across several contexts. Previous joint-UGM studies could not address this challenge since they mostly focus on Gaussian Graphical Models (GGM) and use likelihood-based formulations to infer multiple graphs toward a common pattern. In contrast, we propose a novel approach, SIMULE (learning Shared and Individual parts of MULtiple graphs Explicitly), to solve multi-task UGM using $\ell_1$-constrained optimization. SIMULE is cast as independent subproblems of linear programming that can be solved efficiently. It automatically infers specific dependencies that are unique to each context as well as shared substructures preserved among all the contexts. SIMULE can handle both multivariate Gaussian and multivariate Nonparanormal data, which greatly relaxes the normality assumption. Theoretically, we prove that SIMULE achieves a consistent result at rate $O(\log(Kp)/n_{tot})$, a rate not proved before. On four synthetic datasets, SIMULE shows significant improvements over state-of-the-art multi-sGGM and single-UGM baselines.
We introduce the recurrent tensor network, a recurrent neural network model that replaces the matrix-vector multiplications of a standard recurrent neural network with bilinear tensor products. We compare its performance against long short-term memory (LSTM) networks. Our results demonstrate that using tensors to capture the interactions between network inputs and history can lead to substantial improvements in predictive performance on the language modeling task.
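As a sketch of what such a bilinear recurrence looks like (the exact parameterization above may differ): instead of the standard update $h_t = f(W x_t + U h_{t-1} + b)$, each hidden unit $k$ is driven by a bilinear form in the input and the previous state,
\[
[h_t]_k = f\!\left( x_t^{\top} W^{[k]} h_{t-1} + \big(V\,[x_t; h_{t-1}]\big)_k + b_k \right),
\]
where the slices $W^{[k]}$ form a third-order tensor, so every input dimension can interact multiplicatively with every dimension of the history.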
This paper proposes a method to obtain class saliency maps that are more distinct than those of Simonyan et al. (2014). We make three improvements over their method: (1) using CNN derivatives with respect to the feature maps of intermediate convolutional layers, followed by up-sampling, instead of derivatives with respect to the input image; (2) subtracting the saliency maps of the other classes from the saliency maps of the target class to differentiate target objects from other objects; (3) aggregating multi-scale class saliency maps to compensate for the lower resolution of the feature maps.
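A minimal PyTorch-style sketch of the first two improvements (gradients taken with respect to an intermediate feature map and then upsampled, with the other classes' maps subtracted); the split of the model into a feature extractor and a head, the layer choice, and the omission of multi-scale aggregation are illustrative assumptions:

    import torch
    import torch.nn.functional as F

    def class_saliency(features, head, image, target_class, other_classes):
        # `features`: image -> intermediate feature map (1, C, h, w)
        # `head`: feature map -> class logits (1, num_classes)
        feats = features(image)
        feats.retain_grad()               # keep gradients at this non-leaf tensor
        logits = head(feats)

        def grad_map(cls):
            if feats.grad is not None:
                feats.grad.zero_()
            logits[0, cls].backward(retain_graph=True)
            g = feats.grad.abs().max(dim=1, keepdim=True)[0]    # (1, 1, h, w)
            return F.interpolate(g, size=image.shape[-2:],
                                 mode='bilinear', align_corners=False)

        saliency = grad_map(target_class)
        for c in other_classes:           # improvement (2): subtract other classes
            saliency = saliency - grad_map(c)
        return saliency.clamp(min=0)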
Tracking particles in a collider is a challenging problem due to collisions, imperfections in sensors and the nonlinear trajectories of particles in a magnetic field. Presently, the algorithms employed to track particles are best suited to capturing linear dynamics. We believe that incremental optimization of current LHC (Large Hadron Collider) tracking algorithms has reached the point of diminishing returns. These algorithms will not be able to cope with the 10-100x increase in HL-LHC (high luminosity) data rates anticipated to exceed O(100) GB/s by 2025, without large investments in computing hardware and software development or without severely curtailing the physics reach of HL-LHC experiments. An optimized particle tracking algorithm that scales linearly with LHC luminosity (or events detected), rather than quadratically or worse, may by itself lead to an order of magnitude improvement in track processing throughput without affecting track identification performance, hence keeping the physics performance intact. Here, we present preliminary results comparing traditional Kalman filtering based methods for tracking with an LSTM approach. We find that an LSTM based solution does not outperform a Kalman filter based solution, arguing for exploring ways to encode a priori information.
The rise of the internet has resulted in an explosion of data consisting of millions of articles, images, songs, and videos. Most of this data is high-dimensional and sparse, where standard compression schemes, such as LSH, become inefficient due to at least one of the following reasons: (1) the compression length is nearly linear in the dimension and grows inversely with the sparsity; (2) the randomness used grows linearly with the product of the dimension and the compression length. We propose an efficient compression scheme mapping binary vectors into binary vectors while simultaneously preserving Hamming distance and inner product. Our scheme avoids all of the above-mentioned drawbacks for high-dimensional sparse data. The length of our compression depends only on the sparsity and is independent of the dimension of the data, and our scheme works in the streaming setting as well. We generalize our scheme to real-valued data and obtain compressions for Euclidean distance, inner product, and k-way inner product.
This workshop explores primitive structural fundaments in information, and then intelligence, as a model of ‘thinking like nature’ (natural informatics). It examines the task of designing a general adaptive intelligence from a low-order (non-anthropic) perspective, to arrive at a least-ambiguous and most-general computational/developmental foundation.
In this paper, we propose to combine neural execution and symbolic execution to query a table with natural language. Our approach makes use of the differentiability of neural networks and transfers (imperfect) knowledge to the symbolic executor before reinforcement learning. Experiments show our approach achieves high learning efficiency, high execution efficiency, high interpretability, as well as high performance.
We describe a mechanism for subsampling sequences and show how to compute its expected output so that it can be trained with standard backpropagation. We test this approach on a simple toy problem and discuss its shortcomings.
In this paper, we propose a model for the classless association between two instances of the same unknown class. This scenario is inspired by the Symbol Grounding Problem and association learning in infants. Our model has two parallel Multilayer Perceptrons (MLPs) and relies on two components. The first component is an EM-training rule that matches the output vectors of an MLP to a statistical distribution. The second component exploits the output classification of one MLP as the target of the other MLP in order to learn agreement on the unknown class. We generate four classless datasets (based on MNIST) with a uniform distribution over the classes. Our model is evaluated against fully supervised and fully unsupervised scenarios. In the first scenario, our model reaches good performance in terms of accuracy and the classless constraint. In the second scenario, our model reaches better results than two clustering algorithms.
We investigate different strategies for active learning with Bayesian deep neural networks. We focus our analysis on scenarios where new, unlabeled data is obtained episodically, as commonly encountered in mobile robotics applications. An evaluation of different strategies for acquisition, updating, and final training on the CIFAR-10 dataset shows that incremental network updates with final training on the accumulated acquisition set are essential for best performance, while limiting the amount of required human labeling effort.
We develop a new class of deep generative model called generative matching networks (GMNs) which is inspired by the recently proposed matching networks for one-shot learning in discriminative tasks. By conditioning on the additional input dataset, generative matching networks may instantly learn new concepts that were not available during the training but conform to a similar generative process, without explicit limitations on the number of additional input objects or the number of concepts they represent. Our experiments on the Omniglot dataset demonstrate that GMNs can significantly improve predictive performance on the fly as more additional data is available and generate examples of previously unseen handwritten characters once only a few images of them are provided.
Training a small-capacity network that performs as well as a larger-capacity network is an important problem for real-life applications which require fast inference and a small memory footprint. Previous approaches that transfer knowledge from a bigger network to a smaller network show little benefit when applied to state-of-the-art convolutional neural network architectures such as Residual Networks trained with batch normalization. We propose a class-distance loss that helps the teacher network form a densely clustered vector space, making it easier for the student network to learn from it. We show that a small network with half the size of the original network, trained with the proposed strategy, can perform close to the original network on the CIFAR-10 dataset.
Caenorhabditis elegans (C. elegans) exhibits remarkable behavioral plasticity, including complex non-associative and associative learning representations. Understanding the principles of such mechanisms can provide constructive inspiration for the design of efficient learning algorithms. In the present study, we propose a novel approach to modeling single neurons and synapses in order to study the mechanisms underlying learning in the C. elegans nervous system. To this end, we construct a precise mathematical model of sensory neurons in which we include multi-scale details from genes, ion channels and ion pumps, together with a dynamic model of synapses comprising neurotransmitter and receptor kinetics. We recapitulate the mechanosensory habituation mechanism, a non-associative learning process, in which elements of the neural network tune their parameters as a result of repeated input stimuli. Accordingly, we quantitatively demonstrate the roots of such plasticity in the neuronal and synaptic-level representations. Our findings can potentially give rise to the development of new bio-inspired learning algorithms.
In state-of-the-art Neural Machine Translation, an attention mechanism is used during decoding to enhance the translation. At every step, the decoder uses this mechanism to focus on different parts of the source sentence to gather the most useful information before outputting its target word. Recently, the effectiveness of the attention mechanism has also been explored for multimodal tasks, where it becomes possible to focus both on sentence parts and image regions. Approaches to pool two modalities usually include element-wise product, sum or concatenation. In this paper, we evaluate the more advanced Multimodal Compact Bilinear pooling method, which takes the outer product of two vectors to combine the attention features for the two modalities. This has been previously investigated for visual question answering. We try out this approach for multimodal image caption translation and show improvements compared to basic combination methods.
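A small numpy sketch of compact bilinear pooling of the kind evaluated above (the hash functions and output dimensionality are illustrative choices): each modality's feature vector is count-sketched, and the sketch of their outer product is obtained as a circular convolution computed in the FFT domain.

    import numpy as np

    def count_sketch(x, h, s, D):
        # Project x (d,) to D dims: bucket indices by hash h, signed by s in {-1, +1}.
        y = np.zeros(D)
        np.add.at(y, h, s * x)
        return y

    def mcb_pool(x, y, D=8192, seed=0):
        rng = np.random.RandomState(seed)
        hx, sx = rng.randint(D, size=len(x)), rng.choice([-1.0, 1.0], size=len(x))
        hy, sy = rng.randint(D, size=len(y)), rng.choice([-1.0, 1.0], size=len(y))
        fx = np.fft.rfft(count_sketch(x, hx, sx, D))
        fy = np.fft.rfft(count_sketch(y, hy, sy, D))
        # Element-wise product in the FFT domain = circular convolution of the
        # sketches, which approximates the (huge) outer product of x and y.
        return np.fft.irfft(fx * fy, n=D)

    # e.g., fusing a 2048-d visual attention feature with a 1024-d textual one.
    z = mcb_pool(np.random.randn(2048), np.random.randn(1024))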
Data are often labeled by many different experts, with each expert labeling a small fraction of the data and each sample receiving multiple labels. When experts disagree, the standard approaches are to treat the majority opinion as the truth or to model the truth as a distribution, but these do not make any use of potentially valuable information about which expert produced which label. We propose modeling the experts individually and then learning averaging weights for combining them, possibly in sample-specific ways. This allows us to give more weight to more reliable experts and take advantage of the unique strengths of individual experts at classifying certain types of data. We show that our approach performs better than three competing methods in computer-aided diagnosis of diabetic retinopathy.
We present a versatile quantitative framework for comparing representations in deep neural networks, based on Canonical Correlation Analysis, and use it to analyze the dynamics of representation learning during the training process of a deep network. We find that layers converge to their final representation from the bottom up, but that the representations themselves migrate downwards in the network over the course of learning.
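A minimal numpy sketch of a CCA-based layer comparison (the framework above adds further machinery, e.g. selecting informative directions before CCA): the canonical correlations between two activation matrices are the singular values of the product of orthonormal bases for their column spaces, and their mean gives a single similarity score per layer pair.

    import numpy as np

    def cca_similarity(X, Y, eps=1e-10):
        # X: (n_examples, d1) activations of one layer; Y: (n_examples, d2) of another.
        X = X - X.mean(axis=0)
        Y = Y - Y.mean(axis=0)

        def orthonormal_basis(A):
            U, s, _ = np.linalg.svd(A, full_matrices=False)
            return U[:, s > eps * s.max()]   # drop numerically null directions

        Qx, Qy = orthonormal_basis(X), orthonormal_basis(Y)
        corrs = np.linalg.svd(Qx.T @ Qy, compute_uv=False)  # canonical correlations
        return corrs.mean()

    # Tracking cca_similarity(layer_at_step_t, layer_at_end) over training steps
    # shows when each layer's representation has converged to its final form.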
Deep neural networks (DNN) have shown unprecedented success in various computer vision applications such as image classification and object detection. However, it remains a common burden during the training phase that one has to prepare at least thousands of labeled images to fine-tune a network to a specific domain. Recent studies show that a DNN has strong dependency on the training dataset, and the learned features cannot be easily transferred to a different but relevant task without fine-tuning. In this paper, we propose a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN), to increase the generalization ability of a DNN. By modulating the statistics from the source domain to the target domain in all Batch Normalization layers across the network, our approach achieves a deep adaptation effect for domain adaptation tasks. In contrast to other deep learning domain adaptation methods, our method does not require additional components and is parameter-free. It achieves state-of-the-art performance despite its surprising simplicity. Furthermore, we demonstrate that our method is complementary to other existing methods. Combining AdaBN with existing domain adaptation treatments may further improve model performance.
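A minimal PyTorch-style sketch of the idea (assuming the model uses standard BatchNorm2d layers and that the loader yields (image, label) pairs whose labels are ignored): the network's weights are left untouched and only the batch normalization statistics are re-estimated on the target domain.

    import torch
    import torch.nn as nn

    @torch.no_grad()
    def adapt_bn_statistics(model, target_loader, device='cuda'):
        for m in model.modules():
            if isinstance(m, nn.BatchNorm2d):
                m.reset_running_stats()
                m.momentum = None         # cumulative average over all target batches
        model.train()                     # BN updates running stats only in train mode
        for images, _ in target_loader:   # unlabeled target data; labels are ignored
            model(images.to(device))
        return model.eval()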
The standard interpretation of importance-weighted autoencoders is that they maximize a tighter lower bound on the marginal likelihood. We give an alternate interpretation of this procedure: that it optimizes the standard variational lower bound, but using a more complex distribution. We formally derive this result, and visualize the implicit importance-weighted approximate posterior.
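For reference, the bound in question averages $k$ importance weights inside the logarithm,
\[
\mathcal{L}_k(x) = \mathbb{E}_{z_1,\dots,z_k \sim q(z \mid x)}\!\left[ \log \frac{1}{k} \sum_{i=1}^{k} \frac{p(x, z_i)}{q(z_i \mid x)} \right],
\]
which reduces to the standard variational lower bound at $k = 1$ and tightens toward $\log p(x)$ as $k$ grows; the alternate reading above regards the same objective as the ordinary bound evaluated under an implicit, more expressive approximate posterior.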
Multimodal representations of text and images have become popular in recent years. Text however has inherent ambiguities when describing visual scenes, leading to the recent development of datasets with detailed graphical descriptions in the form of scene graphs. We consider the task of joint representation of semantically precise scene graphs and images. We propose models for representing scene graphs and aligning them with images. We investigate methods based on bag-of-words, subpath representations, as well as neural networks. Our investigation proposes and contrasts several models which can address this task and highlights some unique challenges in both designing models and evaluation.
Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128 × 128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128 × 128 samples are more than twice as discriminable as artificially resized 32 × 32 samples. In addition, 84.7% of the classes have samples exhibiting diversity comparable to real ImageNet data.
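One common way to realize this kind of label conditioning (an auxiliary-classifier objective, given here as an illustrative form since the abstract does not spell out its exact variant) is to have the discriminator output both a source distribution $P(S \mid X)$ over real/fake and a class distribution $P(C \mid X)$, with
\[
L_S = \mathbb{E}[\log P(S = \mathrm{real} \mid X_{\mathrm{real}})] + \mathbb{E}[\log P(S = \mathrm{fake} \mid X_{\mathrm{fake}})], \qquad
L_C = \mathbb{E}[\log P(C = c \mid X_{\mathrm{real}})] + \mathbb{E}[\log P(C = c \mid X_{\mathrm{fake}})],
\]
the discriminator trained to maximize $L_S + L_C$ and the generator trained to maximize $L_C - L_S$, so that samples are pushed to be both realistic and recognizably of their intended class.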