Dataset Viewer

forum_id (string, 9–20 chars) | forum_title (string, 4–170 chars) | forum_authors (sequence, 0–34 items) | forum_abstract (string, 17–2.78k chars) | forum_keywords (sequence, 1–29 items) | forum_decision (string, 9 classes) | forum_pdf_url (string, 39–50 chars) | forum_url (string, 41–52 chars) | venue (string, 19 classes) | year (date, 2016-01-01–2025-01-01) | reviews (sequence) | paper_markdown (string, 0–433k chars) | pdf_status (string, 4 classes)
---|---|---|---|---|---|---|---|---|---|---|---|---
VAVqG11WmSx0Wk76TAzp | Learning to SMILE(S) | [
"Stanisław Jastrzębski",
"Damian Leśniak",
"Wojciech Marian Czarnecki"
] | This paper shows how one can directly apply natural language processing (NLP) methods to classification problems in cheminformatics.
The connection between these seemingly separate fields is shown by considering SMILES, the standard textual representation of a compound.
The problem of activity prediction against a target protein is considered, which is a crucial part of the computer-aided drug design process.
The conducted experiments show that in this way one can not only outrank state-of-the-art results of hand-crafted representations but also
gain direct structural insights into the way decisions are made. | [
"natural language processing",
"nlp",
"methods",
"problems",
"cheminformatics",
"connection",
"separate fields",
"standard textual representation",
"compound",
"smiles"
] | https://openreview.net/pdf?id=VAVqG11WmSx0Wk76TAzp | https://openreview.net/forum?id=VAVqG11WmSx0Wk76TAzp | ICLR.cc/2016/workshop | 2016 | {
"note_id": [
"yoW88O3o6tr682gwszyW",
"k80qQJLKLfOYKX7ji4Q3",
"q7kg0385MS8LEkD3t7NQ"
],
"note_type": [
"review",
"review",
"review"
],
"note_created": [
1458858094330,
1457038462114,
1457637290140
],
"note_signatures": [
[
"ICLR.cc/2016/workshop/paper/173/reviewer/10"
],
[
"ICLR.cc/2016/workshop/paper/173/reviewer/11"
],
[
"ICLR.cc/2016/workshop/paper/173/reviewer/12"
]
],
"structured_content_str": [
"{\"title\": \"Good work on applying sequence classification advancements to new applications\", \"rating\": \"7: Good paper, accept\", \"review\": \"This works shows that SMILES (walks across a graph of atomic connections) allows the advancements in text classification to be brought to cheminformatics with good results compared to using the latest hand-tuned features. The idea is not particularly novel but the results show how an unrelated area can fairly easily benefit from advancements in DL NLP. The intuition that localised features determine molecular binding seems like a great fit for sentiment analysis techniques.\\n\\nIt would be nice to show the molecule lengths and sizes of dataset in Table 1, I don't think these are mentioned anywhere. It would also be nice to try newer sequence prediction techniques such as LSTMs (possibly with pretraining).\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good idea for using text models on string representations of molecules\", \"rating\": \"7: Good paper, accept\", \"review\": \"The idea is excellent, although somewhat obvious. The paper describes training character-based text models directly on SMILES files that encode chemical graphs as strings and then making predictions about the molecules.\", \"reasons_to_accept\": [\"good idea that works at least somewhat\"], \"reasons_to_reject\": [\"limited empirical evaluation\", \"Need to explain the SMILES format well enough that the data augmentation procedure is clear. How is the walk determined normally?\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"well done\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"the analogy between CI and sentiment analysis is intriguing and potentially fruitful. the community will definitely appreciate this work. hopefully authors can increase data resources in follow-up work to further improve performance over classifiers with engineered features.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
LEARNING TO SMILE(S)
Stanisław Jastrzębski, Damian Leśniak & Wojciech Marian Czarnecki
Faculty of Mathematics and Computer Science
Jagiellonian University
Kraków, Poland
[email protected]
ABSTRACT
This paper shows how one can directly apply natural language processing (NLP) methods to classification problems in cheminformatics. The connection between these seemingly separate fields is shown by considering SMILES, the standard textual representation of a compound. The problem of activity prediction against a target protein is considered, which is a crucial part of the computer-aided drug design process. The conducted experiments show that in this way one can not only outrank state-of-the-art results of hand-crafted representations but also gain direct structural insights into the way decisions are made.
1 INTRODUCTION
Computer-aided drug design has become a very popular technique for speeding up the process of finding new biologically active compounds by drastically reducing the number of compounds to be tested in the laboratory. A crucial part of this process is virtual screening, where one considers a set of molecules and predicts whether the molecules will bind to a given protein. This research focuses on ligand-based virtual screening, where the problem is modelled as a supervised, binary classification task using only knowledge about the ligands (drug candidates) rather than information about the target (protein).

Cheminformatics is believed to be one of the most underrepresented application areas of deep learning (DL) (Unterthiner et al., 2014; Bengio et al., 2012), mostly due to the fact that the data is naturally represented as graphs and there are few direct ways of applying DL in such a setting (Henaff et al., 2015). Notable examples of DL successes in this domain are the winning entry to the Merck competition in 2012 (Dahl et al., 2014) and a Convolutional Neural Network (CNN) used for improving data representation (Duvenaud et al., 2015). To the authors' best knowledge, all of the above methods use hand-crafted representations (called fingerprints) or use DL methods in a limited fashion. The main contribution of this paper is showing that one can directly apply DL methods (without any customization) to the textual representation of a compound (where characters are atoms and bonds). This is analogous to recent work showing that state-of-the-art performance in language modelling can be achieved by considering a character-level representation of text (Kim et al., 2015; Jozefowicz et al., 2016).
1.1 REPRESENTING MOLECULES
The standard way of representing a compound in any chemical database is SMILES, which is just a string of atoms and bonds constructing the molecule (see Fig. 1) using a specific walk over the graph. Quite surprisingly, this representation is rarely used as a basis for machine learning (ML) methods (Worachartcheewan et al., 2014; Toropov et al., 2010).

Most of the classical ML models used in cheminformatics (such as Support Vector Machines or Random Forests) work with a constant-size vector representation obtained through some predefined embedding (called fingerprints). As a result, many such fingerprints have been proposed over the years (Hall & Kier, 1995; Steinbeck et al., 2003). Among the most common are the substructural ones - an analogue of the bag-of-words representation in NLP, where the fingerprint is defined as a set of graph templates (SMARTS), which are then matched against the molecule to produce a binary (set of words) or count (bag of words) representation. One could ask if this is really necessary, having at one's disposal DL methods of feature learning.
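To make the substructural-fingerprint idea concrete, below is a minimal sketch of a bag-of-SMARTS representation. It assumes the RDKit toolkit is available and uses a few illustrative SMARTS templates; the actual MACCS and Klekota–Roth template sets are not reproduced here.

```python
# Minimal sketch of a substructural (bag-of-SMARTS) fingerprint.
# Assumes RDKit is installed; the patterns below are illustrative only,
# not the real MACCS or Klekota-Roth template sets.
from rdkit import Chem

SMARTS_TEMPLATES = ["[OX2H]",      # hydroxyl group
                    "c1ccccc1",    # benzene ring
                    "[NX3;H2]"]    # primary amine

def substructural_fingerprint(smiles, templates=SMARTS_TEMPLATES, counts=False):
    """Return a binary (set-of-words) or count (bag-of-words) fingerprint."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError("could not parse SMILES: %r" % smiles)
    fp = []
    for smarts in templates:
        patt = Chem.MolFromSmarts(smarts)
        matches = mol.GetSubstructMatches(patt)   # unique substructure matches
        fp.append(len(matches) if counts else int(len(matches) > 0))
    return fp

print(substructural_fingerprint("Nc1ccccc1O"))               # binary fingerprint
print(substructural_fingerprint("Nc1ccccc1O", counts=True))  # count fingerprint
```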
1.2 ANALOGY TO SENTIMENT ANALYSIS
The main contribution of this paper is identifying an analogy to NLP and specifically sentiment analysis, which is tested by applying state-of-the-art methods (Mesnil et al., 2014) directly to the SMILES representation. The analogy is motivated by two facts. First, small local changes to the structure can imply a large overall activity change (see Fig. 2), just like sentiment is a function of the sentiments of different clauses and their connections, which is the main argument for the effectiveness of DL methods in this task (Socher et al., 2013). Second, perhaps surprisingly, a compound graph is almost always nearly a tree. To confirm this claim we calculate molecule diameters, defined as the maximum over all atoms of the minimum distance between a given atom and the longest carbon chain in the molecule. It appears that in practice the analyzed molecules have diameters between 1 and 6, with a mean of 4. Similarly, despite the way people write down text, human thoughts are not linear and sentences can have complex clauses. In conclusion, in organic chemistry one can make an analogy between the longest carbon chain and a sentence, where branches stemming out of the longest chain are treated as clauses in NLP.
Figure 1: SMILES produced for the compound in the figure is N(c1)ccc1N.
Figure 2: Substituting the highlighted carbon atom with nitrogen renders the compound inactive.
Figure 3: Visualization of CNN filters of size 5 for active (top row) and inactive molecules.
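The diameter measure used above for the "nearly a tree" claim can be computed with a multi-source breadth-first search once the longest carbon chain is known. The sketch below assumes the molecular graph is given as an adjacency list and the longest chain as a set of atom indices; finding the longest chain itself is not shown.

```python
from collections import deque

def molecule_diameter(adjacency, backbone):
    """Max over all atoms of the min graph distance to the longest carbon chain.

    adjacency: dict mapping atom index -> list of neighbouring atom indices
    backbone:  set of atom indices forming the longest carbon chain
    """
    dist = {a: 0 for a in backbone}
    queue = deque(backbone)                 # multi-source BFS starting from the chain
    while queue:
        a = queue.popleft()
        for b in adjacency[a]:
            if b not in dist:
                dist[b] = dist[a] + 1
                queue.append(b)
    return max(dist.values())

# Toy example: a 5-atom chain (0-4) with a 2-atom branch hanging off atom 2.
adj = {0: [1], 1: [0, 2], 2: [1, 3, 5], 3: [2, 4], 4: [3], 5: [2, 6], 6: [5]}
print(molecule_diameter(adj, backbone={0, 1, 2, 3, 4}))  # -> 2
```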
2 EXPERIMENTS
Five datasets are considered. Besides SMILES, two baseline fingerprint compound representations are used, namely MACCS (Ewing et al., 2006) and Klekota–Roth (Klekota & Roth, 2008) (KR; considered the state of the art in substructural representation (Czarnecki et al., 2015)). Each dataset is fairly small (mean size is 3000) and most of the datasets are slightly imbalanced (with a mean class ratio around 1:2). It is worth noting that chemical databases are usually fairly big (ChEMBL contains 1.5M compounds), which hints at possible gains from using semi-supervised learning techniques.
Tested models include both traditional classifiers: a Support Vector Machine (SVM) using the Jaccard kernel, Naive Bayes (NB), and Random Forest (RF), and neural network models: a Recurrent Neural Network Language Model (Mikolov et al., 2011b) (RNNLM), a Recurrent Neural Network (RNN) many-to-one classifier, a Convolutional Neural Network (CNN), and a Feed Forward Neural Network with ReLU activation. Models were selected to fit two criteria: to span state-of-the-art models in single-target virtual screening (Czarnecki et al., 2015; Smusz et al., 2013) and to cover state-of-the-art models in sentiment analysis. For the CNN and RNN a form of data augmentation is used, where for each molecule random SMILES walks are computed and predictions are averaged (not doing so strongly degrades performance, mostly due to overfitting). For methods which are not designed to work on string representations (such as SVM, NB, RF, etc.), SMILES are embedded as n-gram models with simple tokenization ([Na+] becomes a single token). For all the remaining ones, SMILES are treated as strings composed of 2-character symbols (thus capturing an atom and its relation to the next one).
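A minimal sketch of the two string embeddings just described: simple tokenization in which bracket expressions such as [Na+] become single tokens (used to build n-gram features), and a split into 2-character symbols. The regular expression and the fixed-width split are assumptions; the paper does not spell out its exact tokenizer.

```python
import re
from collections import Counter

# Treat bracket atoms like [Na+] or [nH] as single tokens, otherwise single characters.
TOKEN_RE = re.compile(r"\[[^\]]+\]|.")

def tokenize(smiles):
    return TOKEN_RE.findall(smiles)

def ngram_counts(smiles, n=2):
    tokens = tokenize(smiles)
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def two_char_symbols(smiles):
    # One plausible reading of the "2-character symbols": a fixed-width split.
    return [smiles[i:i + 2] for i in range(0, len(smiles), 2)]

smiles = "CC(=O)[O-].[Na+]"   # sodium acetate, used only as a tokenization demo
print(tokenize(smiles))
print(ngram_counts(smiles, n=2))
print(two_char_symbols(smiles))
```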
Using the RNNLM, p(compound|active) and p(compound|inactive) are modelled separately and classification is done through logistic regression fitted on top. For the CNN, a purely supervised version of CONTEXT, the current state of the art in sentiment analysis (Johnson & Zhang, 2015), is used. A notable feature of the model is that it works directly on a one-hot representation of the data. Each model is evaluated using 5-fold stratified cross validation. An internal 5-fold grid search is used for fitting hyperparameters (truncated in the case of deep models). We use log loss as an evaluation metric to include both classification results and the uncertainty measure provided by the models. Similar conclusions hold for accuracy.
Table 1: Log-loss (± std) of each model for a given protein and representation.

| Representation | Model | 5-HT1A | 5-HT2A | 5-HT7 | H1 | SERT |
|---|---|---|---|---|---|---|
| SMILES | CNN | 0.249 ± 0.015 | 0.284 ± 0.026 | 0.289 ± 0.041 | 0.182 ± 0.030 | 0.221 ± 0.032 |
| SMILES | SVM | 0.255 ± 0.009 | 0.309 ± 0.027 | 0.302 ± 0.033 | 0.202 ± 0.037 | 0.226 ± 0.015 |
| SMILES | GRU | 0.274 ± 0.016 | 0.340 ± 0.035 | 0.347 ± 0.045 | 0.222 ± 0.042 | 0.269 ± 0.032 |
| SMILES | RNNLM | 0.363 ± 0.020 | 0.431 ± 0.025 | 0.486 ± 0.065 | 0.283 ± 0.066 | 0.346 ± 0.102 |
| KRFP | SVM | 0.262 ± 0.016 | 0.311 ± 0.021 | 0.326 ± 0.035 | 0.188 ± 0.022 | 0.226 ± 0.014 |
| KRFP | RF | 0.264 ± 0.029 | 0.297 ± 0.012 | 0.322 ± 0.038 | 0.210 ± 0.015 | 0.228 ± 0.022 |
| KRFP | NN | 0.285 ± 0.026 | 0.331 ± 0.015 | 0.375 ± 0.072 | 0.232 ± 0.034 | 0.240 ± 0.024 |
| KRFP | NB | 0.634 ± 0.045 | 0.788 ± 0.073 | 1.201 ± 0.315 | 0.986 ± 0.331 | 0.726 ± 0.066 |
| MACCS | SVM | 0.310 ± 0.012 | 0.339 ± 0.017 | 0.382 ± 0.019 | 0.237 ± 0.027 | 0.280 ± 0.030 |
| MACCS | RF | 0.261 ± 0.008 | 0.294 ± 0.015 | 0.335 ± 0.034 | 0.202 ± 0.004 | 0.237 ± 0.029 |
| MACCS | NN | 0.377 ± 0.005 | 0.422 ± 0.025 | 0.463 ± 0.047 | 0.278 ± 0.027 | 0.369 ± 0.020 |
| MACCS | NB | 0.542 ± 0.043 | 0.565 ± 0.014 | 0.660 ± 0.050 | 0.477 ± 0.042 | 0.575 ± 0.017 |
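The evaluation protocol above (5-fold stratified cross-validation scored with log loss) can be reproduced with standard tooling; a minimal scikit-learn sketch follows. It uses a Random Forest on synthetic stand-in features; the actual datasets, featurization, and inner hyperparameter grid are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import log_loss
from sklearn.model_selection import StratifiedKFold

rng = np.random.RandomState(0)
X = rng.poisson(1.0, size=(300, 50)).astype(float)   # stand-in n-gram count features
y = rng.binomial(1, 1.0 / 3.0, size=300)             # roughly 1:2 class ratio, as in the paper

scores = []
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    proba = clf.predict_proba(X[test_idx])
    scores.append(log_loss(y[test_idx], proba))

print("log-loss: %.3f +/- %.3f" % (np.mean(scores), np.std(scores)))
```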
2.1 RESULTS
Results are presented in Table 1. First, the performance of simple n-gram models (SVM, RF) is close to that of the hand-crafted state-of-the-art representations, which suggests that potentially any NLP classifier working on an n-gram representation might be applicable. Maybe even more interestingly, the current state-of-the-art model for sentiment analysis - the CNN - outperforms the traditional models despite the small dataset size (however by a small margin).

Hyperparameters selected for the CNN (CONTEXT) are similar to the parameters reported in (Johnson & Zhang, 2015). In particular, maximum pooling (as opposed to average pooling) and moderately sized regions (5 and 3) performed best (see Fig. 3). In NLP this effect is strongly correlated with the fact that a small portion of a sentence can contribute strongly to its overall sentiment, thus confirming the claimed molecule-sentiment analogy.

The RNN classifier's low performance can be attributed to small dataset sizes, as RNNs are commonly applied to significantly larger volumes of data (Mikolov et al., 2011a). One alternative is to consider a semi-supervised version of the RNN (Dai & Le, 2015). Another problem is that compound activity prediction requires remembering very long interactions, especially since neighbouring atoms in the SMILES walk are often disconnected in the original molecule.
3 CONCLUSIONS
This work focuses on the problem of compound activity prediction without hand-crafted features used to represent complex molecules. The presented analogies with NLP problems, and in particular sentiment analysis, followed by experiments performed with the use of state-of-the-art methods from both NLP and cheminformatics, seem to confirm that one can actually learn directly from the raw string representation of SMILES instead of the currently used embeddings. In particular, the performed experiments show that despite being trained on relatively small datasets, a CNN-based solution can actually outperform state-of-the-art methods based on structural fingerprints in the ligand-based virtual screening task. At the same time it opens the possibility to easily incorporate unsupervised and semi-supervised techniques into the models, making use of huge databases of chemical compounds. It appears that cheminformatics can strongly benefit from NLP, and further research in this direction should be conducted.
ACKNOWLEDGMENTS
The first author was supported by Grant No. DI 2014/016644 from the Ministry of Science and Higher Education, Poland.
REFERENCES
Yoshua Bengio, Aaron Courville, and Pascal Vincent. Unsupervised feature learning and deep
learning: A review and new perspectives. CoRR, abs/1206.5538, 2012. URL http://arxiv.
org/abs/1206.5538.
Wojciech Marian Czarnecki, Sabina Podlewska, and Andrzej Bojarski. Robust optimization of svm
hyperparameters in the classification of bioactive compounds. Journal of Cheminformatics, 7(38),
2015.
George Dahl, Navdeep Jaitly, and Ruslan Salakhutdinov. Multi-task neural networks for QSAR
predictions. CoRR, abs/1406.1231, 2014. URL http://arxiv.org/abs/1406.1231.
Andrew Dai and Quoc Viet Le. Semi-supervised sequence learning. In C. Cortes, N. D. Lawrence,
D. D. Lee, M. Sugiyama, and R. Garnett (eds.), Advances in Neural Information Processing
Systems 28, pp. 3061–3069. Curran Associates, Inc., 2015. URL http://papers.nips.
cc/paper/5949-semi-supervised-sequence-learning.pdf.
David Kristjanson Duvenaud, Dougal Maclaurin, Jorge Aguilera-Iparraguirre, Rafael Gómez-
Bombarelli, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan Prescott Adams. Convolutional
networks on graphs for learning molecular fingerprints. CoRR, abs/1509.09292, 2015. URL
http://arxiv.org/abs/1509.09292.
Todd Ewing, J. Christian Baber, and Miklos Feher. Novel 2d fingerprints for ligand-based virtual
screening. Journal of Chemical Information and Modeling, 46(6):2423–2431, 2006. URL http:
//dx.doi.org/10.1021/ci060155b.
Lowell Hall and Lemont Kier. Electrotopological state indices for atom types: A novel combina-
tion of electronic, topological, and valence state information. Journal of Chemical Information
and Computer Sciences, 35(6):1039–1045, 1995. URL http://dblp.uni-trier.de/db/
journals/jcisd/jcisd35.html#HallK95.
Mikael Henaff, Joan Bruna, and Yann LeCun. Deep convolutional networks on graph-structured
data. CoRR, abs/1506.05163, 2015. URL http://arxiv.org/abs/1506.05163.
Rie Johnson and Tong Zhang. Effective use of word order for text categorization with convolutional
neural networks. In NAACL HLT 2015, The 2015 Conference of the North American Chapter
of the Association for Computational Linguistics: Human Language Technologies, pp. 103–112,
2015. URL http://aclweb.org/anthology/N/N15/N15-1011.pdf.
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the
limits of language modeling. volume abs/1602.02410, 2016. URL http://arxiv.org/
abs/1602.02410.
Yoon Kim, Yacine Jernite, David Sontag, and Alexander Rush. Character-aware neural language
models. CoRR, abs/1508.06615, 2015. URL http://arxiv.org/abs/1508.06615.
Justin Klekota and Frederick Roth. Chemical substructures that enrich for biological activ-
ity. Bioinformatics, 24(21):2518–2525, 2008. URL http://dblp.uni-trier.de/db/
journals/bioinformatics/bioinformatics24.html#KlekotaR08.
Grégoire Mesnil, Tomas Mikolov, Marc’Aurelio Ranzato, and Yoshua Bengio. Ensemble of genera-
tive and discriminative techniques for sentiment analysis of movie reviews. CoRR, abs/1412.5335,
2014. URL http://arxiv.org/abs/1412.5335.
Tomas Mikolov, Anoop Deoras, Daniel Povey, Lukás Burget, and Jan Cernocký. Strategies for
training large scale neural network language models. In David Nahamoo and Michael Picheny
(eds.), 2011 IEEE Workshop on Automatic Speech Recognition & Understanding, ASRU, pp. 196–
201. IEEE, 2011a. URL http://dx.doi.org/10.1109/ASRU.2011.6163930.
Tomas Mikolov, Stefan Kombrink, Anoop Deoras, Lukar Burget, and Jan Cernocky. Rnnlm-
recurrent neural network language modeling toolkit. Proc. of the 2011 ASRU Workshop, pp.
196–201, 2011b.
Sabina Smusz, Rafał Kurczab, and Andrzej Bojarski. A multidimensional analysis of machine
learning methods performance in the classification of bioactive compounds. Chemometrics and
Intelligent Laboratory Systems, 128:89–100, 2013.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng,
and Christopher Potts. Recursive Deep Models for Semantic Compositionality Over a Sentiment
Treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language
Processing, pp. 1631–1642, 2013. URL http://www.aclweb.org/anthology-new/
D/D13/D13-1170.bib.
Christoph Steinbeck, Yongquan Han, Stefan Kuhn, Oliver Horlacher, Edgar Luttmann, and Egon
Willighagen. The chemistry development kit (cdk): An open-source java library for chemo- and
bioinformatics. Journal of Chemical Information and Computer Sciences, 43(2):493–500, 2003.
Andrey Toropov, Alla Toropova, and E Benfenati. Smiles-based optimal descriptors: Qsar modeling
of carcinogenicity by balance of correlations with ideal slopes. European journal of medicinal
chemistry, 45(9):3581—3587, September 2010. URL http://dx.doi.org/10.1016/j.
ejmech.2010.05.002.
Thomas Unterthiner, Andreas Mayr, Günter Klambauer, Marvin Steijaert, Jörg Wenger, Hugo Ceule-
mans, and Sepp Hochreiter. Deep learning as an opportunity in virtual screening. Deep Learning
and Representation Learning Workshop (NIPS 2014), 2014.
Apilak Worachartcheewan, Prasit Mandi, Virapong Prachayasittikul, Alla Toropova, Andrey
Toropov, and Chanin Nantasenamat. Large-scale qsar study of aromatase inhibitors using smiles-
based descriptors. Chemometrics and Intelligent Laboratory Systems, 138(Complete):120–126,
2014.
| success |
|
mO9m5Rrm6tj1gPZ3UlOX | Contextual convolutional neural network filtering improves EM image segmentation | [
"Xundong Wu",
"Yong Wu",
"Ligia Toro",
"Enrico Stefani"
] | We designed a contextual filtering algorithm for improving the quality of image segmentation. The algorithm was applied to the task of building Membrane Detection Probability Maps (MDPM) for segmenting electron microscopy (EM) images of brain tissues. To achieve this, we executed supervised training of a convolutional neural network to recover the ground-truth label of the masked-out center pixel from patches sampled from an un-refined MDPM. Through this training process the model learns the distribution of the segmentation ground-truth map. By applying this trained network over MDPMs we are able to integrate contextual information and obtain maps with better spatial consistency in the high-level representation space. By iteratively applying this network over the MDPMs for multiple rounds, we were able to significantly improve the EM image segmentation results. | [
"mdpm",
"mdpms",
"able",
"contextual filtering algorithm",
"quality",
"image segmentation",
"algorithm"
] | https://openreview.net/pdf?id=mO9m5Rrm6tj1gPZ3UlOX | https://openreview.net/forum?id=mO9m5Rrm6tj1gPZ3UlOX | ICLR.cc/2016/workshop | 2016 | {
"note_id": [
"vlpO4kGWBu7OYLG5inyZ",
"ANYym5MWWSNrwlgXCqMV",
"D1VM0VvvqS5jEJ1zfERV"
],
"note_type": [
"review",
"review",
"review"
],
"note_created": [
1458134889538,
1458057144427,
1458221343179
],
"note_signatures": [
[
"ICLR.cc/2016/workshop/paper/176/reviewer/11"
],
[
"ICLR.cc/2016/workshop/paper/176/reviewer/10"
],
[
"ICLR.cc/2016/workshop/paper/176/reviewer/12"
]
],
"structured_content_str": [
"{\"title\": \"Review of Contextual convolutional neural network filtering improves EM image segmentation\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper presents an iterative method to progressively clean up segmentation maps, by repeatedly applying a CNN to the output of the previous iteration. The novelty seems to lie in the application domain rather than in the network/model/training regime. The paper is quite unclearly written: I was unsure what the architecture of the network is, and how the iteration is applied. For the results, only 1 example is provided, which is insufficient, even for a workshop paper.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Review of CONTEXTUAL CONVOLUTIONAL NEURAL NETWORK FILTERING IMPROVES EM IMAGE SEGMENTATION\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The description of the authors approach is not clear and would need to be re-writen. As far as I understand, they generate pixel probability maps to belong to the foreground/background that they give as input of the CNN of Ciceran et al. 2012 instead of the image patches, and iteratively use the obtained output to feed again Ciseran et al's CNN.\\n\\nHowever, I am not sure I fully understand what the authors really did because I don't see why they write \\u201cIt is also important to point out that training with the ground-truth map directly provided no benefit in improving the segmentation quality\\u201d The authors evaluate the results of their approach on the neuronal image dataset from the ISBI 2012 challenge.\\n\\nThe novely of the approach is not high but sufficient for a workshop submission and results are convincing, the major remaining problem remains the clarity of the paper.\", \"missing_related_work\": \"Turaga et al, Convolutional networks can learn to generate affinity graphs for image segmentation, 2010.\", \"minor\": \"abstract: no space before a dot by training -> train introduction of I-CNN: we don't understand in the first reading that it refers to the authors method proposition\\n\\nIn Jurrus et al. (2010); Pinheiro & Collobert (2013); Lee et al. (2015); Tu\\n\\n(2008); Tu & Bai (2010), they applied\\u2026 -> Jurrus et al. (2010); Pinheiro & Collobert (2013); Lee et al. (2015); Tu (2008); Tu & Bai (2010) applied \\u2026\\n\\nFigure1 -> Figure 1\\n3 SYSTEM DESCRIPTION AND RESULT -> 3 SYSTEM DESCRIPTION\", \"reference_list\": \"please remove the reference that are not cited in the text or add citations in the text.\\n\\nhao -> Hao\\nwater-shed -> watershed\", \"in_the_conclusion\": \"\\u201cthe new algorithm\\u201d -> the new procedure\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The paper describes a method to clean up and improve boundary maps obtained with CNNs by processing them with additional CNNs. The reported results seem impressive (although, I have never worked on this application). Unfortunately, the description of the method is essentially lacking. From what I understood, the method is very similar to several previous works including:\\n\\nVolodymyr Mnih, Geoffrey E. Hinton:\\nLearning to Detect Roads in High-Resolution Aerial Images. ECCV (6) 2010 (not cited)\", \"a_seminal_paper_from_the_pre_deep_learning_era\": \"\", \"zhuowen_tu\": \"Auto-context and its application to high-level vision tasks. CVPR 2008 (cited)\\n\\nPerhaps, the detailization of the algorithm and the amount of novelty are not sufficient for acceptance to ICLR that focuses on new approaches and algorithms for learning representations. However I can imagine that venues/conferences/workshops for people working on membrane segmentation/connectomics would be interested.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
CONTEXTUAL CONVOLUTIONAL NEURAL NETWORK
FILTERING IMPROVES EM IMAGE SEGMENTATION
Xundong Wu, Yong Wu, Ligia Toro & Enrico Stefani
Department of Anesthesiology
University of California, Los Angeles
{xundong,wuyong,ltoro,estefani}@ucla.edu
ABSTRACT
We designed a contextual filtering algorithm for improving the quality of image segmentation. The algorithm was applied to the task of building Membrane Detection Probability Maps (MDPM) for segmenting electron microscopy (EM) images of brain tissues. To achieve this, we executed supervised training of a convolutional neural network to recover the ground-truth label of the masked-out center pixel from patches sampled from an un-refined MDPM. Through this training process the model learns the distribution of the segmentation ground-truth map. By applying this trained network over MDPMs we are able to integrate contextual information and obtain maps with better spatial consistency in the high-level representation space. By iteratively applying this network over the MDPMs for multiple rounds, we were able to significantly improve the EM image segmentation results.
1 INTRODUCTION
To further expand our understanding of the structure and the working mechanisms of the human brain, it is necessary to map the entire neural connections of the nervous system at the micro-scale level. One main approach is to acquire serial 2-D images of brain tissues at nanometric resolution with serial-section Transmitted Electron Microscopy (ssTEM). Much effort has been made to develop tools to automatically process those images. Previous approaches (Ciresan et al., 2012; Jurrus et al., 2010; Jain et al., 2007) use the contextual information surrounding a pixel to assign a probability value of it representing the cell membrane. Applying those detectors to every pixel of an original EM image leads to a pixel-wise Membrane Detection Probability Map (MDPM). Post-processing on top of these detection maps is necessary in order to obtain the final region segmentation results. The post-processing can be simple, for example thresholding, or creating a smoother probability map by using a median filter (Ciresan et al., 2012).

In this work, we show that simply by training an Iterative Convolutional Neural Network (I-CNN) to recover the masked-out center pixel value of patches sampled from MDPMs, and then iteratively applying this network over the resultant MDPMs, one can obtain a high-quality segmentation map, both visually and as measured by the foreground-restricted Rand score (Arganda-Carreras et al., 2015; Jain et al., 2010).
2 RELATED WORKS
Computer vision research on real-world image contour detection and segmentation tasks has come
up with many solutions to ensure the consistency of image segmentation. For the EM image segmen-
tation task, the detection of membrane resembles the contour detection problem in general computer
vision. The quality of membrane detection can be directly measured by the pixel error; but just like in contour detection, a high-quality membrane detection does not guarantee a good segmentation (Arbelaez et al., 2011). A small gap in the contour formed by the detected membrane can lead to
an incorrect merge of two different regions, or a false section of membrane can incorrectly split one
region into two parts.
In a typical MDPM such as the one from Ciresan et al. (2012), the detection probability of every pixel is computed in isolation. In Ciresan et al. (2012), the authors used a simple median filter with a small radius to smooth out the detection map.

(a) Original image (b) Original P map (c) Round 8 (d) Ground truth
Figure 1: One sample image processed by the I-CNN. The probability map is shown in an inverted manner: the darkest pixel value means that the probability of being membrane is 1. The original image is a raw EM image; the original P map is the MDPM output from the base CNN network; Round 8 is the map after 8 rounds of I-CNN processing; the ground truth is the map labeled by a human expert.

However, the limitation of applying a median
filter is its isotropy: if one applies a median filter with an increasing radius to the MDPMs, the performance will quickly deteriorate as the radius becomes larger. Therefore, an algorithm that can take long-range information into account while avoiding isotropic smoothing was needed. To achieve this goal, we have developed an Iterative Convolutional Neural Network (I-CNN) that significantly improves the definition of boundaries. Jurrus et al. (2010); Pinheiro & Collobert (2013); Lee et al. (2015); Tu (2008); Tu & Bai (2010) applied a network conditioned on both the raw image features and the previous round's label map to recursively refine the label maps. Our approach differs from theirs in that the probability maps are refined without being directly conditioned on the raw image features.
3 SYSTEM DESCRIPTION
The dataset used in this experiment consists of two stacks of EM images used in the ISBI 2012 EM segmentation challenge. One stack has 30 EM images and their corresponding labels for training. The other stack also contains 30 images, whose labels are concealed.

The network we used in this stage is analogous to the convolutional network implemented by Ciresan et al. (2012). To differentiate the base network and the iterative secondary network, we call the base network CNN and the secondary network I-CNN. All details about the training of both the CNN and I-CNN networks will be released with the code. Next, we describe our approach to refining the probability map. For this task, we generated a non-overfitted MDPM training set through cross-training.
The pixel-wise probability map generated by the network described in the last section shows high pixel-wise accuracy, yet it lacks local consistency in certain areas (see Figures 1 and 2), which is inconsistent with the spatial continuity of the cell membrane. In the approach described in the last two sections, although contextual information is used to generate the pixel detection probability values, those probability values are generated independently. Here we propose a simple convolutional network (I-CNN) which directly learns the statistics of segmentation maps to significantly improve the segmentation quality. The main difference between the I-CNN and the previous CNN is the input. In the CNN, the input to the network is the raw EM image. In the I-CNN, the input image patches are replaced by patches extracted from the MDPM, with the center pixel of each patch masked out.
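As a rough illustration of the input construction described above, the sketch below extracts patches from an MDPM, masks out the center pixel of each patch, and pairs it with the ground-truth label of that pixel. The patch size and masking value are assumptions; the paper does not specify them here.

```python
import numpy as np

def masked_center_patches(mdpm, ground_truth, patch=33, mask_value=0.5):
    """Build (patch, label) training pairs for the I-CNN.

    mdpm:         2-D array of membrane probabilities from the base CNN
    ground_truth: 2-D binary array with the human-labelled membrane map
    """
    r = patch // 2
    padded = np.pad(mdpm, r, mode="reflect")
    patches, labels = [], []
    for y in range(mdpm.shape[0]):
        for x in range(mdpm.shape[1]):
            p = padded[y:y + patch, x:x + patch].copy()
            p[r, r] = mask_value            # hide the pixel the network must recover
            patches.append(p)
            labels.append(ground_truth[y, x])
    return np.stack(patches), np.asarray(labels)

# Toy usage with a random 64x64 map.
mdpm = np.random.rand(64, 64)
gt = (np.random.rand(64, 64) > 0.8).astype(np.uint8)
X, y = masked_center_patches(mdpm, gt, patch=17)
print(X.shape, y.shape)   # (4096, 17, 17) (4096,)
```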
4 RESULT
As shown in Figure 1, when we iteratively applied the I-CNN to the MDPM, the first thing we noticed is that iteratively refining the probability map removes its noise. After 8 rounds, the map has rather clear boundaries, as opposed to the fuzzy boundaries in the original MDPM. If we zoom into areas where the CNN was unable to make an affirmative inference about pixels, as shown in Figure 2, we can see that the I-CNN is able to integrate the information in a neighbourhood to recognize (blue arrows) a section of membrane shown with low probability but with good spatial continuity, eventually labeling the section with high confidence and closing gaps at the boundary. At the same time, the I-CNN was able to identify noisy pixels and areas (red arrows) that do not look like a section of membrane and eventually removed them completely.

EM images | Initial MDPM | Final probability map | Ground truth
Figure 2: Examples of patches where the model closes gaps and removes uncertain membrane sections. Blue arrows indicate where the model adds or solidifies a section of link between membrane parts; red arrows indicate where the model removes sections of uncertain membrane.
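The iterative application itself is a short loop, sketched below. The single-step refiner is passed in as a function; here a 3×3 neighbourhood average stands in for the trained I-CNN (which would instead predict each masked-out center pixel), and the number of rounds follows the paper's observation that around 6–8 rounds work well.

```python
import numpy as np

def iterative_refine(mdpm, refine_once, rounds=8):
    """Apply a single-step refiner repeatedly, as done with the trained I-CNN."""
    maps = [mdpm]
    for _ in range(rounds):
        maps.append(refine_once(maps[-1]))
    return maps   # maps[k] is the MDPM after k rounds

# Toy stand-in for the trained I-CNN: a 3x3 neighbourhood average.
def toy_refiner(m):
    padded = np.pad(m, 1, mode="edge")
    out = np.zeros_like(m)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + m.shape[0], dx:dx + m.shape[1]]
    return out / 9.0

mdpm = np.random.rand(64, 64)
refined = iterative_refine(mdpm, toy_refiner, rounds=8)
print(len(refined), refined[-1].shape)
```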
We then measured the segmentation result by the Rand score used by the ISBI 2012 challenge (Arganda-Carreras et al., 2015). For this part of the experiment, trained I-CNNs were applied to the left-out validation MDPM set; their segmentation error scores were then measured for every round with the Rand score. We observed that iteratively applying the I-CNNs to the membrane detection map dramatically reduces the Rand error of the segmentation result at the beginning of the iteration. This reduction in the Rand error continues for about 6 rounds of iteration, after which the result deteriorates. We also applied the I-CNN to the test image stack and submitted the result to the ISBI 2012 challenge website, obtaining a Rand error score of 0.0263, which is much better than the score of 0.0551 obtained from the original CNN result before refining.

We also applied our secondary network to a set of MDPMs of better quality from Chen et al. (2016). Even though our network was not trained to process exactly this kind of data, the refinement process still managed to significantly reduce the Rand error from 0.0351 to 0.0255. Furthermore, according to a recent update from the organizers of ISBI 2012 (Arganda-Carreras et al., 2015), with their new evaluation method (thinned Rand score), our post-processing obtained a thinned Rand score of 0.9765, which is just a very small fraction behind 0.9768, the score of Chen et al. (2016). This indicates that our approach performed on par with the watershed algorithm used by Chen et al. (2016) when measured with the new Rand score.
5 CONCLUSION AND DISCUSSION
The new algorithm presented in this work learns the manifold of membrane morphology distribution;
it enforces these constraints through iteration on an MDPM, refining it to fit a membrane morphology distribution learned from the training data. From another perspective, instead of generating a membrane detection probability for every pixel in isolation, we aggregated information from the local neighbourhood by applying the I-CNN iteratively to the MDPMs and obtained significantly
better consistency in neighbouring pixels. A significant improvement, measured by the Rand error,
was achieved over the original MDPM result. It is also important to point out that training with
the ground-truth map directly provided no benefit in improving the segmentation quality. It seems
to be essential to learn the gradient field that can guide a raw MDPM toward the manifold of the
ground-truth maps by training from un-refined MDPMs.
ACKNOWLEDGMENTS
We would like to thank Dr. Riccardo Olcese for helpful discussions. We are also grateful for the
support of NVIDIA Corporation with the donation of two GPUs. This research was supported by
NIH R01HL107418 grant.
REFERENCES
Björn Andres, Ullrich Köthe, Moritz Helmstaedter, Winfried Denk, and Fred A Hamprecht. Segmentation of sbfsem volume data of neural tissue by hierarchical classification. In Pattern recognition, pp. 142–152. Springer, 2008.
Pablo Arbelaez, Michael Maire, Charless Fowlkes, and Jitendra Malik. Contour detection and hier-
archical image segmentation. Pattern Analysis and Machine Intelligence, IEEE Transactions on,
33(5):898–916, 2011.
Ignacio Arganda-Carreras, Srinivas C Turaga, Daniel R Berger, Dan Cireşan, Alessandro Giusti, Luca M Gambardella, Jürgen Schmidhuber, Dmitry Laptev, Sarvesh Dwivedi, Joachim M Buhmann, et al. Crowdsourcing the creation of image segmentation algorithms for connectomics. Frontiers in neuroanatomy, 9, 2015.
Davi D Bock, Wei-Chung Allen Lee, Aaron M Kerlin, Mark L Andermann, Greg Hood, Arthur W
Wetzel, Sergey Yurgenson, Edward R Soucy, Hyon Suk Kim, and R Clay Reid. Network anatomy
and in vivo physiology of visual cortical neurons. Nature, 471(7337):177–182, 2011.
Albert Cardona, Stephan Saalfeld, Stephan Preibisch, Benjamin Schmid, Anchi Cheng, Jim Pulokas,
Pavel Tomancak, and Volker Hartenstein. An integrated micro-and macroarchitectural analysis
of the drosophila brain by computer-assisted serial section electron microscopy. PLoS biology, 8
(10):2564, 2010.
Hao Chen, Xiaojuan Qi, Jiezhi Cheng, and Pheng-Ann Heng. Deep contextual networks for neuronal structure segmentation. In Thirtieth AAAI Conference on Artificial Intelligence. AAAI, 2016.
Dmitri B Chklovskii, Shiv Vitaladevuni, and Louis K Scheffer. Semi-automated reconstruction of
neural circuits using electron microscopy. Current opinion in neurobiology, 20(5):667–675, 2010.
Dan Ciresan, Alessandro Giusti, Luca M Gambardella, and Jürgen Schmidhuber. Deep neural networks segment neuronal membranes in electron microscopy images. In Advances in neural information processing systems, pp. 2843–2851, 2012.
Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Learning a deep convolutional network for image super-resolution. In Computer Vision–ECCV 2014, pp. 184–199. Springer, 2014.
Clement Farabet, Camille Couprie, Laurent Najman, and Yann LeCun. Scene parsing with multi-
scale feature learning, purity trees, and optimal covers. arXiv preprint arXiv:1202.2160, 2012.
Jan Funke, Björn Andres, Fred A Hamprecht, Albert Cardona, and Matthew Cook. Multi-hypothesis
crf-segmentation of neural tissue in anisotropic em volumes. Arxiv preprint, 2011.
Stephen Gould, Richard Fulton, and Daphne Koller. Decomposing a scene into geometric and
semantically consistent regions. In Computer Vision, 2009 IEEE 12th International Conference
on, pp. 1–8. IEEE, 2009.
Viren Jain and Sebastian Seung. Natural image denoising with convolutional networks. In Advances
in Neural Information Processing Systems, pp. 769–776, 2009.
Viren Jain, Joseph F Murray, Fabian Roth, Srinivas Turaga, Valentin Zhigulin, Kevin L Briggman,
Moritz N Helmstaedter, Winfried Denk, and H Sebastian Seung. Supervised learning of image
restoration with convolutional networks. In Computer Vision, 2007. ICCV 2007. IEEE 11th In-
ternational Conference on, pp. 1–8. IEEE, 2007.
Viren Jain, Benjamin Bollmann, Mark Richardson, Daniel R Berger, Moritz N Helmstaedter,
Kevin L Briggman, Winfried Denk, Jared B Bowden, John M Mendenhall, Wickliffe C Abra-
ham, et al. Boundary learning by optimization with topological constraints. In Computer Vision
and Pattern Recognition (CVPR), 2010 IEEE Conference on, pp. 2488–2495. IEEE, 2010.
Elizabeth Jurrus, Antonio RC Paiva, Shigeki Watanabe, James R Anderson, Bryan W Jones, Ross T
Whitaker, Erik M Jorgensen, Robert E Marc, and Tolga Tasdizen. Detection of neuron membranes
in electron microscopy images using a serial neural network architecture. Medical image analysis,
14(6):770–783, 2010.
Kisuk Lee, Aleksandar Zlateski, Ashwin Vishwanathan, and H Sebastian Seung. Recursive training of 2d-3d convolutional networks for neuronal boundary detection. arXiv preprint arXiv:1508.04843, 2015.
Ting Liu, Elizabeth Jurrus, Mojtaba Seyedhosseini, Mark Ellisman, and Tolga Tasdizen. Watershed merge tree classification for electron microscopy image segmentation. In Pattern Recognition (ICPR), 2012 21st International Conference on, pp. 133–137. IEEE, 2012.
Ting Liu, Mojtaba Seyedhosseini, Mark Ellisman, and Tolga Tasdizen. Watershed merge forest classification for electron microscopy image stack segmentation. In Image Processing (ICIP), 2013 20th IEEE International Conference on, pp. 4069–4073. IEEE, 2013.
Pedro HO Pinheiro and Ronan Collobert. Recurrent convolutional neural networks for scene parsing.
arXiv preprint arXiv:1306.2795, 2013.
Chris Russell, Pushmeet Kohli, Philip HS Torr, et al. Associative hierarchical crfs for object class
image segmentation. In Computer Vision, 2009 IEEE 12th International Conference on, pp. 739–
746. IEEE, 2009.
Olaf Sporns, Giulio Tononi, and Rolf Kötter. The human connectome: a structural description of
the human brain. PLoS Comput Biol, 1(4):e42, 2005.
Joseph Tighe and Svetlana Lazebnik. Superparsing: scalable nonparametric image parsing with
superpixels. In Computer Vision–ECCV 2010, pp. 352–365. Springer, 2010.
Zhuowen Tu. Auto-context and its application to high-level vision tasks. In Computer Vision and
Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pp. 1–8. IEEE, 2008.
Zhuowen Tu and Xiang Bai. Auto-context and its application to high-level vision tasks and 3d brain
image segmentation. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 32(10):
1744–1757, 2010.
| success |
|
k80kn82ywfOYKX7ji42O | HARDWARE-FRIENDLY CONVOLUTIONAL NEURAL NETWORK WITH EVEN-NUMBER FILTER SIZE | [
"Song Yao",
"Song Han",
"Kaiyuan Guo",
"Jianqiao Wangni",
"Yu Wang"
] | Convolutional Neural Network (CNN) has led to great advances in computer vision. Various customized CNN accelerators on embedded FPGA or ASIC platforms have been designed to accelerate CNN and improve energy efficiency. However, the odd-number filter size in existing CNN models prevents hardware accelerators from having optimal efficiency. In this paper, we analyze the influences of filter size on CNN accelerator performance and show that even-number filter sizes are much more hardware-friendly and can ensure high bandwidth and resource utilization. Experimental results on MNIST and CIFAR-10 demonstrate that hardware-friendly even-kernel CNNs can reduce the FLOPs by 1.4x to 2x with comparable accuracy; with the same FLOPs, even kernels can achieve even higher accuracy than odd-sized kernels.
| [
"filter size",
"convolutional neural network",
"cnn",
"flops",
"great advances",
"computer vision",
"embedded fpga",
"asic platforms"
] | https://openreview.net/pdf?id=k80kn82ywfOYKX7ji42O | https://openreview.net/forum?id=k80kn82ywfOYKX7ji42O | ICLR.cc/2016/workshop | 2016 | {
"note_id": [
"nx937z6nMu7lP3z2ioNm",
"5QzBR8G84FZgXpo7i324"
],
"note_type": [
"review",
"review"
],
"note_created": [
1456589526138,
1457552302912
],
"note_signatures": [
[
"~Lingxi_Xie1"
],
[
"ICLR.cc/2016/workshop/paper/122/reviewer/11"
]
],
"structured_content_str": [
"{\"title\": \"This paper provides an interesting and instructive discussion to the industrial community\", \"rating\": \"7: Good paper, accept\", \"review\": \"In this paper, the authors present a fact that neural networks may become less efficient when odd-sized convolution kernels (like 3x3, 5x5 kernels) are used. The main consideration is from the implementation of the inner-product operation in hardware.\", \"figure_1_is_quite_intuitive\": \"one can catch the main idea by taking a glance at it.\", \"experimental_results_are_acceptable\": \"with smaller kernels, the recognition performance is comparable while the FLOPs are effectively reduced. It would be better if this idea is verified on some larger experiments such as SVHN and ImageNet.\\n\\nMinor things. (1) The mathematical notations can be more formal: in representing the network structure (20Conv5 ...), please use \\\\rm{Conv} or \\\\mathrm{Conv}, also please replace all the 'x' to '\\\\times' (in 'Conclusion'). (2) Please magnify the font size in both figures, until one can read it clearly on a printed version of the paper. (3) The fonts of digits in Figure 2(a) and Figure 2(b) are different, which is weird.\\n\\nIn conclusion, this is a good workshop paper that tells the community a simple yet useful fact.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"This paper brings attention to the fact that even-number filter sizes can maximize the efficacy of CNN accelerators.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The authors bring attention to the fact that odd-number filter sizes waste computational resources; even-number filter sizes can maximize the efficacy of CNN accelerators. They are able to reduce the complexity of LeNet and VGG11-Nagadomi network with comparable performance in accuracy.\\n\\nFigure 1 is very good to understand what the paper is about. \\n\\nFigure 2, on the other hand, is hard to understand; caption doesn't provide enough information. \\nFor Figure 2a, why are there two sizes for each test error and normalized complexity bars? If they are the size of the first and second layer filters, why do 8x8, 4x4 filters have less complexity compared to 4x4, 4x4 filters?\\nIn Figure 2b there are two bar sets for 2x2 filters, later in the text it appears that one uses more feature maps, this information should be at least in the caption if not in the chart.\\n\\nOverall, the idea is useful and good to keep in mind.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
HARDWARE-FRIENDLY CONVOLUTIONAL NEURAL
NETWORK WITH EVEN-NUMBER FILTER SIZE
Song Yao, Kaiyuan Guo, Jianqiao Wangni, Yu Wang
Center for Brain-Inspired Computer Research
Department of Electronic Engineering, Tsinghua University
Beijing 100084, China
{songyao, yu-wang}@mail.tsinghua.edu.cn,
{gky15, wnjq11}@mails.tsinghua.edu.cn
Song Han
Department of Electrical Engineering
Stanford University
Stanford, CA 94315, USA
[email protected]
ABSTRACT
Convolutional Neural Network (CNN) has led to great advances in computer vi-
sion. Various customized CNN accelerators on embedded FPGA or ASIC plat-
forms have been designed to accelerate CNN and improve energy efficiency. How-
ever, the odd-number filter size in existing CNN models prevents hardware accel-
erators from having optimal efficiency. In this paper, we analyze the influences
of filter size on CNN accelerator performance and show that even-number filter sizes are much more hardware-friendly and can ensure high bandwidth and resource utilization. Experimental results on MNIST and CIFAR-10 demonstrate that hardware-friendly even-kernel CNNs can reduce the FLOPs by 1.4× to 2× with comparable accuracy; with the same FLOPs, even kernels can achieve even higher accuracy than odd-sized kernels.
1 INTRODUCTION
In recent years, Convolutional Neural Networks (CNN) have achieved great success in the computer vision area. State-of-the-art performance in image classification and object detection is driven by CNNs (He et al., 2015; Girshick et al., 2014; Ren et al., 2015). However, the energy efficiency of existing hardware such as GPUs is relatively low, so researchers have proposed various customized CNN accelerator designs on FPGA or ASIC.

An efficient processing engine (PE) is vital to CNN accelerators. Architectures with a few complex PEs (Qiu et al., 2016; Sim et al., 2016) or with many simple compute elements (Chen et al., 2016) have been proposed. Special architectures such as a dynamically configurable architecture and a specific architecture for sparse compressed NNs were also proposed (Chakradhar et al., 2010; Han et al., 2016a).

The efficiency of the memory system in CNN accelerators also significantly affects performance. Tiling strategies and data reuse are useful to reduce the total communication traffic (Chen et al., 2014a; Qiu et al., 2016). Storing the entire CNN model in on-chip memory can help minimize the energy of memory access (Chen et al., 2014b; Du et al., 2015; Han et al., 2016a). Compression and decompression techniques (Zhang et al., 2015; Chen et al., 2016; Han et al., 2015; 2016b) and data quantization (Qiu et al., 2016) are also useful for improving bandwidth utilization.

Though techniques have been proposed to improve the performance of customized CNN accelerators, the odd-number filter size in existing CNNs still hinders higher hardware acceleration efficiency. From the algorithm aspect, the advantage of odd-number filter sizes is obvious: symmetry. However, customized CNN accelerators may perform better with even-number Conv filters such as 2×2 and 4×4 and can achieve better configurability and resource utilization.

In this paper, we investigate the effects of Conv filter size on the hardware acceleration efficiency of CNN accelerators. We propose the hardware-friendly CNN with only even-number Conv filters to maximize the efficacy of CNN accelerators. We show that hardware-friendly CNNs can achieve comparable or even better accuracy compared with CNNs with odd-number Conv filters on MNIST and CIFAR-10.
Figure 1: Influences of filter size on hardware design: Adder tree structure with (a) 3×3 filter and (b) 2×2
filter; Memory access pattern with (c) 3×3 filter and (d) 2×2 filter.
2 INFLUENCES OF FILTER SIZE ON HARDWARE ACCELERATION EFFICIENCY
2.1 COMPUTATION LOGIC DESIGN
The combination of many multipliers and an adder tree is a fundamental unit for accelerating Conv layers. For the adder tree, if the number of inputs from a filter is not of the 2^n form, extra registers will be used. As shown in Figure 1 (a), 3 extra register sets are needed to implement an adder tree with 9 inputs. If 16-bit quantization (i.e. each parameter is represented with 16 bits) is employed, this means 16 × 3 = 48 additional registers are needed. For a 2×2 filter, as shown in Figure 1 (b), there is no such waste of registers.
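The register overhead can be counted with a small helper: at each level of a balanced adder tree, an odd operand count leaves one value that must be carried (registered) to the next level. Under this model, a 9-input tree (3×3 filter) needs 3 extra register sets, while 4- and 16-input trees (2×2 and 4×4 filters) need none, matching Figure 1.

```python
def extra_register_sets(num_inputs):
    """Count carried operands in a balanced adder tree with num_inputs leaves."""
    carries = 0
    n = num_inputs
    while n > 1:
        if n % 2 == 1:          # one operand has no partner: register it for the next level
            carries += 1
        n = n // 2 + n % 2      # results of this level (summed pairs + the carried operand)
    return carries

for size in (2, 3, 4, 5):
    k = size * size
    print("%dx%d filter: %2d inputs -> %d extra register set(s)"
          % (size, size, k, extra_register_sets(k)))
```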
2.2 DATA DISTRIBUTOR DESIGN
State-of-the-art CNNs for large-scale object recognition tasks are too large to store on-chip. Since CNN models are usually stored in external memory, the bandwidth utilization efficiency is a serious concern. Typically, DRAM offers a 64-bit or 128-bit data port. If the length of the fetched data is a multiple of the data port width, full bandwidth utilization can be ensured.

It is hard to ensure high bandwidth utilization with odd-number filters. For a 3×3 filter with 16-bit quantization, 144 bits are needed to store the weights in a filter. For a 64-bit port, loading 144 bits takes three memory accesses, as shown in Figure 1 (c), and the highest possible bandwidth usage is only 75%. For a 128-bit port, the highest possible bandwidth usage is only 56.25%. To fully utilize the bandwidth when the filters have odd-number sizes, the data distributor design would be quite complicated.

Even-number filters help ensure bandwidth utilization. For a 2N×2N filter with 16-bit quantization, where N is a natural number, the total number of bits is 64N^2. For a 64-bit port, the bandwidth utilization is always 100%. For a 128-bit port, the bandwidth usage can be up to 100% (loading two filters at the same time), 90%, and 96% when N is 1, 2, and 3 respectively. An example is shown in Figure 1 (d) where the filter size is 2×2. When the data port width is 64-bit and 16-bit quantization is employed, only one memory access is needed to load all the weights.
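The utilization figures above follow directly from how many fixed-width bursts are needed to fetch one filter; a small helper to reproduce the 3×3 versus even-sized cases is sketched below (the multi-filter packing case, e.g. two 2×2 filters per 128-bit word, is not modelled).

```python
import math

def bandwidth_utilization(filter_size, bits_per_weight=16, port_bits=64):
    """Fraction of fetched bits that are actually filter weights."""
    needed = filter_size * filter_size * bits_per_weight
    fetched = math.ceil(needed / port_bits) * port_bits
    return needed / fetched

for size in (2, 3, 4, 5):
    print("%dx%d filter, 64-bit port: %5.1f%% utilization"
          % (size, size, 100 * bandwidth_utilization(size, port_bits=64)))
print("3x3 filter, 128-bit port: %5.2f%% utilization"
      % (100 * bandwidth_utilization(3, port_bits=128)))
```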
3 HARDWARE-FRIENDLY CONVOLUTIONAL NEURAL NETWORK
Since CNNs with even-number Conv filters can help improve the efficiency of customized CNN accelerators, we propose the hardware-friendly CNN with only even-number Conv filters. In this section, we evaluate the performance of hardware-friendly CNNs on MNIST (LeCun et al., 1998) and CIFAR-10 (Krizhevsky & Hinton, 2009). All experiments are done with MXNet (Chen et al., 2015). The experiment platform consists of an Intel Xeon E5-2690 [email protected] and 2 NVIDIA TITAN X GPUs.

The notation is: MP means max pooling, FC means Fully-Connected layer, lr is the initial learning rate, lr-factor is the factor that multiplies the learning rate every lr-factor-epoch epochs, and batch-size is the number of images in each mini-batch. When training the networks, no data augmentation, pre-processing, or pre-training is employed.
Figure 2: Test error and normalized computational complexity (FLOPs) of (a) LeNet5 on MNIST and (b) VGG11-Nagadomi on CIFAR-10. With comparable accuracy, even kernels can reduce the FLOPs by 50% on CIFAR-10 and 30% on MNIST; with comparable FLOPs, even kernels can achieve higher accuracy than odd-sized kernels.
3.1 MNIST
For experiments on MNIST, we used the LeNet (LeCun et al. (1998)). The architecture of the
original LeNet is:
20Conv5 → Tanh → MP2 → 50Conv5 → Tanh → MP2 → FC500 → Tanh → FC10.
We train the LeNet for 300 epochs, in which the lr is 0.002, lr-factor is 0.995, lr-factor-epoch is 1,
and batch-size is 128.
We report the best validation error rate of LeNet with different settings on MNIST in Figure 2 (a), in which blue and orange columns represent test errors and computational complexities respectively. As shown in the figure, replacing the 5×5 Conv filters in LeNet with 4×4 or other even-number ones does not introduce a high error rate. Since a smaller Conv filter demands fewer multiplications in one Conv operation, the total number of operations can generally be reduced by using smaller even-number Conv filters. But the increase in feature map size leads to an increase in total computation.
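A rough way to compare the "normalized complexity" of these variants is to count the multiplications in each Conv layer; a sketch is below. The exact totals depend on padding and pooling choices that are not fully specified here, so this is only meant to show how the even-kernel variants trade filter size against feature-map size.

```python
def conv_mults(in_size, in_channels, out_channels, kernel, stride=1, pad=0):
    """Multiplications for one Conv layer on a square input (no bias term)."""
    out_size = (in_size + 2 * pad - kernel) // stride + 1
    return out_size * out_size * out_channels * in_channels * kernel * kernel, out_size

# LeNet-style first two Conv layers on 28x28 MNIST, odd vs. even kernels
# (the pooling between the layers is assumed to halve the spatial size).
for k1, k2 in ((5, 5), (4, 4)):
    m1, s1 = conv_mults(28, 1, 20, k1)
    m2, s2 = conv_mults(s1 // 2, 20, 50, k2)
    print("%dx%d, %dx%d kernels: %8d multiplications" % (k1, k1, k2, k2, m1 + m2))
```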
3.2 CIFAR-10
We used the VGG11-Nagadomi network (nag) in experiments on CIFAR-10. The architecture of the
original VGG11-Nagadomi network is:
2 × (64Conv3 → ReLU) → MP2 → 2 × (128Conv3 → ReLU) → MP2 → 4 × (256Conv3 → ReLU) → MP2 → 2 × (FC1024 → ReLU) → FC10.
We train the VGG11-Nagadomi for 2000 epochs, in which the lr is 0.01, lr-factor is 0.995, lr-factor-
epoch is 2, and batch-size is 256.
Results of the VGG11-Nagadomi network on CIFAR-10 are shown in Figure 2 (b). For the original VGG11-Nagadomi network, the validation error on CIFAR-10 is 8.54%. After replacing the 3×3 Conv filters with 2×2 ones, the size of the feature maps in the network changes. We remove the padding in the later Conv layer of each pair of Conv layers to ensure the input feature map of each MP layer remains the same. As the middle columns in Figure 2 (b) show, the validation error rises to 8.67%, but the total computation is reduced to 49% of the original network. Since the total computation is reduced when simply replacing 3×3 Conv filters with 2×2 ones, we increase the filter numbers and the output feature vector length of the FC layers by 1.5× to balance the total operations. In this case, the total computation rises to 1.10× of the original network but the test error is reduced to 7.86%. We note that keeping the original ratio between the filter numbers in different layers when balancing the total computation may be favorable for achieving the best accuracy.
4 CONCLUSION
In this paper we propose the hardware-friendly convolutional neural network using even-sized kernels and discuss its advantage over traditional odd-sized kernels. We analyzed the hardware benefit of even-sized kernels w.r.t. both the arithmetic units and the memory system. Even-sized kernels greatly reduce the amount of computation while maintaining comparable prediction accuracy: on MNIST and CIFAR-10 they reduced the computation by 1.4× to 2× with less than 0.1% loss of accuracy. On the other hand, shrinking the kernel from 3×3 to 2×2 while increasing the number of channels, such that the total amount of computation remains the same, results in better prediction accuracy. This will facilitate building hardware inference engines with higher efficiency.
[Figure 2 data: (a) LeNet on MNIST, test error (%) / normalized complexity per kernel configuration: 5×5→5×5: 0.90/1.00, 4×4→4×4: 0.92/1.05, 8×8→4×4: 0.87/0.87, 4×4→2×2: 0.99/0.73, 8×8→2×2: 0.91/0.70, 2×2→2×2: 1.09/0.86; (b) VGG11-Nagadomi on CIFAR-10: 3×3: 8.54/1.00, 2×2: 8.67/0.49, 2×2 (widened): 7.86/1.10. Axis labels: Test Error (%), Normalized Complexity.]
REFERENCES
URL https://github.com/nagadomi/kaggle-cifar10-torch7.
Srimat Chakradhar, Murugan Sankaradas, Venkata Jakkula, and Srihari Cadambi. A dynamically configurable
coprocessor for convolutional neural networks. In ACM SIGARCH Computer Architecture News, volume 38,
pp. 247–257. ACM, 2010.
Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang,
and Zheng Zhang. Mxnet: A flexible and efficient machine learning library for heterogeneous distributed
systems. arXiv preprint arXiv:1512.01274, 2015.
Tianshi Chen, Zidong Du, Ninghui Sun, Jia Wang, Chengyong Wu, Yunji Chen, and Olivier Temam. Diannao:
A small-footprint high-throughput accelerator for ubiquitous machine-learning. In ACM SIGPLAN Notices,
volume 49, pp. 269–284. ACM, 2014a.
Yu-Hsin Chen, Tushar Krishna, Joel Emer, and Vivienne Sze. Eyeriss: An energy-efficient reconfigurable
accelerator for deep convolutional neural networks. In ISSCC. IEEE, 2016.
Yunji Chen, Tao Luo, Shaoli Liu, Shijin Zhang, Liqiang He, Jia Wang, Ling Li, Tianshi Chen, Zhiwei Xu,
Ninghui Sun, et al. Dadiannao: A machine-learning supercomputer. In MICRO, pp. 609–622. IEEE, 2014b.
Zidong Du, Robert Fasthuber, Tianshi Chen, Paolo Ienne, Ling Li, Tao Luo, Xiaobing Feng, Yunji Chen, and
Olivier Temam. Shidiannao: shifting vision processing closer to the sensor. In Proceedings of the 42nd
Annual International Symposium on Computer Architecture, pp. 92–104. ACM, 2015.
Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object
detection and semantic segmentation. In CVPR, pp. 580–587. IEEE, 2014.
Song Han, Jeff Pool, John Tran, and William J Dally. Learning both weights and connections for efficient
neural networks. arXiv preprint arXiv:1506.02626, 2015.
Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, and William J. Dally. Eie:
Efficient inference engine on compressed deep neural network. arXiv preprint arXiv:1602.01528, 2016a.
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with
pruning, trained quantization and huffman coding. ICLR, 2016b.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv
preprint arXiv:1512.03385, 2015.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images, 2009.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document
recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
Jiantao Qiu, Jie Wang, Song Yao, Kaiyuan Guo, Boxun Li, Erjin Zhou, Jincheng Yu, Tianqi Tang, Ningyi Xu,
Sen Song, Yu Wang, and Huazhong Yang. Going deeper with embedded fpga platform with convolutional
neural network. In ACM Symposium on Field Programmable Gate Array (FPGA), pp. 1–12. ACM, 2016.
Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with
region proposal networks. arXiv preprint arXiv:1506.01497, 2015.
Jaehyeong Sim, Jun-Seok Park, Minhye Kim, Dongmyung Bae, Yeongjae Choi, and Lee-Sup Kim. A 1.42TOPS/W
deep convolutional neural network recognition processor for intelligent IoE systems. In ISSCC. IEEE,
2016.
Chen Zhang, Peng Li, Guangyu Sun, Yijin Guan, Bingjun Xiao, and Jason Cong. Optimizing fpga-based
accelerator design for deep convolutional neural networks. In Proceedings of ISFPGA, pp. 161–170. ACM,
2015.
| success |
|
91EowxONgIkRlNvXUVog | Lookahead Convolution Layer for Unidirectional Recurrent Neural Networks | [
"Chong Wang",
"Dani Yogatama",
"Adam Coates",
"Tony Han",
"Awni Hannun",
"Bo Xiao"
] | Recurrent neural networks (RNNs) have been shown to be very effective for many
sequential prediction problems such as speech recognition, machine translation, part-of-speech tagging, and others.
The best variant is typically a bidirectional RNN that learns
representation for a sequence by performing a forward and a backward pass through the entire sequence.
However, unlike unidirectional RNNs, bidirectional RNNs
are challenging to deploy in an online and low-latency setting (e.g., in a speech recognition system),
because they need to see an entire sequence before making a prediction.
We introduce a lookahead convolution layer that incorporates information from future subsequences
in a computationally efficient manner to improve unidirectional recurrent neural networks.
We evaluate our method on speech recognition tasks for two languages---English and Chinese.
Our experiments show that the proposed method outperforms vanilla unidirectional
RNNs and is competitive with bidirectional RNNs in terms of character and word error rates. | [
"lookahead convolution layer",
"entire sequence",
"bidirectional rnns",
"convolution layer",
"rnns",
"effective",
"speech recognition"
] | https://openreview.net/pdf?id=91EowxONgIkRlNvXUVog | https://openreview.net/forum?id=91EowxONgIkRlNvXUVog | ICLR.cc/2016/workshop | 2016 | {
"note_id": [
"4QygYX3XwhBYD9yOFqMA",
"vlpOAAvVMh7OYLG5inyv",
"K1VM5mjK2C28XMlNCVGP"
],
"note_type": [
"review",
"comment",
"review"
],
"note_created": [
1458068507740,
1458150880676,
1458066643770
],
"note_signatures": [
[
"~Navdeep_Jaitly1"
],
[
"~Dani_Yogatama1"
],
[
"ICLR.cc/2016/workshop/paper/125/reviewer/11"
]
],
"structured_content_str": [
"{\"title\": \"Simple idea, some details are unclear that make it difficult to assess gains.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Instead of computing the output for a unidirectional RNN at time point t using only the hidden state of the layer below at the same time, the paper proposes to use point-wise multiplication and addition from future states of a unidirectional RNN to compute the hidden states at the next layer (this is akin to a separate convolution on each of the feature dimensions). This, the authors argue gets it closer to bidirectional RNNs.\\n\\nThe idea is simple, and the paper seems to show results that there are gains from using the approach, but important details are missing that make it hard to judge whether the gains come from the model or not. \\n\\nSpecifically, the authors say on page that \\\"The next five layers are either all unidirectional (forward) or all bidirectional recurrent layers\\\" From this I would assume that row 1 of the paper is all unidirectional, and row 3 is all bidirectional, while row 2 is all unidirectional, except for the last layer which is a \\\"look-ahead convolution\\\" If that's the case the results are good. \\n\\nHowever, the next lines \\\"We also compare with two baselines constructed by replacing the second-to-last layer with either a unidirectional recurrent layer or bidirectional recurrent layer\\\", make we wonder if this is really the case; the statement leaves open the possibility that Row 1 is bidirectional all the way, and then unidirectional, followed by the softmax, while Row 2 is bidirectional all the way and then a look-ahead convolutional layer etc... this result would be less convincing since it does get bidirectional inputs to the top layer..\\n\\nAn obvious comparison would have been unidirectional all the way, look-ahead convolutional all the way and bidirectional all the way. I'm surprised this isn't the one that is offered. And if it is indeed the one that is offered, the paper should writh the model section in such a way that its clearer.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response to review 12\", \"comment\": \"Thank you for your helpful comments.\\nWe would like to clarify that for the results in Table 1, row 1 is all unidirectional, row 3 is all bidirectional, and row 2 is all unidirectional except for the last layer.\\nThank you for your suggestion of an additional type of networks (all lookahead convolution), we will consider this.\\nWe note that this network architecture would introduce additional delays for deep networks and large tau compared to our proposed architecture (all unidirectional except for the last layer).\\nSince each lookahead layer needs to wait tau steps, we can only compute the first output after waiting tau + (tau-1)(depth-2) steps (instead of tau steps).\"}",
"{\"title\": \"Simple and useful concept, clear writeup, limited experiments\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"A clear description of the so called \\\"convolutional lookahead\\\" for RNNs in order to incorporate small windows of future context information in a similar fashion to bidirectional RNN, but in a way amenable to streaming decode.\\n\\nThe primary drawback of this paper is the limited experimental section - it would have been great to see more comparison over various settings of `tau`, ideally showing convergence to the full bidirectional RNN solution with larger and larger settings - the authors mention other experiments (\\\"increasing future context size did not close the gap\\\", conclusion), but fail to show them here. One other experiment of interest would be to see the performance limitations of only using the convolutional lookahead, either by making the network use a single recurrent layer (bidirectional vs lookahead) in a fashion similar to Deep Speech 1, or making *all* layers use convolutional lookahead. Also showing the experiments in which \\\"using a regular\\nconvolution layer with multiple filters resulted in poor performance\\\" due to overfitting would be useful - perhaps the gap between them is due to capacity limitations in the lookahead?\\n\\nAdditionally, the paper mentions \\\"We note that much better performance can be\\nobtained for both datasets by using a more powerful language model or more training data. We have\\nobserved that in both cases the improvements from the lookahead convolution layer are consistent\\nwith the smaller scale experiments shown here.\\\" - it would be good to actually *see* these experiments in a table or description, rather than an offhand comment.\\n\\nMore experiments are always interesting, and actually showing the experiments mentioned in passing in the text would be even better, but the paper as it stands is already a \\\"minimum viable paper\\\" for workshop purposes. It clearly displays a particular technique, its uses, and some drawbacks and performance issues in application. Some of the mentioned experiments above are also described as \\\"future work\\\", so it is clear the authors know these are interesting directions of exploration, and ideally a subset of those results can make the workshop paper.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} | Workshop track - ICLR 2016
LOOKAHEAD CONVOLUTION LAYER FOR UNIDIREC-
TIONAL RECURRENT NEURAL NETWORKS
Chong Wang∗, Dani Yogatama∗, Adam Coates, Tony Han, Awni Hannun, Bo Xiao
Baidu Research, Silicon Valley Artificial Intelligence Lab
Sunnyvale, CA 94089, USA
Contact: [email protected]
ABSTRACT
Recurrent neural networks (RNNs) have been shown to be very effective for many
sequential prediction problems such as speech recognition, machine translation,
part-of-speech tagging, and others. The best variant is typically a bidirectional
RNN that learns representation for a sequence by performing a forward and a
backward pass through the entire sequence. However, unlike unidirectional RNNs,
bidirectional RNNs are challenging to deploy in an online and low-latency setting
(e.g., in a speech recognition system), because they need to see an entire sequence
before making a prediction. We introduce a lookahead convolution layer that
incorporates information from future subsequences in a computationally efficient
manner to improve unidirectional recurrent neural networks. We evaluate our
method on speech recognition tasks for two languages—English and Chinese. Our
experiments show that the proposed method outperforms vanilla unidirectional
RNNs and is competitive with bidirectional RNNs in terms of character and word
error rates.
1 INTRODUCTION
We are interested in sequential prediction problems, where given an input x1:T = {x1, x2, . . . , xT },
the goal is to make a prediction y1:T = {y1, y2, . . . , yT }.1 In this paper, we will refer to t = 1, . . . , T
as timesteps. Many real-world tasks can be formulated as sequential prediction problems. For
example, in speech recognition (language modeling), we are given a spectrogram of power normalized
audio clips (a word) at every timestep and predict the character or phoneme (the next word) associated
with this input.
Recurrent neural networks (RNNs) are a powerful class of models for sequential prediction problems
(Mikolov et al., 2010; Sutskever et al., 2014; Amodei et al., 2015; inter alia). There are two general
types of RNNs: unidirectional and bidirectional RNNs. Bidirectional RNNs tend to perform better
since they incorporate information from future timesteps when making a prediction at timestep
t. For bidirectional RNNs, in the forward pass we compute pt = f(bp + Up pt−1 + Vp xt),
where bp, Up, and Vp are model parameters. Similarly, in the backward pass, we compute qt =
f(bq + Uq qt+1 + Vq xt). The output at timestep t is then computed as yt = g(W [pt, qt]), where
[·] denotes the vector concatenation operator. For unidirectional RNNs, only the forward pass is
performed, so the output at timestep t is yt = g(W pt). We only consider vanilla recurrent layers in
this work, but our technique is compatible with more sophisticated recurrent layers such as LSTMs
(Hochreiter & Schmidhuber, 1997) and GRUs (Cho et al., 2014) as well.
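For concreteness, the forward and backward recurrences above can be sketched in a few lines of numpy (illustrative only; f is taken to be tanh and g the identity here, which is a simplifying assumption, and all shapes and values below are placeholders):

```python
import numpy as np

def birnn_outputs(X, Up, Vp, bp, Uq, Vq, bq, W):
    """y_t = W [p_t, q_t] with forward states p_t and backward states q_t (f = tanh, g = identity)."""
    T, d_h = X.shape[0], bp.shape[0]
    P, Q = np.zeros((T, d_h)), np.zeros((T, d_h))
    p = np.zeros(d_h)
    for t in range(T):                      # forward pass: p_t = f(b_p + U_p p_{t-1} + V_p x_t)
        p = np.tanh(bp + Up @ p + Vp @ X[t]); P[t] = p
    q = np.zeros(d_h)
    for t in reversed(range(T)):            # backward pass: q_t = f(b_q + U_q q_{t+1} + V_q x_t)
        q = np.tanh(bq + Uq @ q + Vq @ X[t]); Q[t] = q
    return np.concatenate([P, Q], axis=1) @ W.T

T, d_in, d_h, d_out = 6, 3, 4, 5
rng = np.random.default_rng(0)
X = rng.normal(size=(T, d_in))
Up, Vp, bp = rng.normal(size=(d_h, d_h)), rng.normal(size=(d_h, d_in)), np.zeros(d_h)
Uq, Vq, bq = rng.normal(size=(d_h, d_h)), rng.normal(size=(d_h, d_in)), np.zeros(d_h)
W = rng.normal(size=(d_out, 2 * d_h))
print(birnn_outputs(X, Up, Vp, bp, Uq, Vq, bq, W).shape)  # (6, 5)
```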
Bidirectional RNNs generally achieve better performance since they can incorporate future context,
but they come with additional computational costs, both for training and decoding. While an increase
in training time is not always an issue (since the training procedure can be carried out offline), an
increase in decoding time is a significant issue for a production system that needs to operate in an
online, low-latency setting. As can be seen from the equations above, bidirectional RNNs need to wait
∗Equal contribution.
1 We use lower case letters to denote variables, bold lower case letters to denote vectors, and bold upper case
letters to denote matrices.
for an entire sequence to be seen before making a prediction for timestep t. Unidirectional RNNs, on
the other hand, allow decoding in a streaming fashion since they only incorporate previous context.
In this paper, we investigate a computationally efficient way to incorporate information from future
timesteps (context) using a new convolution layer. Our goal is to design a method that achieves
comparable performance to bidirectional RNNs and still supports online decoding. We show how we
can modify a convolutional layer to achieve this purpose in what follows. Our experiments show that
our proposed method outperforms vanilla unidirectional RNNs and is competitive with bidirectional
RNNs in terms of character and word error rates. This work incorporates new comparisons and
discussion not reported in Amodei et al. (2015).
Figure 1: Lookahead convolution architecture with future context size τ = 2.
2 LOOKAHEAD CONVOLUTION
We propose a convolution layer which we call lookahead convolution, shown in Figure 1. The
intuition behind this layer is that we only need a small portion of future information to make an
accurate prediction at the current timestep. Suppose at timestep t, we use a future context of τ steps.
We now have a feature matrix Xt:t+τ = [xt, xt+1, ..., xt+τ ] ∈ Rd×(τ +1). We define a parameter
matrix W ∈ Rd×(τ +1). The activations ht for the new layer at time-step t are
ht = Σ_{j=1}^{τ+1} wj ⊙ xt+j−1,
where ⊙ denotes an element-wise product. The output at timestep t is then computed as g(ht), for a
non-linear function g. We note that the convolution-like operation is row oriented for both W and
Xt:t+τ .
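A minimal PyTorch sketch of this layer is given below (our illustration, not the implementation used in the experiments; we zero-pad the sequence with τ future steps at the end so that every timestep produces an output, and the output non-linearity g(·) is omitted):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LookaheadConv(nn.Module):
    """h_t = sum_{j=1}^{tau+1} w_j * x_{t+j-1}, with a separate filter per feature dimension."""
    def __init__(self, d, tau):
        super().__init__()
        self.tau = tau
        self.weight = nn.Parameter(torch.randn(d, tau + 1) * 0.1)  # W in R^{d x (tau+1)}

    def forward(self, x):                        # x: (batch, T, d)
        x = F.pad(x, (0, 0, 0, self.tau))        # zero-pad tau future steps at the end
        windows = x.unfold(1, self.tau + 1, 1)   # (batch, T, d, tau+1)
        return (windows * self.weight).sum(-1)   # (batch, T, d)

h = LookaheadConv(d=8, tau=2)(torch.randn(4, 50, 8))
print(h.shape)  # torch.Size([4, 50, 8])
```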
3 EXPERIMENTS
We evaluate our method on speech recognition tasks for two languages: English and Chinese.
Model Our speech recognition system is based on the DeepSpeech system (Amodei et al., 2015).
It is a character-level deep recurrent neural network model that takes speech spectrograms as an
input and predicts characters at every timestep. Our neural network architecture in these experiments
consists of eight layers. The first layer is a regular convolution layer. The next five layers are either all
unidirectional (forward) or all bidirectional recurrent layers. The second-to-last layer is the lookahead
convolution layer. We also compare with two baselines constructed by replacing the second-to-last
layer with either a unidirectional recurrent layer or a bidirectional recurrent layer. The last layer is a
softmax layer over character outputs. We train the model using the CTC loss function (Graves et al.,
2006). See Amodei et al. (2015) for details of the architecture and training procedure.
Datasets We use the Wall Street Journal corpus2 for our English experiment and an internal Baidu
speech corpus for our Chinese experiment. The WSJ (Baidu) speech corpus consists of approximately
80 (800) hours of training data and 503 (2000) test utterances.
2https://catalog.ldc.upenn.edu/LDC93S6A
Table 1: Word error rates (English) and character error rates (Chinese) for competing models. We
use future context size τ = 20 in all our experiments.
Model
Forward RNN
Forward RNN + lookahead-conv
Bidirectional RNN
English
Chinese
No LM Small LM No LM Small LM
15.71
13.45
12.76
25.86
21.32
20.46
18.79
16.77
15.42
23.13
22.66
19.47
Results Table 1 shows the results for English and Chinese speech recognition. Since our focus
is on evaluating the performance of the lookahead convolution layer, we report results without any
language model and with a small language model. We note that much better performance can be
obtained for both datasets by using a more powerful language model or more training data. We have
observed that in both cases the improvements from the lookahead convolution layer are consistent
with the smaller scale experiments shown here.
4 DISCUSSION
We showed that the lookahead convolution layer improves unidirectional RNNs for speech recognition
on English and Chinese in terms of word and character error rates. We place the lookahead convolution
layer above all (unidirectional) recurrent layers. The advantages are twofold. First, this allows us
to stream all computations below the lookahead convolution layer. For the lookahead convolution
layer, to get an output at timestep t, we only need the input up to t + τ . Second, this results in better
performance in our experiments. We conjecture that the recurrent layers have learned good feature
representations, so the lookahead convolution layer simply gathers the appropriate future information
to feed to the classifier.
We note that there is still a small performance gap between bidirectional RNNs and unidirectional
RNNs with lookahead convolution. In our preliminary experiments, we found that increasing future
context size did not close this gap. We also found that incorporating future context using a regular
convolution layer with multiple filters resulted in poor performance. We observed that the resulting
model overfit the training data, even after an extensive tuning of the layer hyperparameters. A
regular convolution layer also has higher computational complexity than the lookahead convolution
layer (although the latency is still lower than a bidirectional recurrent layer). We plan to run more
experiments with different future context size and for other sequential prediction tasks to evaluate the
effectiveness of the proposed method.
REFERENCES
D. Amodei, R. Anubhai, E. Battenberg, C. Case, J. Casper, B. Catanzaro, J. Chen, M. Chrzanowski,
A. Coates, G. Diamos, E. Elsen, J. Engel, L. Fan, C. Fougner, T. Han, A. Hannun, B. Jun,
P. LeGresley, L. Lin, S. Narang, A. Ng, S. Ozair, R. Prenger, J. Raiman, S. Satheesh, D. Seetapun,
S. Sengupta, Y. Wang, Z. Wang, C. Wang, B. Xiao, D. Yogatama, J. Zhan, and Z. Zhu. Deep
speech 2: End-to-end speech recognition in english and mandarin. ArXiv e-prints, 2015.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger
Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder–decoder for
statistical machine translation. In Proc. of EMNLP, 2014.
Alex Graves, Santiago Fernandez, Faustino Gomez, and Jurgen Schmidhuber. Connectionist temporal
classification: Labelling unsegmented sequence data with recurrent neural networks. In Proc. of
ICML, 2006.
Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):
1735–1780, 1997.
Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan ”Honza” Cernocky, and Sanjeev Khudanpur.
Recurrent neural network based language model. In Proc. of Interspeech, 2010.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks.
In Proc. of NIPS, 2014.
| success |
|
jZ9WrEWPmsnlBG2XfGLl | Coverage-based Neural Machine Translation | [
"Zhaopeng Tu",
"Zhengdong Lu",
"Yang Liu",
"Xiaohua Liu",
"Hang Li"
] | Attention mechanism advanced state-of-the-art neural machine translation (NMT) by jointly learning to align and translate. However, attentional NMT ignores past alignment information, which leads to over-translation and under-translation problems. In response to this problem, we maintain a coverage vector to keep track of the attention history. The coverage vector is fed to the attention model to help adjust the future attention, which guides NMT to pay more attention to the untranslated source words. Experiments show that coverage-based NMT significantly improves both translation and alignment qualities over NMT without coverage. | [
"nmt",
"neural machine translation",
"coverage vector",
"problems",
"response",
"problem",
"track",
"attention history"
] | https://openreview.net/pdf?id=jZ9WrEWPmsnlBG2XfGLl | https://openreview.net/forum?id=jZ9WrEWPmsnlBG2XfGLl | ICLR.cc/2016/workshop | 2016 | {
"note_id": [
"ROVpzJnpYivnM0J1Ipq5",
"K1VMqRGvJu28XMlNCVoV"
],
"note_type": [
"review",
"review"
],
"note_created": [
1457651522538,
1458138602048
],
"note_signatures": [
[
"ICLR.cc/2016/workshop/paper/15/reviewer/11"
],
[
"ICLR.cc/2016/workshop/paper/15/reviewer/12"
]
],
"structured_content_str": [
"{\"title\": \"interesting ideas and results for neural MT, but very difficult to understand and follow. I suspect this is due to hasty and overly-aggressive compression of the original paper to this 3-page format.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper is about introducing a notion of (soft) source-side coverage into neural MT models. The idea makes sense and is shown to produce reasonable gains in BLEU.\\n\\nThis version of the paper is extremely difficult to understand. I had to google for the uncompressed arxiv version to get any kind of confidence that I understood the paper. I would recommend that people read the original paper -- it is quite interesting. I think the authors should have tried to actually write an extended abstract that conveys the key points, rather than trying to fit the entire formal description of their approach into the 3-page format.\", \"below_are_just_a_few_of_the_notational_issues_in_this_version\": \"$auxs$ --> maybe $\\\\psi$?\\n\\n\\\\alpha_{i,j} in Eq. (1) is not defined.\\n\\n\\\\phi(h_j) is used in the equation, but then \\\\phi_i(h_j) is defined immediately thereafter. Which should it be?\\n\\nWhat is the \\\"decoding state\\\" s_i? This is not defined in the paper.\\n\\nThe equation in Section 2.1 uses \\\\alpha_{i,j} but below that the authors write \\\"Here we only employ \\\\alpha_{i-1}...\\\" -- this seems to be a mismatch. Or if it's not a mismatch, I don't understand what it means.\\n\\n\\nThere is also no description of the experimental setup -- only some tables and plots are shown. I think the work is interesting and compelling but I am hesitant to recommend acceptance of this paper as an ICLR workshop paper. I would prefer that the authors submit this as a conference paper to another venue, like ACL, CoNLL, EMNLP, or COLING. This paper would be a good fit for one of these NLP conferences.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"In principle promising idea to add coverage information to an attention-based neural MT system, but paper is uncomprehensible as it is\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper extends attention-based neural machine translation (NNMT) with a coverage model. In the standard attention model, for each target word a subset of relevant source words are \\\"selected\\\" as context vector. In principle, it can happen that source words are used several times or not all. Therefore, the introduction of the notion of source word coverage in an NNMT attention model is an interesting idea (a coverage model is used in standard PB SMT).\\n\\nI don't agree with the statement that learning the coverage information by back-prop is potentially weak and one should add \\\"linguistic information\\\". In that case, one could question the whole idea to do NNMT - in such a model every \\\"decision\\\" is purely statistical without any linguistic information.\\n\\nThe description of the used coverage model itself is very complicated to understand. Given the space constraints, it seems a bad idea to first present a general model (Eqn 1) and than to use a much simpler one, which is insufficiently explained. I wasn't able to understand how the coverage model was calculated, how the fertility probabilities were obtained (Eqn 2+3), etc.\\n\\nFinally, the results are not analyzed - the authors just provide two figures and a table. There is no information on what data the system was trained on nor the actual language pair !! Also, I'm surprised that the BLEU score of the NNMT system decreases substantially with the length of the sentences (Figure 1 left). This is in contrast to results by Bahdanau et al. who show that the attention model does prevent this decrease (plot RNN search-50 in Figure 2 in their paper) ! This raises some doubts on the experimental results ...\\n\\nwhy the attention coverage vector beta is uniformly initialized ? I expect it to be zero (nothing covered)\\n - you use notation without defining it, e.g.\\n - what is d in \\\".. is a vector (d>1) ..\\\"\\n - s_i and h_j ; a small figure would be very helpful !\\n\\nSeveral sentences are difficult to understand and I spotted a couple of stupid errors (e.g. \\\"predefined constantto denoting\\\"). Please proof-read the paper !\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} | Workshop track - ICLR 2016
COVERAGE-BASED NEURAL MACHINE TRANSLATION
Zhaopeng Tu† Zhengdong Lu† Yang Liu‡ Xiaohua Liu† Hang Li†
†Huawei Noah’s Ark Lab, Hong Kong
{tu.zhaopeng,lu.zhengdong,liuxiaohua3,hangli.hl}@huawei.com
‡Department of Computer Science and Technology, Tsinghua University, Beijing
[email protected]
ABSTRACT
Attention mechanism advanced state-of-the-art neural machine translation (NMT)
by jointly learning to align and translate. However, attentional NMT ignores past
alignment information, which leads to over-translation and under-translation prob-
lems. In response to this problem, we maintain a coverage vector to keep track of
the attention history. The coverage vector is fed to the attention model to help ad-
just the future attention, which guides NMT to pay more attention to the untrans-
lated source words. Experiments show that coverage-based NMT significantly
improves both translation and alignment qualities over NMT without coverage.
1 INTRODUCTION
The past several years have witnessed the rapid development of end-to-end neural machine transla-
tion (NMT) (Kalchbrenner & Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015). Unlike
conventional statistical machine translation (SMT) (Brown et al., 1993; Koehn et al., 2003), NMT
proposes to use a single, large neural network instead of latent structures to model the translation
process. However, a serious problem with NMT is the lack of coverage. In SMT, a decoder main-
tains a coverage vector to indicate whether a source word is translated or not. This is important for
ensuring that each source word is translated exactly in decoding. The decoding process is completed
when all source words are translated. In NMT, there is no such coverage vector and the decoding
process ends only when the end-of-sentence tag is produced. We believe that lacking coverage might
result in the following problems in NMT:
1. Over-translation: some words are unnecessarily translated for multiple times;
2. Under-translation: some words are wrongly untranslated.
In this work, we propose a coverage-based approach to NMT to alleviate the over-translation and
under-translation problems. Basically, we append annotation vectors to the intermediate represen-
tation of NMT models, which are updated after each attentive read during the decoding process to
keep track of the attention history. Those annotation vectors, when entering into attention model,
can help adjust the future attention and significantly improve the alignment between source and tar-
get. This design potentially contains many particular cases for coverage modeling with contrasting
characteristics, which all share a clear linguistic intuition and yet can be trained in a data driven
fashion. Notably, in a simple and effective case, we achieve by far the best performance by re-
defining the concept of fertility, as a successful example of re-introducing linguistic knowledge into
neural network-based NLP models. Experiments on large-scale Chinese-English datasets show that
our coverage-based NMT system outperforms conventional attentional NMT significantly on both
translation and alignment tasks.
2 COVERAGE MODEL FOR NMT
In SMT, a coverage set is maintained to keep track of which source words have been translated
(“covered”) in the past. Take an input sentence x = {x1, x2, x3, x4} as an example, the initial
coverage set is C = {0, 0, 0, 0} which denotes no source word is yet translated. When a translation
rule is used to translate {x2, x3}, we produce one hypothesis labelled with coverage C = {0, 1, 1, 0}.
It means that the second and third source words are translated. The goal is to generate translation
with full coverage C = {1, 1, 1, 1}. A source word is translated when it is covered by one translation
rule, and it is not allowed to be translated again in the future. In this way, each source word is
guaranteed to be translated and only be translated once. As shown, coverage is essential for SMT
since it avoids gaps and overlap when translating source words.
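As a toy sketch of this bookkeeping (illustrative only, using 0-based indices for the four source words):

```python
# Hard SMT-style coverage for x = {x1, x2, x3, x4}.
coverage = [0, 0, 0, 0]

def apply_rule(coverage, positions):
    assert all(coverage[p] == 0 for p in positions), "no source word may be translated twice"
    for p in positions:
        coverage[p] = 1

apply_rule(coverage, [1, 2])   # translate {x2, x3}
print(coverage)                # [0, 1, 1, 0]
apply_rule(coverage, [0, 3])   # translate {x1, x4}
print(coverage)                # [1, 1, 1, 1] -> full coverage, decoding can stop
```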
For NMT, directly modeling coverage is less straightforward, but the problem can be significantly
alleviated by keeping track of the attention signal during the decoding process. The most natural way
for doing that is to append an annotation vector βj to each hj ( the input annotation of the jth source
word), which is uniformly initialized but updated after every attentive read of the corresponding
hidden state. This annotation vector will enter the soft attention model for alignment. Intuitively,
at each time step i in the decoding phase, coverage from time step (i − 1) serves as input to the
attention model, which provides complementary information of that how likely the source words are
translated in the past. Since βi−1,j summarizes the attention record for hj, it will discourage further
attention to it if it has been heavily attended, and implicitly push the attention to the less attended
segments of the source sentence since the attention weights are normalized to one. This could
potentially solve both coverage mistakes mentioned above, when modeled and learned properly.
Formally, the coverage model is given by
βi,j = gupdate(βi−1,j, αi,j, Φ(hj), Ψ)    (1)
where gupdate(·) is the function that updates βi,j after the new attention at time step i, βi,j is a
d-dimensional annotation vector summarizing the history of attention up to time step i on hj, Φ(hj)
is a word-specific feature function with its own parameters, and Ψ denotes auxiliary inputs exploited in
different sorts of coverage models.
Equation 1 gives a rather general model, which could take different function forms for gupdate(·)
and Φ(·), and different auxiliary inputs Ψ (e.g., the previous decoding state si−1). In the rest of this
section, we will give a number of representative implementations of the annotation model, which
either resort to the flexibility of neural network function approximation (Section 2.1) or bear more
linguistic intuition (Section 2.2).
2.1 NEURAL NETWORK-BASED COVERAGE MODEL
When βj is a vector (d > 1) and gupdate(·) takes a neural network (NN) form, we actually have a
recurrent neural network (RNN) model for annotation. In our work, we take the following form
βi,j = f (βi−1,j, αi,j, hj, si−1)
where the activation function f (·) is a gated recurrent unit (GRU) (Cho et al., 2014) and si−1 is
the auxiliary input that encodes past translation information. Note that we leave out the the word-
specific feature function Φ(·) and only take hj as the input to the annotation RNN. It is important to
emphasize that the NN-based annotation is able to be fed with arbitrary inputs, such as the previous
attentional context ci−1. Here we only employ αi−1 for past alignment information, si−1 for past
translation information, and hj for word-specific bias.
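A minimal sketch of this NN-based update with a GRU cell is shown below (our illustration; concatenating [αi,j ; hj ; si−1] as the GRU input and the dimensions used are assumptions, not the exact parameterization of the model):

```python
import torch
import torch.nn as nn

class NNCoverage(nn.Module):
    """beta_{i,j} = f(beta_{i-1,j}, alpha_{i,j}, h_j, s_{i-1}) with a GRU-style update."""
    def __init__(self, d_cov, d_h, d_s):
        super().__init__()
        self.cell = nn.GRUCell(input_size=1 + d_h + d_s, hidden_size=d_cov)

    def forward(self, beta_prev, alpha, h, s_prev):
        # beta_prev: (J, d_cov), alpha: (J,), h: (J, d_h), s_prev: (d_s,)
        J = h.size(0)
        inp = torch.cat([alpha.unsqueeze(1), h, s_prev.expand(J, -1)], dim=1)
        return self.cell(inp, beta_prev)

cov = NNCoverage(d_cov=10, d_h=20, d_s=30)
beta = cov(torch.zeros(7, 10), torch.rand(7), torch.randn(7, 20), torch.randn(30))
print(beta.shape)  # torch.Size([7, 10])
```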
Although the NN-based model enjoys the flexibility brought by the nonlinear form, its lack of clear
linguistic meaning may render it hard to train:
the annotation model can only be trained along
with the attention model and gets its supervision signal from it in back-propagation, which could be
weak (relatively distant from the decoding process) and noisy (after the distortion from other under-
trained components in the decoder RNN). Partially to overcome this problem, we also propose the
linguistically inspired model, which has a much clearer interpretation but many fewer parameters.
2.2 LINGUISTIC COVERAGE MODEL
While linguistically-inspired coverage in NMT is similar in spirit to that in SMT, there is one key
difference: it indicates what percentage of source words have been translated (i.e. soft coverage). In
NMT, each target word yi is generated from all source words with probabilities αi,j for source word
xj. In other words, each source word xj is involved in generating all target words and generates αi,j
target words at time step i. Note that, unlike in SMT where each source word is fully translated in
Table 1: Evaluation of translation and alignment qualities. Higher score means better translation
quality, while lower score means better alignment quality. Linguistic coverage overall outperforms
its NN-based counterpart on both translation and alignment tasks, indicating that explicit linguistic
regularities are very important to the attention model.
System                          Translation    Alignment
Moses                           28.41          –
NMT (Bahdanau et al., 2015)     26.20          56.78
NMT + NN-based coverage         27.14          56.17
NMT + Linguistic coverage       27.70          54.91
Figure 1: Performance of the generated translations on the test set with respect to the lengths of the
input sentences. Coverage-based NMT alleviates the problem of under-translation on long sentences
by producing longer translations, leading to better translation performance.
one decoding step, in NMT xj is partially translated at each decoding step. Therefore, the coverage
at time step i denotes the ratio to which each source word has been translated.
We use a scalar (d = 1) to represent linguistic coverages for each source word and employ an
accumulate operation for gupdate. We iteratively construct linguistic coverages through an accumu-
lation of alignment probabilities generated by the attention model, each of which is normalized by a
distinct context-dependent weight. The coverage of source word xj at time step i is computed by
βi,j = (1/Φj) Σ_{k=1}^{i} αk,j    (2)
where Φj is a pre-defined weight which indicates the number of target words xj is expected to
generate. To predict Φj, we introduce the concept of fertility, which is firstly proposed in word-level
SMT (Brown et al., 1993). Fertility of source word xj tells how many target words xj produces. In
this work, we simplify and adapt fertility from the original model1 and compute the fertility Φj by
Φj = N(xj|x) = N(hj) = N · σ(Uf hj)    (3)
where N ∈ R is a predefined constant denoting the maximum number of target words one source
word can produce, σ(·) is a logistic sigmoid function, and Uf ∈ R1×2n is the weight matrix. Here
we use hj to denote (xj|x) since hj contains information about the whole input sentence with a
strong focus on the parts surrounding xj (Bahdanau et al., 2015). Since Φj does not depend on i,
we can pre-compute it before decoding to minimize the computational cost.
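A short numpy sketch of Equations 2-3 (illustrative only; N, the dimensions, and the random inputs below are placeholders rather than values from the experiments):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

J, T, n = 6, 8, 4                      # source length, decoding steps, annotation size
H = np.random.randn(J, 2 * n)          # h_j, one annotation per source word
U_f = np.random.randn(1, 2 * n)        # fertility weights
N = 2.0                                # max target words per source word

phi = N * sigmoid(H @ U_f.T).ravel()   # Eq. (3): fertility of each source word
alpha = np.random.dirichlet(np.ones(J), size=T)   # attention weights, rows sum to 1

beta = np.cumsum(alpha, axis=0) / phi  # Eq. (2): beta[i, j] = (1/phi_j) * sum_{k<=i} alpha[k, j]
print(beta[-1])                        # coverage of each source word after the last step
```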
1Fertility in SMT is a random variable with a set of fertility probabilities, n(Φj|xj) = p(Φ_1^{j−1}, x), which
depends on the fertilities of previous source words. To simplify the calculation and adapt it to the attention
model in NMT, we define the fertility in NMT as a constant number, which is independent of previous fertilities.
REFERENCES
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly
learning to align and translate. ICLR 2015, 2015.
Peter E. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. The math-
ematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19
(2):263–311, 1993.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and
Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical ma-
chine translation. In EMNLP 2014, 2014.
Nal Kalchbrenner and Phil Blunsom. Recurrent continuous translation models. In EMNLP 2013,
2013.
Philipp Koehn, Franz Josef Och, and Daniel Marcu. Statistical phrase-based translation. In NAACL
2003, 2003.
Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. Sequence to sequence learning with neural net-
works. In NIPS 2014, 2014.
| success |
|
xnrA4qzmPu1m7RyVi38Z | CMA-ES for Hyperparameter Optimization of Deep Neural Networks | [
"Ilya Loshchilov",
"Frank Hutter"
] | Hyperparameters of deep neural networks are often optimized by grid search, random search or Bayesian optimization.
As an alternative, we propose to use the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), which is known for its state-of-the-art performance in derivative-free optimization. CMA-ES has some useful invariance properties and is friendly to parallel evaluations of solutions. We provide a toy usage example using CMA-ES to tune hyperparameters of a convolutional neural network for the MNIST dataset on 30 GPUs in parallel. | [
"hyperparameter optimization",
"deep neural networks",
"grid search",
"random search",
"bayesian optimization",
"alternative",
"performance",
"optimization"
] | https://openreview.net/pdf?id=xnrA4qzmPu1m7RyVi38Z | https://openreview.net/forum?id=xnrA4qzmPu1m7RyVi38Z | ICLR.cc/2016/workshop | 2016 | {
"note_id": [
"ZYE6lW1Aki5Pk8ELfENW",
"6XAk3KEykUrVp0EvsEBg",
"MwVMBoZJzfqxwkg1t71j",
"yovRVMVJMur682gwszPD",
"gZWJBXDkPiAPowrRUAKL",
"MwVMZzRBDSqxwkg1t71M",
"vl62XZ46AH7OYLG5in8k",
"p8jOo5YAKcnQVOGWfpk7",
"NLokZ1m68u0VOPA8ixVA",
"4Qyg1WpYnFBYD9yOFqjP"
],
"note_type": [
"comment",
"comment",
"review",
"comment",
"comment",
"review",
"comment",
"review",
"comment",
"comment"
],
"note_created": [
1458842164513,
1458164271298,
1457916673782,
1458164110273,
1458842730422,
1457880707142,
1458842074879,
1458251695755,
1458842754345,
1458163949599
],
"note_signatures": [
[
"~Ilya_Loshchilov1"
],
[
"~Ilya_Loshchilov1"
],
[
"ICLR.cc/2016/workshop/paper/126/reviewer/11"
],
[
"~Ilya_Loshchilov1"
],
[
"~Ilya_Loshchilov1"
],
[
"ICLR.cc/2016/workshop/paper/126/reviewer/10"
],
[
"~Ilya_Loshchilov1"
],
[
"ICLR.cc/2016/workshop/paper/126/reviewer/12"
],
[
"~Ilya_Loshchilov1"
],
[
"~Ilya_Loshchilov1"
]
],
"structured_content_str": [
"{\"title\": \"Dear Reviewer\", \"comment\": \"Dear Reviewer, just in case you missed our reply due to the lack of notifications in OpenReview, we addressed your questions and comments here: http://beta.openreview.net/forum?id=xnrA4qzmPu1m7RyVi38Z Thanks!\"}",
"{\"title\": \"This is a reply to the reviews by the authors. Part 3/3.\", \"comment\": \"Reviewer 2:\\nThe suggestion of using priors over the search space within Bayesian optimization seems very sensible. Note that, Scalable Bayesian Optimization using Deep Networks does exactly this (using a prior mean function as a quadratic bowl centered in the middle of the space). That is in a way analogous to the setup for CMA-ES here (starting with a Gaussian spray of points centered in the middle of the space).\\n\\nThe initialization seems like a major possible source of bias. One might worry that the bounds are setup with the optimum near the center, which would favor the approach that starts with random points at the center. It would be useful to experimentally validate this by starting the Bayesian optimization approaches at the center as well.\", \"authors\": \"We agree that the role of priors is important (in fact, we emphasized that in our original text \\\"This might be because of a bias towards the middle of the range\\\"). We have also added results for TPE, with the same priors, and the priors certainly help. However, Figure 3 in the supplementary material clearly shows that the best solutions most of the time do not lie in the middle of the search range (see, e.g., $x_3, x_6, x_9, x_{12}, x_{13}, x_{18}$). \\n\\n\\nWe thank the reviewers for considering this very different approach for hyperparameter optimization. While we clearly do not believe it to be the answer to all problems, its strengths appear to nicely complement those of existing methods.\"}",
"{\"title\": \"Potentially very useful algorithm proposed for hyperparameter tuning, although it has been proposed before. Promising results but requires more thorough experiments.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Summary:\\nThis paper investigates the use of the CMA-ES algorithm for embarrassingly parallel hyperparameter optimization of continuous (or integer) hyperparameters in deep learning models, specifically convolutional networks. The experiments show that this method can potentially outperform GP-based hyperparameter optimization, although more experiments are needed to draw any solid conclusions. As one example, I think it would be worth investigating how well random search does on this problem as a baseline. For the smaller 5-minute problem at least, the methods should be run multiple times to get error bars.\\n\\nAs far as I know CMA-ES searches locally, which could be a cause for concern if the function is multi-modal. I think that running CMA-ES with a few different initial distributions would be helpful to show whether it is robust to this effect.\", \"novelty\": \"The idea of applying CMA-ES for hyperparameter optimization is not necessarily novel (CMA-ES was used to tune speech recognition models in [1], for example), but the idea is simple enough and potentially practical enough that it is worth investigating for deep learning. A reference to [1] should be added.\", \"clarity\": \"The paper is well written overall, but there is very little information on the CMA-ES algorithm used in the experiments. I recommend adding an algorithm box outlining the CMA-ES approach used in the paper. There are also some non-standard hyperparameters, such as the batch selection, that should be briefly explained.\\n\\nI\\u2019m not sure if the claim that there is no way to truly parallelize SMBO is true. For example, there is the q-EI acquisition function in [2].\\n\\nFrom the experiments, there is a table of transformations that were applied to the hyperparameters. Were these transformations used for all of the methods? Hopefully yes since that would otherwise have a drastic effect on the results.\\n\\nIt would also be really helpful to see what the best hyperparameters are, particularly if they are near the center of the search space.\", \"significance\": \"The nice thing about this paper is that it could result in a very simple and practical methodology. At the moment there are still several open questions, but if the conclusions hold up to more intense scrutiny then it could be very significant.\", \"quality\": \"This paper is a straightforward application of a simple algorithm to a difficult problem and is of sufficient quality for a workshop paper.\", \"pros\": [\"Simple application of a well known algorithm to a practical problem\", \"Results show a lot of promise and merit further investigation\"], \"cons\": [\"Needs more experiments before any conclusions can be drawn\", \"The paper is light on details of the design choices in the experiments\", \"CMA-ES should be more thoroughly described\"], \"references\": \"[1] Watanabe, S. and Le Roux, J. Black box optimization for automatic speech recognition. MITSUBISHI ELECTRIC RESEARCH LABORATORIES TR2014-021, May 2014\\n\\n[2] M. Schonlau. Computer Experiments and global optimization. PhD Thesis. University of Waterloo, 1997\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"This is a reply to the reviews by the authors. Part 2/3.\", \"comment\": \"Reviewer 1: It would also be really helpful to see what the best hyperparameters are, particularly if they are near the center of the search space.\", \"authors\": \"We found that divergence of the network is very rare, so no special measures were taken. The detailed distribution of evaluation qualities is given in Figure 4 of the supplementary material.\", \"reviewer_2\": \"In particular, the comparison is likely conflated by discontinuities in the optimization surface. It seems reasonable to compare to approaches that take this into account and for which implementations are provided in the same package as the authors ran (e.g. PESC).\\nOne concern is discontinuities in the objective function, which could be caused by having the neural net being trained diverge. Looking at the hyperparameter bounds, it seems reasonable to expect this to happen (e.g. high momentum and high learning rate). Various papers (Gelbart et al., Gardner et al., PESC, Snoek, Gramacy) developed constraints to deal with this issue. Did the model diverge during training and if so, did you consider using the constrained alternatives?\"}",
"{\"title\": \"Response 1/2\", \"comment\": \"Thanks for your comments and suggestions!\\nIn the remaining responses, we'll refer to an updated version of the paper available at\", \"https\": \"//sites.google.com/site/cmaesfordnn/iclr2016___hyperparameters.pdf?attredirects=0&d=1 (anonymously for the visitors)\", \"reviewer\": \"Apart from a more comprehensive experiment coverage, all existing experiments require multiple evaluations and corresponding error bars.\", \"authors\": \"We agree that it is useful to run algorithms several times to get error bars. We\\u2019ll do that in the future. For now, we used our small computational budget to also study multiple problems, running CMA-ES on multiple different problems (see Figure 1 top) to quantify its variation across problems as well. One can see that the results for Adam and Adadelta are quite similar when the same budgets are used. This is also the case for the runs with different time budgets.\"}",
"{\"title\": \"Comparing CMA-ES to Bayesian Optimization is a really great idea, but this needs more careful empirical work to be valuable to the community.\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper explores the use of an algorithm from the evolutionary optimization literature as an alternative approach to Bayesian optimization for hyperparameters. In particular, the authors propose the use of CMA-ES for the parallelized hyperparameter optimization of a deep neural network. On one problem, they demonstrate that CMA-ES appears to reach better validation performance than a popular Bayesian optimization method.\\n\\nThis is a well written paper that is easy to follow and offers an interesting datapoint for Bayesian optimization and hyperparameter optimization researchers. One concern, however, is that the empirical evaluation is too light. The authors run a single optimization on just one problem and the experimental setup may have some issues. In particular, the comparison is likely conflated by discontinuities in the optimization surface. It seems reasonable to compare to approaches that take this into account and for which implementations are provided in the same package as the authors ran (e.g. PESC). Also, the reported results on the CIFAR-10 validation set seem too good to be true, which makes one worry about the experimental setup.\\n\\nIn Figure 1, it looks like the GP-based approaches (EI and PES) experience major model fitting issues. This would be suggested by the observation that they don't seem to improve at all after the first few function evaluations. One concern is discontinuities in the objective function, which could be caused by having the neural net being trained diverge. Looking at the hyperparameter bounds, it seems reasonable to expect this to happen (e.g. high momentum and high learning rate). Various papers (Gelbart et al., Gardner et al., PESC, Snoek, Gramacy) developed constraints to deal with this issue. Did the model diverge during training and if so, did you consider using the constrained alternatives?\\n\\nThe CMA curve never seems to sample close to the optimum (i.e. the best values are always extreme outliers). That seems strange. Has it just not converged to the optimum?\\n\\nValidation errors below 0.3% sounds extremely low for CIFAR-10. Typical values currently reported (i.e. state-of-the-art) are around 6% to 8% depending on the type of data augmentation performed.\\n\\nThe suggestion of using priors over the search space within Bayesian optimization seems very sensible. Note that, Scalable Bayesian Optimization using Deep Networks does exactly this (using a prior mean function as a quadratic bowl centered in the middle of the space). That is in a way analagous to the setup for CMA-ES here (starting with a Gaussian spray of points centered in the middle of the space).\\n\\nThe initialization seems like a major possible source of bias. One might worry that the bounds are setup with the optimum near the center, which would favor the approach that starts with random points at the center. It would be useful to experimentally validate this by starting the Bayesian optimization approaches at the center as well.\\n\\nWow, the bounds for selection pressure seem very broad... Does this hyperparameter really vary the objective in an interesting way over a range of 100 orders of magnitude? 
One might imagine that this could really confound model-based optimization approaches, unless the objective varies smoothly accross this space.\\n\\nIn the introduction, I don't think 'perfect parallelization' seems like a fair statement at all. Random search and grid search offer 'perfect parallelization' but that doesn't not imply that these are better approaches. I highly doubt that CMA-ES uses the parallel experiments more efficiently than other approaches. In fact, one might view (as I do) that the need to distribute a random sample of points in CMA-ES is a major disadvantage. It *has* to parallelize, which seems terribly inefficient.\\n\\nOverall, the idea of comparing to CMA-ES seems like a really great idea, since it is the champion algorithm from the evolutionary optimization field. I think this is a good start, but I am concerned as an empirical study it needs more rigor before it should be accepted. Perhaps the authors can address the above concerns in their next manuscript.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Dear Reviewer\", \"comment\": \"Dear Reviewer, just in case you missed our reply due to the lack of notifications in OpenReview, we addressed your questions and comments here: http://beta.openreview.net/forum?id=xnrA4qzmPu1m7RyVi38Z Thanks!\"}",
"{\"title\": \"An very interesting idea worth exploring further. However, additional experiments are required to evaluate its empirical performance.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper proposes using CMA-ES for hyperparameter optimization. An advantage of employing this model is its clear parallelism strategy, which is difficult to achieve in existing approaches. This is an interesting direction of research, as it introduces a new alternative to popular hyperparameter tuning techniques. I am not familiar with CMA-ES, so cannot comment on the novelty of this idea. However, additional experiments are required to validate its empirical success \\u2014 especially once this contribution evolves into a conference track submission.\\n\\nOne aspect that is not clear to me is the tradeoff between \\\"perfect parallelism\\\" and observation efficiency. That is, random search also features perfect parallelism, but past observations don't meaningfully inform future evaluations. \\n\\nCMA-ES is claimed to perform well for larger function budgets, but this seems to be in contrast to the usual (and necessary) assumption of expensive function evaluations. The experiments presented report results for evaluation times of 5-30 minutes, but this is one to two orders of magnitude less than realistic neural network training times.\\n\\nApart from a more comprehensive experiment coverage, all existing experiments require multiple evaluations and corresponding error bars. For example, the experiment on the bottom right seems misleading. The first evaluation by CMA-ES reports a lower error than other approaches are able to ever attain, or attain just prior to convergence. However, this evaluation was done completely at random, so it is not indicative of the performance of this method in general, just of the initialization strategy.\\n\\nIn addition, while we expect CMA-ES to perform well for a large number of observations, GP-based approaches cannot scale to this regime. As such, this approach must be compared to an appropriate baseline (for example, Snoek 2015).\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response 2/2\", \"comment\": \"Reviewer: For example, the experiment on the bottom right seems misleading. The first evaluation by CMA-ES reports a lower error than other approaches are able to ever attain, or attain just prior to convergence. However, this evaluation was done completely at random, so it is not indicative of the performance of this method in general, just of the initialization strategy.\", \"authors\": \"The updated version of the paper also involves two variants of TPE and SMAC. We agree that the work of Snoek 2015 (please note that we mention it in our paper) should also be considered. However, our workshop paper does not attempt to make a final conclusion but rather introduces a new tool to the field.\", \"reviewer\": \"In addition, while we expect CMA-ES to perform well for a large number of observations, GP-based approaches cannot scale to this regime. As such, this approach must be compared to an appropriate baseline (for example, Snoek 2015).\"}",
"{\"title\": \"This is a reply to the reviews by the authors. Part 1/3.\", \"comment\": \"This is a reply to the reviews by the authors.\\nThanks for the reviews! We first point out 2 misunderstandings and then reply to the reviewers\\u2019 questions. \\n\\nReviewer 2 got confused, saying that the reported results of 0.3% on the CIFAR-10 validation set seem too good to be true (state of the art is 6% to 8%), which makes one worry about the experimental setup. \\nThat would be true, except that we never mentioned CIFAR-10 in the paper; everything is on MNIST, for which 0.3% is exactly the performance we should be getting.\\nReviewer 2 said CMA-ES \\u201chas to parallelize\\u201d, which is not true: of course, one can do function evaluations sequentially. Indeed, the bottom left figure is for the sequential setting.\\n\\nIn the remaining responses, we\\u2019ll refer to an updated version of the paper available at\", \"https\": \"//sites.google.com/site/cmaesfordnn/iclr2016___hyperparameters.pdf?attredirects=0&d=1 (anonymously for the visitors).\", \"the_webpage_with_this_pdf_is_https\": \"//sites.google.com/site/cmaesfordnn/\\n\\nResponse to Reviewer 1\", \"reviewer_1\": \"From the experiments, there is a table of transformations that were applied to the hyperparameters. Were these transformations used for all of the methods? Hopefully yes since that would otherwise have a drastic effect on the results.\", \"authors\": \"Yes, of course. All algorithms were provided with the same information and all search in [0,1]^19. (We agree that anything else would lead to completely misleading results.)\"}"
]
} | Workshop track - ICLR 2016
CMA-ES FOR HYPERPARAMETER OPTIMIZATION OF
DEEP NEURAL NETWORKS
Ilya Loshchilov & Frank Hutter
University of Freiburg
Freiburg, Germany,
{ilya,fh}@cs.uni-freiburg.de
ABSTRACT
Hyperparameters of deep neural networks are often optimized by grid search, ran-
dom search or Bayesian optimization. As an alternative, we propose to use the
Covariance Matrix Adaptation Evolution Strategy (CMA-ES), which is known
for its state-of-the-art performance in derivative-free optimization. CMA-ES has
some useful invariance properties and is friendly to parallel evaluations of solu-
tions. We provide a toy usage example using CMA-ES to tune hyperparameters
of a convolutional neural network for the MNIST dataset on 30 GPUs in parallel.
Hyperparameters of deep neural networks (DNNs) are often optimized by grid search, random search (Bergstra & Bengio, 2012) or Bayesian optimization (Snoek et al., 2012a; 2015). For the optimization of continuous hyperparameters, Bayesian optimization based on Gaussian processes (Rasmussen & Williams, 2006) is known as the most effective method, while for joint structure search and hyperparameter optimization, tree-based Bayesian optimization methods (Hutter et al., 2011; Bergstra et al., 2011) are known to perform better (Bergstra et al.; Eggensperger et al., 2013; Domhan et al., 2015); here we focus on continuous optimization. We note that integer parameters with rather wide ranges (e.g., the number of filters) can, in practice, be considered to behave like continuous hyperparameters.
As the evaluation of a DNN hyperparameter setting requires fitting a model and evaluating its perfor-
mance on validation data, this process can be very expensive, which often renders sequential hyper-
parameter optimization on a single computing unit infeasible. Unfortunately, Bayesian optimization
is sequential by nature: while a certain level of parallelization is easy to achieve by conditioning
decisions on expectations over multiple hallucinated performance values for currently running hy-
perparameter evaluations (Snoek et al., 2012a) or by evaluating the optima of multiple acquisition
functions concurrently (Hutter et al., 2012), perfect parallelization appears unattainable since the
decisions in each step depend on all data points gathered so far. Here, we study the use of a different
type of derivative-free continuous optimization method that allows for perfect parallelization.
The Covariance Matrix Adaptation Evolution Strategy (CMA-ES (Hansen & Ostermeier, 2001))
is a state-of-the-art optimizer for continuous black-box functions. While Bayesian optimization
methods often perform best for small function evaluation budgets (e.g., below 10 times the number
of hyperparameters being optimized), CMA-ES tends to perform best for larger function evaluation
budgets; for example, Loshchilov et al. (2013) showed that CMA-ES performed best among more
than 100 classic and modern optimizers on a wide range of blackbox functions.
In a nutshell, CMA-ES is an iterative algorithm that, in each of its iterations, samples λ candidate solutions from a multivariate normal distribution, evaluates these and then adjusts the sampling distribution used for the next iteration to give higher probability to good samples. Usual values for the so-called population size λ are around 10 to 20; in the study we report here, we used a larger size λ = 30 to take full advantage of the 30 GeForce GTX TITAN Black GPUs we had available. Larger values of λ are also known to be helpful for noisy and multi-modal problems. Since all variables are scaled to be in [0, 1], we set the initial sampling distribution to N(0.5, 0.2^2). We did not try to employ any noise reduction techniques (Hansen et al., 2009) or surrogate models (Loshchilov et al., 2012).
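To make the setup concrete, a minimal sketch of this ask/evaluate/tell loop with the pycma package could look as follows; the objective function `validation_error`, which would decode a scaled hyperparameter vector, train the network for the given time budget and return its best validation error, is a placeholder and not part of the code released with the paper.

```python
import cma
from multiprocessing import Pool

def validation_error(x):
    """Placeholder: decode x in [0,1]^19, train the DNN for the time budget,
    and return the smallest validation error over all epochs."""
    raise NotImplementedError

# 19 hyperparameters scaled to [0, 1]; initial distribution N(0.5, 0.2^2), lambda = 30.
es = cma.CMAEvolutionStrategy(19 * [0.5], 0.2, {'popsize': 30, 'bounds': [0, 1]})
with Pool(processes=30) as pool:                         # one worker per GPU in this setup
    while not es.stop():
        candidates = es.ask()                            # sample lambda candidate solutions
        errors = pool.map(validation_error, candidates)  # evaluate them in parallel
        es.tell(candidates, errors)                      # adapt mean, step size and covariance
```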
In the study we report here, we used AdaDelta (Zeiler, 2012) and Adam (Kingma & Ba, 2014)
to train DNNs on the MNIST dataset (50k original training and 10k original validation examples).
Figure 1: Top: Best validation errors found for AdaDelta and Adam with and without batch selection when hyperparameters are optimized by CMA-ES with training time budgets of 5 and 30 minutes. Bottom: Validation errors for Adam with batch selection when solutions are evaluated sequentially for 5 minutes each (bottom left) and in parallel for 30 minutes each (bottom right).
The 19 hyperparameters describing the network structure and the learning algorithms are given in Table 1; the code is also available at https://sites.google.com/site/cmaesfordnn/ (anonymous for the reviewers). We considered both the default (shuffling) and online loss-based batch selection of training examples (Loshchilov & Hutter, 2015). The objective function is the smallest validation error found in all epochs when the training time (including the time spent on model building) is limited.
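For concreteness, a point x in [0,1]^19 can be decoded into actual hyperparameter values with the transformations listed in Table 1; the following sketch shows a few representative entries (the dictionary keys are illustrative names of ours, not identifiers from the released code).

```python
import numpy as np

def decode(x):
    """Map x in [0,1]^19 to hyperparameter values (subset of Table 1; 0-based indexing)."""
    return {
        'batch_size_start': int(2 ** (4 + 4 * x[2])),   # x3:  [2^4, 2^8]
        'bn_alpha':         0.01 + 0.2 * x[5],          # x6:  [0.01, 0.21]
        'dropout_output':   0.8 * x[9],                 # x10: [0, 0.8]
        'n_filters_conv1':  int(2 ** (3 + 5 * x[10])),  # x11: [2^3, 2^8]
        'adam_lr_start':    10 ** (-1 - 3 * x[13]),     # x14 (Adam): [1e-4, 1e-1]
        'adam_beta1':       0.8 + 0.199 * x[15],        # x16 (Adam): [0.8, 0.999]
        'adaptation_end':   int(20 + 200 * x[18]),      # x19: [20, 220]
    }

print(decode(np.full(19, 0.5)))  # hyperparameters at the centre of the search space
```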
The baseline we compare CMA-ES to is GP-based Bayesian optimization, as implemented by the widely known Spearmint system (Snoek et al., 2012a) (available at https://github.com/HIPS/Spearmint). In particular, we compared to Bayesian optimization with two different acquisition functions: (i) Expected Improvement (EI), as described by Snoek et al. (2012b) and implemented in the main branch of Spearmint; and (ii) Predictive Entropy Search (PES), as described by Hernández-Lobato et al. (2014) and implemented in a sub-branch of Spearmint (available at https://github.com/HIPS/Spearmint/tree/PESC). Experiments by Hernández-Lobato et al. (2014) demonstrated that PES is superior to EI; our own (unpublished) preliminary experiments on the black-box benchmarks used for the evaluation of CMA-ES by Loshchilov et al. (2013) also confirmed this. Both EI and PES have an option to notify the method about whether the problem at hand is noisy or noiseless. To avoid a poor choice on our side, we ran both algorithms in both regimes. Similarly to CMA-ES, to benefit from parallel evaluations in EI&PES, we set the maximum number of concurrent jobs in Spearmint to 30.
Figure 1 (top) shows the results of running CMA-ES on 30 GPUs on eight different hyperparameter
optimization problems: all combinations of using (1) AdaDelta (Zeiler, 2012) or Adam (Kingma &
Ba, 2014); (2) standard shuffling batch selection or batch selection based on the latest known loss
(Loshchilov & Hutter, 2015); and (3) allowing 5 minutes or 30 minutes of network training time. We
note that in all cases CMA-ES steadily improved the best validation error over time and in the best
case yielded validation errors below 0.3% in a network trained for only 30 minutes (and 0.42% for
a network trained for only 5 minutes). We also note that batch selection based on the latest known
loss performed better than shuffling batch selection and that the results of AdaDelta and Adam were
almost indistinguishable. Therefore, the rest of the figure shows only the case of Adam with batch
selection based on the latest known loss.
Figure 1 (bottom) compares the results of CMA-ES vs. Bayesian optimization with EI&PES. In this
figure, to illustrate the actual function evaluations, each evaluation within the range of the y-axis is
depicted by a dot. Figure 1 (bottom left) shows the results of all tested algorithms when solutions
are evaluated sequentially with a relatively small network training time of 5 minutes each. Note that
we use CMA-ES with λ = 30 and thus the first 30 solutions are sampled from the prior isotropic
(not yet adapted) Gaussian with a mean of 0.5 and standard deviation of 0.2. Apparently, the results
of this sampling are as good as the ones produced by EI&PES. This might be because of a bias to-
wards the middle of the range, or because EI&PES do not work well on this noisy high-dimensional
problem, or because of both. Quite in line with the conclusion of Bergstra & Bengio (2012), it
seems that the presence of noise and rather wide search ranges of hyperparameters make sequen-
tial optimization with small budgets rather inefficient, i.e., as efficient as random sampling. One
way to combat this would be to support prior distributions over good parameter ranges in GP-based
Bayesian optimization, but to date no system implements this.
Figure 1 (bottom right) shows the results of all tested algorithms when solutions are evaluated
in parallel on 30 GPUs. Each DNN now trains for 30 minutes, meaning that, for each optimizer,
running this experiment sequentially would take 30 000 minutes (or close to 21 days) on one GPU;
in parallel on 30 GPUs, it only required 17 hours. Compared to the sequential 5-minute setting, the
greater budget of the parallel setting allowed CMA-ES to improve results such that most of its latest
solutions had validation error below 0.4%. The internal cost of CMA-ES was virtually zero, but it
was a substantial factor for EI&PES due to the cubic complexity of standard GP-based Bayesian
optimization: after having evaluated 100 configurations, it took roughly 30 minutes to generate
30 new configurations to evaluate, and as a consequence 300 evaluations by EI&PES took more
wall-clock time than 1000 evaluations by CMA-ES. This problem could be addressed by using
approximate GPs (Rasmussen & Williams, 2006) or another efficient multi-core implementation of Bayesian optimization, such as the one by Snoek et al. (2015). However, the performance of
EI&PES in terms of the validation error was also inferior to the one of CMA-ES. One reason might
be that this benchmark was too noisy and high-dimensional for EI&PES.
In conclusion, we propose to consider CMA-ES as one alternative in the mix of methods for hy-
perparameter optimization of DNNs. It is powerful, computationally cheap and natively supports
parallel evaluations. Our preliminary results suggest that CMA-ES can be competitive especially in
the regime of parallel evaluations. However, we still need to carry out a much broader and more
detailed comparison, involving more test problems and the tree-based Bayesian optimization algo-
rithms TPE (Bergstra et al., 2011) and SMAC (Hutter et al., 2011).
REFERENCES
Bergstra, J. and Bengio, Y. Random search for hyper-parameter optimization. JMLR, 13:281–305, 2012.
Bergstra, J., Yamins, D., and Cox, D. Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures. In Proc. of ICML'13, pp. 115–123, 2013.
Bergstra, J., Bardenet, R., Bengio, Y., and Kégl, B. Algorithms for hyper-parameter optimization. In Proc. of
NIPS’11, pp. 2546–2554, 2011.
Domhan, Tobias, Springenberg, Jost Tobias, and Hutter, Frank. Speeding up automatic hyperparameter opti-
mization of deep neural networks by extrapolation of learning curves. In Proc. of IJCAI’15, pp. 3460–3468,
2015.
Eggensperger, K., Feurer, M., Hutter, F., Bergstra, J., Snoek, J., Hoos, H., and Leyton-Brown, K. Towards
an empirical foundation for assessing Bayesian optimization of hyperparameters. In Proc. of BayesOpt’13,
2013.
Hansen, Nikolaus and Ostermeier, Andreas. Completely derandomized self-adaptation in evolution strategies.
Evolutionary computation, 9(2):159–195, 2001.
Table 1: Hyperparameter descriptions, pseudocode transformations and ranges

| name | description | transformation | range |
|------|-------------|----------------|-------|
| x1 | selection pressure at e0 | 10^(-2 + 10^2 * x1) | [10^-2, 10^98] |
| x2 | selection pressure at eend | 10^(-2 + 10^2 * x2) | [10^-2, 10^98] |
| x3 | batch size at e0 | 2^(4 + 4 * x3) | [2^4, 2^8] |
| x4 | batch size at eend | 2^(4 + 4 * x4) | [2^4, 2^8] |
| x5 | frequency of loss recomputation r_freq | 2 * x5 | [0, 2] |
| x6 | alpha for batch normalization | 0.01 + 0.2 * x6 | [0.01, 0.21] |
| x7 | epsilon for batch normalization | 10^(-8 + 5 * x7) | [10^-8, 10^-3] |
| x8 | dropout rate after the first Max-Pooling layer | 0.8 * x8 | [0, 0.8] |
| x9 | dropout rate after the second Max-Pooling layer | 0.8 * x9 | [0, 0.8] |
| x10 | dropout rate before the output layer | 0.8 * x10 | [0, 0.8] |
| x11 | number of filters in the first convolution layer | 2^(3 + 5 * x11) | [2^3, 2^8] |
| x12 | number of filters in the second convolution layer | 2^(3 + 5 * x12) | [2^3, 2^8] |
| x13 | number of units in the fully-connected layer | 2^(4 + 5 * x13) | [2^4, 2^9] |
| x14 | Adadelta: learning rate at e0 | 10^(0.5 - 2 * x14) | [10^-1.5, 10^0.5] |
| x15 | Adadelta: learning rate at eend | 10^(0.5 - 2 * x15) | [10^-1.5, 10^0.5] |
| x16 | Adadelta: ρ | 0.8 + 0.199 * x16 | [0.8, 0.999] |
| x17 | Adadelta: ε | 10^(-3 - 6 * x17) | [10^-9, 10^-3] |
| x14 | Adam: learning rate at e0 | 10^(-1 - 3 * x14) | [10^-4, 10^-1] |
| x15 | Adam: learning rate at eend | 10^(-3 - 3 * x15) | [10^-6, 10^-3] |
| x16 | Adam: β1 | 0.8 + 0.199 * x16 | [0.8, 0.999] |
| x17 | Adam: ε | 10^(-3 - 6 * x17) | [10^-9, 10^-3] |
| x18 | Adam: β2 | 1 - 10^(-2 - 2 * x18) | [0.99, 0.9999] |
| x19 | adaptation end epoch index eend | 20 + 200 * x19 | [20, 220] |
Hansen, Nikolaus, Niederberger, André SP, Guzzella, Lino, and Koumoutsakos, Petros. A method for handling
uncertainty in evolutionary optimization with an application to feedback control of combustion. Evolutionary
Computation, IEEE Transactions on, 13(1):180–197, 2009.
Hernández-Lobato, José Miguel, Hoffman, Matthew W, and Ghahramani, Zoubin. Predictive entropy search
for efficient global optimization of black-box functions. In Proc. of NIPS’14, pp. 918–926, 2014.
Hutter, F., Hoos, H., and Leyton-Brown, K. Sequential model-based optimization for general algorithm config-
uration. In Proc. of LION’11, pp. 507–523, 2011.
Hutter, F., Hoos, H., and Leyton-Brown, K. Parallel algorithm configuration. In Proc. of LION’12, pp. 55–70,
2012.
Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization.
arXiv preprint
arXiv:1412.6980, 2014.
Loshchilov, Ilya and Hutter, Frank. Online batch selection for faster training of neural networks. arXiv preprint
arXiv:1511.06343, 2015.
Loshchilov, Ilya, Schoenauer, Marc, and Sebag, Michele. Self-adaptive surrogate-assisted covariance matrix
adaptation evolution strategy. In Proc. of GECCO’12, pp. 321–328. ACM, 2012.
Loshchilov, Ilya, Schoenauer, Marc, and Sebag, Michèle. Bi-population CMA-ES algorithms with surrogate
models and line searches. In Proc. of GECCO’13, pp. 1177–1184. ACM, 2013.
Rasmussen, C. and Williams, C. Gaussian Processes for Machine Learning. The MIT Press, 2006.
Snoek, J., Larochelle, H., and Adams, R. P. Practical Bayesian optimization of machine learning algorithms.
In Proc. of NIPS’12, pp. 2960–2968, 2012a.
Snoek, Jasper, Larochelle, Hugo, and Adams, Ryan P. Practical bayesian optimization of machine learning
algorithms. In Advances in neural information processing systems, pp. 2951–2959, 2012b.
Snoek, Jasper, Rippel, Oren, Swersky, Kevin, Kiros, Ryan, Satish, Nadathur, Sundaram, Narayanan, Patwary,
Md, Ali, Mostofa, Adams, Ryan P, et al. Scalable bayesian optimization using deep neural networks. arXiv
preprint arXiv:1502.05700, 2015.
Zeiler, Matthew D. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
| success |
|
0YrnoNZ7PTGJ7gK5tNYY | VARIATIONAL STOCHASTIC GRADIENT DESCENT | [
"Michael Tetelman"
] | In Bayesian approach to probabilistic modeling of data we select a model for probabilities of data that depends on a continuous vector of parameters. For a given data set Bayesian theorem gives a probability distribution of the model parameters. Then the inference of outcomes and probabilities of new data could be found by averaging over the parameter distribution of the model, which is an intractable problem. In this paper we propose to use Variational Bayes (VB) to estimate Gaussian posterior of model parameters for a given Gaussian prior and Bayesian updates in a form that resembles SGD rules. It is shown that with incremental updates of posteriors for a selected sequence of data points and a given number of iterations the variational approximations are defined by a trajectory in space of Gaussian parameters, which depends on a starting point defined by priors of the parameter distribution, which are true hyper-parameters. The same priors are providing a weight decay or L2 regularization for the training. Then a selection of L2 regularization parameters and a number of iterations is completely defining a learning rule for VB SGD optimization, unlike other methods with momentum (Duchi et al., 2011; Kingma & Ba, 2014; Zeiler, 2012) that need selecting learning, regularization rates, etc., separately. We consider application of VB SGD for important practical case of fast training neural networks with very large data. While the speedup is achieved by partitioning data and training in parallel the resulting set of solutions obtained with VB SGD forms a Gaussian mixture. By applying VB SGD optimization to the Gaussian mixture we can merge multiple neural networks of same dimensions into a new single neural network that has almost the same performance as an original Gaussian mixture.
| [
"data",
"model",
"probabilities",
"model parameters",
"parameter distribution",
"number",
"iterations",
"priors",
"training",
"vb sgd optimization"
] | https://openreview.net/pdf?id=0YrnoNZ7PTGJ7gK5tNYY | https://openreview.net/forum?id=0YrnoNZ7PTGJ7gK5tNYY | ICLR.cc/2016/workshop | 2016 | {
"note_id": [
"OM0mBBk7Gcp57ZJjtNPw",
"lx9ZgxVyvt2OVPy8Cvq7",
"oVg3PAMLzcrlgPMRsB6n"
],
"note_type": [
"review",
"review",
"official_review"
],
"note_created": [
1457640787797,
1457616172597,
1457609226600
],
"note_signatures": [
[
"ICLR.cc/2016/workshop/paper/23/reviewer/10"
],
[
"~Tapani_Raiko1"
],
[
"~Jose_Miguel_Hernandez_Lobato1"
]
],
"structured_content_str": [
"{\"title\": \"Below the bar\", \"rating\": \"3: Clear rejection\", \"review\": \"This paper proposes a method for online updates of a variational approximation of the posterior over neural network weights. No experimental evaluation is provided. The presentation is intelligible, but far from clear.\\n\\nThe idea of using a recursive variational Bayes approximation for streaming data was proposed in Broderick et al.'s SDA-Bayes paper (http://papers.nips.cc/paper/4980-streaming-variational-bayes). But as another reviewer noted, online variational inference has been around since at least Sato's 2001 paper on online model selection with variational Bayes, and in a sense since the 1998 Neal and Hinton paper on incremental EM.\\n\\nThere have been plenty of papers about variational inference for neural networks, for example, Graves's Practical Inference for Neural Networks (2011) or Hinton's original 1993 variational inference/MDL paper (http://dl.acm.org/citation.cfm?id=168306).\\n\\nThe idea of using the variational distribution's variance to control step size is interesting. It's sort of related to recent papers that use trust regions/prox algorithms to optimize variational approximations (Theis&Hoffman, 2015; Khan et al., 2015).\\n\\nHowever, that doesn't mean it will work. With no experimental validation, it's impossible to say whether this is anything more than a cute idea.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Not mature enough even for a workshop presentation\", \"rating\": \"3: Clear rejection\", \"review\": \"Manuscript describes variational Bayesian (VB) treatment of weights in neural networks and online learning for them.\\n\\nSimilar ideas have been studied recently, for instance in\", \"http\": \"//jmlr.org/proceedings/papers/v37/blundell15.pdf\\nbut relationship to existing work is not presented clearly. Instead, using VB for network weights is presented as something novel.\\n\\nThere is no clear theoretical contribution or any experiments.\", \"there_is_one_crucial_error_as_well\": \"Bottom of page 3 writes that \\\"...distribution of the whole ensemble is a mix of...\\\" whereas the equation on the top of page 3 is a product rather than a mixture.\\n\\nThis paper might be of interest to the author, covering similar ideas:\", \"https\": \"//www.hiit.fi/u/ahonkela/papers/ica2003.pdf\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Review for VARIATIONAL STOCHASTIC GRADIENT DESCENT\", \"rating\": \"3: Clear rejection\", \"review\": \"The authors propose an approach for the on-line maximization of the variational lower bound. The new method is based on iterating over the data and solving individual optimization problems between the current posterior approximation and the product of that posterior approximation and the likelihood function for the current data point. The advantages of the proposed approach with respect to other variational techniques is that it does not require to use learning rates or compute complicated expectations with respect to the variational approximation.\", \"quality\": \"The proposed approach is not validated in any form of experiments. It is not clear how well it is going to work since variational Bayes is known to under-estimate variance and its application in an on-line manner could make more significant this problem because of consecutive under-estimation of variances at each iteration. Another problem is that there is no guarantee that the proposed approach is going to converge to any local minimizer of the original variational bound. In fact, by looking at equation 5, the update for the variance produces increasingly small variances. This means that the proposed approach would converge to a point mass at the mean of the posterior approximation q.\\n\\nThe mixture of Gaussians in Section 3 does not seem to be the correct approach. The correct approach would be to compute the product of all these Gaussians to obtain a final Gaussian approximation (accounting for the prior being repeated multiple times). The correct approach is given in\\n\\nExpectation propagation as a way of life\\nAndrew Gelman, Aki Vehtari, Pasi Jyl\\u00e4nki, Christian Robert, Nicolas Chopin, John P. Cunningham\", \"http\": \"//arxiv.org/abs/1412.4869\", \"clarity\": \"The work needs to be improved for clarity. It is not clear how equation 4 is obtained. The equation above equation 4 seems to come from performing a Laplace approximation. The authors should clarify this possible connexion with the Laplace approximation.\", \"originality\": \"The approach proposed seems to be original up to my knowledge.\", \"significance\": \"It is not clear how significant the proposed method is since one can use stochastic optimization to optimize the variational lower bound. The approach for training neural networks fast by splitting the data seems to be wrong.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} | Workshop track - ICLR 2016
VARIATIONAL STOCHASTIC GRADIENT DESCENT
Michael Tetelman
InvenSense, Inc
[email protected]
ABSTRACT
In Bayesian approach to probabilistic modeling of data we select a model for prob-
abilities of data that depends on a continuous vector of parameters. For a given
data set Bayesian theorem gives a probability distribution of the model parame-
ters. Then the inference of outcomes and probabilities of new data could be found
by averaging over the parameter distribution of the model, which is an intractable
problem.
In this paper we propose to use Variational Bayes (VB) to estimate
Gaussian posterior of model parameters for a given Gaussian prior and Bayesian
updates in a form that resembles SGD rules. It is shown that with incremental
updates of posteriors for a selected sequence of data points and a given number
of iterations the variational approximations are defined by a trajectory in space
of Gaussian parameters, which depends on a starting point defined by priors of
the parameter distribution, which are true hyper-parameters. The same priors are
providing a weight decay or L2 regularization for the training. Then a selection
of L2 regularization parameters and a number of iterations is completely defining
a learning rule for VB SGD optimization, unlike other methods with momentum
(Duchi et al., 2011; Kingma & Ba, 2014; Zeiler, 2012) that need selecting learn-
ing, regularization rates, etc., separately. We consider application of VB SGD
for important practical case of fast training neural networks with very large data.
While the speedup is achieved by partitioning data and training in parallel the
resulting set of solutions obtained with VB SGD forms a Gaussian mixture. By
applying VB SGD optimization to the Gaussian mixture we can merge multiple
neural networks of same dimensions into a new single neural network that has
almost the same performance as an original Gaussian mixture.
1 BAYESIAN METHOD
In the Bayesian approach to probabilistic modeling of data we select a family of models for the probabilities of data that generally depends on a continuous vector of parameters (MacKay, 1995; Bishop, 1995).
Let $P_1(y|\vec{x}, \vec{w})$ be a conditional probability of label $y$ given input vector $\vec{x}$ that depends on a vector of continuous parameters $\vec{w}$.
Then, for observed data given as pairs $\{\vec{x}_t, y_t\}$, $t = 1 \ldots T$, Bayes' theorem defines a probability distribution of the model parameters:
$$\mathrm{Prob}(\vec{w}) \propto P_0(\vec{w}) \prod_{t=1}^{T} P_1(y_t|\vec{x}_t, \vec{w}). \qquad (1)$$
Here, $P_0(\vec{w})$ is a prior probability distribution of the model parameters $\vec{w}$.
With the Bayesian method, the inference of outcomes, probabilities of new data and other values of interest can be found by computing averages with respect to the parameter distribution of the model. For example, the probability of a label $y$ given a new, never observed input $\vec{x}$ is obtained by the following expression:
$$\mathrm{Prob}(y|\vec{x}) = \int d\vec{w}\, P_1(y|\vec{x}, \vec{w}) \left( P_0(\vec{w}) \prod_{t=1}^{T} P_1(y_t|\vec{x}_t, \vec{w}) \right) \Bigg/ \int d\vec{w} \left( P_0(\vec{w}) \prod_{t=1}^{T} P_1(y_t|\vec{x}_t, \vec{w}) \right)$$
However, computing Bayesian integrals over the parameters $\vec{w}$ with the parameter distribution above is a difficult problem. A standard approach is to find a single point $\vec{w}_0$ in the $w$-parameter space, a maximum of the parameter distribution. With this maximum likelihood method the parameter distribution is simplified to become a delta function:
$$\mathrm{Prob}(\vec{w}) = \delta(\vec{w} - \vec{w}_0), \qquad \vec{w}_0 = \arg\max_{\vec{w}} \left( P_0(\vec{w}) \prod_{t=1}^{T} P_1(y_t|\vec{x}_t, \vec{w}) \right).$$
The Variational Bayes method allows us to obtain an approximation of the probability distribution over parameters in a form that could make computing the integrals possible (Bishop, 2006). With VB we can find distributions that are less trivial than a delta function and still manageable for computing the averages of interest.
In this paper we propose to use Variational Bayes to estimate a Gaussian posterior of the parameters for a given Gaussian prior and Bayesian updates with a given model of the data.
To do that we will use the following trick and Jensen's inequality for the average of an exponential to transform the Bayesian integral with some probability $P(\vec{w})$ into a better form:
$$\int d\vec{w}\, P(\vec{w}) = \int d\vec{w}\, Q(\vec{w}|\phi)\, \frac{P(\vec{w})}{Q(\vec{w}|\phi)} \geq \exp\left( \int d\vec{w}\, Q(\vec{w}|\phi) \ln \frac{P(\vec{w})}{Q(\vec{w}|\phi)} \right). \qquad (2)$$
Here, the new probability distribution $Q(\vec{w}|\phi)$ is a variational approximation of the probability distribution $P(\vec{w})$. The distribution $Q(\vec{w}|\phi)$ depends on a set of parameters $\phi$. By maximizing the integral on the right side of the equation above over the parameters $\phi$ we can find the distribution $Q(\vec{w}|\phi)$ that is the best approximation of $P(\vec{w})$. The right side of eq. 2 contains the negative of the well-known Kullback-Leibler (KL) divergence between the distributions $Q$ and $P$, so the best $Q$ is the one that minimizes the KL divergence in eq. 2.
2 VARIATIONAL BAYES SGD
We will consider a distribution $Q(\vec{w})$ that is a product of Gaussian distributions for all components of the vector $\vec{w}$:
$$Q(\vec{w}|\vec{\mu}, \vec{\sigma}) = \prod_i \frac{1}{\sqrt{2\pi\sigma_i^2}} \exp\left( -\frac{(w_i - \mu_i)^2}{2\sigma_i^2} \right)$$
to approximate the distribution $\mathrm{Prob}(\vec{w})$ in eq. 1.
Directly computing the integral in the KL divergence with the Gaussian distribution $Q(\vec{w})$ and $\mathrm{Prob}(\vec{w})$ is still a difficult problem. This problem can be solved with the following iterative approach.
The distribution $\mathrm{Prob}(\vec{w})$ in eq. 1 consists of a product of the prior distribution of $\vec{w}$ and the probabilities of the observed data points, up to a normalization constant. We can consider the effect of the observed data as a Bayesian update of the prior distribution $P_0(\vec{w})$ to the posterior distribution $Q(\vec{w})$. To make this update accurate we can do it incrementally in $N$ iterations, using a fraction of a data point's contribution at a time.
Let us use a Gaussian prior $P_0(\vec{w})$. Then it is equal to $Q_0(\vec{w}) = Q(\vec{w}|\vec{\mu}_0, \vec{\sigma}_0)$ for some parameters $(\vec{\mu}_0, \vec{\sigma}_0)$.
Because for large enough $N$ the contribution of the data in eq. 1 can be represented as a product of factors where each factor is close to 1,
$$\mathrm{Prob}(\vec{w}) \propto Q_0(\vec{w}) \left[ \prod_{t=1}^{T} P_1(y_t|\vec{x}_t, \vec{w})^{\frac{1}{N}} \right]^{N}$$
we can replace $Q_0(\vec{w})\, P_1(y_t|\vec{x}_t, \vec{w})^{1/N}$ with $Q_1(\vec{w})$, where $Q_1(\vec{w})$ minimizes the KL divergence
$$Q_{1,t}(\vec{w}) = Q(\vec{w}|\vec{\mu}_{1,t}, \vec{\sigma}_{1,t}), \qquad (\vec{\mu}_{1,t}, \vec{\sigma}_{1,t}) = \arg\max_{\vec{\mu}, \vec{\sigma}} \int d\vec{w}\, Q(\vec{w}|\vec{\mu}, \vec{\sigma}) \ln \frac{Q_0(\vec{w}) \left[ P_1(y_t|\vec{x}_t, \vec{w}) \right]^{\frac{1}{N}}}{Q(\vec{w}|\vec{\mu}, \vec{\sigma})} \qquad (3)$$
$Q_1(\vec{w})$ is a Bayesian update of the prior $Q_0(\vec{w})$ from a $1/N$-fraction of a data point $t$. By repeating these Bayesian updates for each data point and iteration $n$ we will find a sequence of approximations
$$Q_n(\vec{w}) \rightarrow Q_{n+1}(\vec{w}), \qquad Q_{n+1}(\vec{w}) = Q(\vec{w}|\vec{\mu}_{n+1}, \vec{\sigma}_{n+1}),$$
$$(\vec{\mu}_{n+1}, \vec{\sigma}_{n+1}) = \arg\max_{\vec{\mu}, \vec{\sigma}} \int d\vec{w}\, Q(\vec{w}|\vec{\mu}, \vec{\sigma}) \ln \frac{Q_n(\vec{w}) \left[ P_1(y_t|\vec{x}_t, \vec{w}) \right]^{\frac{1}{N}}}{Q(\vec{w}|\vec{\mu}, \vec{\sigma})}$$
with a final $Q_N(\vec{w})$ approximating $\mathrm{Prob}(\vec{w})$ in eq. 1.
We will compute the integral above in the limit of small variances $\sigma_i^2$ by expanding $P_1(\vec{w})$ around $\vec{\mu}$ and keeping only the leading terms; then
$$\int d\vec{w}\, Q(\vec{w}|\vec{\mu}, \vec{\sigma}) \ln P_1(\vec{w}) \approx \ln P_1(\vec{\mu}) + \sum_i \frac{1}{2}\, \sigma_i^2\, \frac{\partial^2}{\partial w_i^2} \ln P_1(\vec{w}) \Big|_{\vec{w}=\vec{\mu}}.$$
Now, by maximizing over $\vec{\mu}$ and $\vec{\sigma}$ we can obtain the VB SGD update rules for a single data point:
$$\mu_{n+1,i} = \mu_{n,i} + \frac{\sigma_{n,i}^2}{N} \frac{\partial}{\partial w_i} \ln P_1(y_t|\vec{x}_t, \vec{w}) \Big|_{\vec{w}=\vec{\mu}_n}, \qquad \frac{1}{\sigma_{n+1,i}^2} = \frac{1}{\sigma_{n,i}^2} - \frac{1}{N} \frac{\partial^2}{\partial w_i^2} \ln P_1(y_t|\vec{x}_t, \vec{w}) \Big|_{\vec{w}=\vec{\mu}_n} \qquad (4)$$
After iterating over the whole data set, the term with the second derivative in eq. 4 can be considered as an average over the empirical distribution $q(x, y)$: $\sum_{x,y} q(x, y)\, \delta^2 \ln P_1(y|x, w)$. That average satisfies the following identity: $\langle \delta^2 \ln P \rangle = \langle \delta^2 P / P \rangle - \langle (\delta \ln P)^2 \rangle$. If the model probability $P$ is close enough to the empirical probability, we can neglect the term with the second derivative of $P$ and keep only the term with the square of the first derivative of the log of the probability.
Then, finally, we have the VB SGD update rule for $\sigma$ with the first order gradient:
$$\frac{1}{\sigma_{n+1,i}^2} = \frac{1}{\sigma_{n,i}^2} + \frac{1}{N} \left( \frac{\partial}{\partial w_i} \ln P_1(y_t|\vec{x}_t, \vec{w}) \right)^2 \Bigg|_{\vec{w}=\vec{\mu}_n} \qquad (5)$$
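As an illustration, a minimal NumPy sketch of the per-parameter updates in eqs. 4-5 for a diagonal Gaussian posterior could look as follows; the gradient function `grad_log_lik` and the prior values are placeholders, not quantities specified in this paper.

```python
import numpy as np

def vb_sgd_step(mu, inv_var, grad_log_lik, x_t, y_t, N):
    """One VB SGD update (eqs. 4-5) of a diagonal Gaussian posterior.

    mu, inv_var : posterior mean and inverse variance (1/sigma^2) per parameter
    grad_log_lik: returns d/dw ln P1(y_t | x_t, w) evaluated at w = mu
    N           : number of fractional updates per data point
    """
    g = grad_log_lik(mu, x_t, y_t)        # gradient of the log-likelihood at the mean
    mu = mu + g / (inv_var * N)           # eq. 4: mean step scaled by sigma^2 / N
    inv_var = inv_var + g ** 2 / N        # eq. 5: precision grows with the squared gradient
    return mu, inv_var

# The posterior is initialised at the Gaussian prior N(mu_0, sigma_0^2); the prior plays
# the role of the L2 regulariser mentioned in the abstract (placeholder values below).
mu0, inv_var0 = np.zeros(10), np.full(10, 1.0 / 0.1 ** 2)
```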
3 MERGING MULTIPLE MODELS
When training multiple models of the same dimensions on different partitions of the data, VB SGD gives us a Gaussian distribution for each model, and the distribution of the whole ensemble is a mix of Gaussian distributions. We apply VB SGD to find a single Gaussian distribution that approximates the mix: $G_{mix}(\vec{w}) = \frac{1}{T} \sum_t G_t(\vec{w})$.
The update rule is the same as in eq. 4, only instead of $P_1$ we use the ratio $G_{mix}(\vec{w}) / Q(\vec{w}|\vec{\mu}_n, \vec{\sigma}_n)$.
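One possible reading of this merging procedure is sketched below in NumPy; the number of steps, the value of N and the evaluation of the gradient at the current mean (where the ln Q term does not contribute) are our assumptions rather than details given in the paper.

```python
import numpy as np
from scipy.special import logsumexp

def grad_log_mix(w, mus, sigmas):
    """Gradient of ln G_mix(w) for a uniform mixture of diagonal Gaussians.
    mus, sigmas: (T, D) arrays with the means / std devs of the T trained models."""
    log_comp = -0.5 * np.sum(((w - mus) / sigmas) ** 2 + np.log(2 * np.pi * sigmas ** 2), axis=1)
    resp = np.exp(log_comp - logsumexp(log_comp))            # component responsibilities
    return np.sum(resp[:, None] * (mus - w) / sigmas ** 2, axis=0)

def merge_models(mus, sigmas, mu0, sigma0, n_steps=1000, N=100):
    """Fit a single diagonal Gaussian to the mixture with the updates of eqs. 4-5,
    using ln(G_mix / Q) in place of ln P1 as described above."""
    mu, inv_var = mu0.copy(), 1.0 / sigma0 ** 2
    for _ in range(n_steps):
        g = grad_log_mix(mu, mus, sigmas)                    # d/dw ln Q vanishes at w = mu
        mu = mu + g / (inv_var * N)
        inv_var = inv_var + g ** 2 / N
    return mu, np.sqrt(1.0 / inv_var)
```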
REFERENCES
C. M. Bishop. Neural networks for pattern recognition. Oxford University Press, 1995.
Christopher M. Bishop. Pattern Recognition and Machine Learning.
Information Science and
Statistics. Springer, New York, 2nd edition, 2006.
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 12:2121–2159, 2011.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR,
abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980.
D. J. C. MacKay. Probable networks and plausible predictions — a review of practical Bayesian
methods for supervised neural networks. Network: Computation in Neural Systems, 6:469–505,
1995.
Matthew D. Zeiler. ADADELTA: An Adaptive Learning Rate Method. CoRR, abs/1212.5701, 2012.
| success |
|
oVgo1Xo3KTrlgPMRsBVZ | Manifold traversal using density ridges | [
"Jonas Nordhaug Myhre",
"Michael Kampffmeyer",
"Robert Jenssen"
] | In this work we present two examples of how a manifold learning model can represent the complexity of shape variation in images.
Manifold learning techniques for image manifolds can be used to model data in sparse manifold regions.
Additionally, they can be used as generative models as they can often better represent or learn structure in the data.
We propose a method of estimating the underlying manifold using the ridges of a kernel density estimate as well as tangent space operations that allows interpolation between images along the manifold and offers a novel approach to analyzing the image manifold. | [
"manifold",
"density ridges",
"images",
"data",
"manifold traversal",
"traversal",
"work",
"present",
"examples",
"model"
] | https://openreview.net/pdf?id=oVgo1Xo3KTrlgPMRsBVZ | https://openreview.net/forum?id=oVgo1Xo3KTrlgPMRsBVZ | ICLR.cc/2016/workshop | 2016 | {
"note_id": [
"91ExXVyyBtkRlNvXUVZ1",
"r8l3KljEJc8wknpYt543"
],
"note_type": [
"review",
"review"
],
"note_created": [
1458248580292,
1456695880726
],
"note_signatures": [
[
"ICLR.cc/2016/workshop/paper/192/reviewer/12"
],
[
"ICLR.cc/2016/workshop/paper/192/reviewer/11"
]
],
"structured_content_str": [
"{\"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The paper proposes to perform image synthesis or reconstruction with help of a manifold that capture the image shape variations. A manifold is estimated from the data, and then synthesis is performed. Manifold models are reasonable in some image reconstruction problems, and often provide elegant solutions.\\n\\nThe ideas and results in this short paper are correct. However, the paper does not present any novelty unfortunately: the problem, the framework are not new. And the tools used for manifold learning, or for image reconstruction are classical too. \\n\\nDue to the very limited novelty, this paper is unfortunately below the threshold of acceptance for ICLR.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Review: Manifold Traversal using Density Ridges\", \"rating\": \"3: Clear rejection\", \"review\": \"This is a short paper that studies density ridges in manifolds using the framework proposed by Ozertem and Erdogmus: it preprocesses the data using PCA, identifies density ridges by following the principal eigenvector of the local manifold Hessian, and projects the data onto the ridge estimates using an approach proposed in prior work by Dollar et al. Embeddings and interpolation results are shown on the MNIST and Frey faces dataset.\\n\\nAlthough I may have misunderstood parts of the proposed approach due to the brevity of the submission (even in the short workshop format, I believe it possible to provide a bit more details), the novelty of the paper appears limited: it is a straightforward combination of prior work by Ozertem and Erdogmus and by Dollar et al. The paper presents no comparisons with prior work (neither experimental nor conceptual), which makes it difficult to gauge the contribution of the paper. In particular, it remains unclear what the goal of this line of work is. Is it to learn better feature representations from data? In that case, the study should present experiments aimed at evaluating the quality of the learned representation; the visualizations in Figure 1a, 2a, and 3a do not achieve this goal (in particular, since it is known that non-parametric techniques such as t-SNE can produce scatter-plot visualizations of much higher quality than PCA). Or is it to learn better models for image generation / interpolation? In that case, the study should develop methods to evaluate the quality of generated images, and perform comparisons with techniques that try to achieve the same (GPLVMs, mixtures of Bernoulli models, fields of experts, generative-adversarial networks, etc.).\\n\\nOverall, I believe this paper is of insufficient novelty and quality to be accepted at ICLR.\", \"minor_comment\": \"\\\"As long as the embedding space is of higher dimension than the manifold a linear method causes no harm.\\\" -> If by the dimensionality of the manifold the authors mean its intrinsic dimensionality, then this statement is incorrect. For instance, consider a manifold that is one-dimensional space-filling curve living in a 10-dimensional space. The dimensionality of the manifold is one, but a linear method needs to preserve all 10 dimensions in the data to prevent distant parts of the manifold from collapsing.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} | Workshop track - ICLR 2016
MANIFOLD TRAVERSAL USING DENSITY RIDGES
Jonas N. Myhre, Michael Kampffmeyer, & Robert Jenssen ∗
Department of Physics and Technology
University of Tromsø– The Arctic University of Norway
{jonas.n.myhre,michael.c.kampffmeyer,robert.jenssen}@uit.no
ABSTRACT
In this work we present two examples of how a manifold learning model can rep-
resent the complexity of shape variation in images. Manifold learning techniques
for image manifolds can be used to model data in sparse manifold regions. Addi-
tionally, they can be used as generative models as they can often better represent
or learn structure in the data. We propose a method of estimating the underlying
manifold using the ridges of a kernel density estimate as well as tangent space
operations that allows interpolation between images along the manifold and offers
a novel approach to analyzing the image manifold.
1 INTRODUCTION
The manifold hypothesis, where high-dimensional data is assumed to be concentrated on or near a
smooth surface of much lower dimension, is a key concept in learning how to represent data. In this
work we present two clear examples of real data sets that are concentrated around low-dimensional
manifolds. We apply recent work on estimating and working with low-dimensional manifolds by
(Ozertem & Erdogmus, 2011) and (Doll´ar et al., 2007) on these examples as a proof-of-concept. We
describe a complete workflow on the manifold, illustrated by traversal along the manifold.
2 METHOD
We use principal component analysis to reduce the dimension of the space the manifold is embedded in. As long as the embedding space is of higher dimension than the manifold, a linear method causes no harm. We assume that the dimension of the low-dimensional manifold is known. The mapping of a high dimensional image feature vector x to its low dimensional representation z becomes z = U^T x, where U is the PCA projection matrix.
Once the ambient dimension is reduced, the manifold can be estimated by a range of techniques. In this work we use the density ridge framework (Ozertem & Erdogmus, 2011), by solving differential equations as described in Shaker et al. (2014). The density ridges can be shown theoretically to be close to the underlying manifold, bounded in Hausdorff distance (Genovese et al., 2014). The ridge can be found by following the Hessian eigenvector flow ẋ = Q⊥(x) Q⊥(x)^T g(x), where Q⊥(x) is the orthogonal subspace of eigenvectors of the Hessian.
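To make the flow concrete, a minimal NumPy sketch of one step of this projected flow for a Gaussian kernel density estimate could look as follows; the use of the log-density, the bandwidth h and the step size are our assumptions, not values specified in the paper.

```python
import numpy as np

def log_kde_grad_hess(x, X, h):
    """Gradient and Hessian of the log of a Gaussian KDE at x.
    X: (n, D) points in the PCA-reduced space, h: kernel bandwidth (assumed)."""
    D = X.shape[1]
    diff = X - x                                           # (n, D)
    w = np.exp(-0.5 * np.sum(diff ** 2, axis=1) / h ** 2)
    w /= w.sum()                                           # normalised kernel weights
    g = (w[:, None] * diff).sum(axis=0) / h ** 2           # gradient of the log-density
    S = np.einsum('n,ni,nj->ij', w, diff, diff)
    H = S / h ** 4 - np.eye(D) / h ** 2 - np.outer(g, g)   # Hessian of the log-density
    return g, H

def ridge_step(x, X, h, ridge_dim, step=0.1):
    """One step of the flow x_dot = Q_perp(x) Q_perp(x)^T g(x)."""
    g, H = log_kde_grad_hess(x, X, h)
    eigval, eigvec = np.linalg.eigh(H)                     # eigenvalues in ascending order
    Q_perp = eigvec[:, :X.shape[1] - ridge_dim]            # directions orthogonal to the ridge
    return x + step * Q_perp @ (Q_perp.T @ g)
```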
Given an estimate of the underlying manifold, we can use the work presented in Section 4 of Dollár et al. (2007), based on the local tangent spaces of the manifold, to work with the manifold. Due to space limitations we choose to focus on geodesic distances. To approximate geodesics on the estimated manifold, we use a scheme that alternates between projecting points onto the ridge and gradient descent steps that shorten the distances between points on the manifold.
Finally, to approximately reconstruct the image x̃ from the low dimensional representation z, two approaches were tested. The first approach utilizes the orthogonal PCA projection matrix to reverse the linear transformation via x = Uz. Alternatively, a non-linear mapping was used, by training a small 2-layer MLP (60 hidden units) for reconstruction on parts of the data. The advantage of using the inverse PCA is that no additional training is required. Even though the MLP requires enough training data to learn the inverse mapping, our results illustrate that its reconstruction is generally better.
∗ http://site.uit.no/ml/
(a) The ridge for the MNIST dataset
(b) Projection from noisy points onto ridge
Figure 1: Density ridge estimation results for the MNIST dataset
Figure 2: The interpolation results for the MNIST dataset using MLP reconstruction
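A possible implementation of the MLP-based reconstruction described above, sketched with scikit-learn; the solver settings, the train/test split and the placeholder data shapes are our assumptions, as the paper only specifies a small 2-layer MLP with 60 hidden units.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
Z = rng.random((1000, 3))      # placeholder: low dimensional (ridge-projected) coordinates
X = rng.random((1000, 560))    # placeholder: flattened images (e.g., 20x28 Frey faces)

Z_tr, Z_te, X_tr, X_te = train_test_split(Z, X, test_size=0.2, random_state=0)
decoder = MLPRegressor(hidden_layer_sizes=(60,), activation='relu',
                       solver='adam', max_iter=2000, random_state=0)
decoder.fit(Z_tr, X_tr)                         # learn the inverse mapping z -> x
X_rec = decoder.predict(Z_te)                   # reconstructions for held-out points
print(((X_rec - X_te) ** 2).mean())             # reconstruction MSE, cf. the values reported
```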
3 EXPERIMENTAL RESULTS
We evaluate our method on two real-world datasets, namely the images of the digit one from the MNIST dataset (LeCun et al., 1998) and the Frey Faces dataset¹. These datasets are both known to have a low dimensional manifold structure in a high dimensional image space.
3.1 MNIST
Figure 1a shows the result of the dimensionality reduction for the MNIST dataset using PCA (in
blue), as well as the manifold estimation using density ridge estimation (in green). Figure 1b shows
the effect of the density ridge estimation, where the data point in blue is projected onto the estimated
manifold in red. A clear 3-dimensional manifold structure can be observed. Interpolation along the
geodesic path shown by the red curve in Figure 1a yields the results in Figure 2. The first image (top left) and the last image (bottom right) are images from the MNIST dataset, which were projected onto the estimated manifold. All images in between are interpolated images along the geodesic path and are not in the original dataset. It can be seen that during the interpolation the angle at which the one leans changes smoothly from leaning to the right to being straight.
3.2 FREY FACES
Figure 3 shows the result of the dimensionality reduction for the Frey Face dataset. As for the
MNIST dataset, we can see a low-dimensional manifold structure. The colors indicate the three
different modes that were found. Closer investigation showed that the blue points mainly correspond
to frowning faces, whereas the green and the red points correspond to smiling faces. Figure 3b
illustrates the results for the interpolation between two smiling faces along the red line in Figure 3a.
Again, the first image (top left) is the original start image, and the bottom right is the original end
image, whereas the images inbetween are interpolated. Results show that the head slowly turns from
¹Obtained with kind permission from Brandon Frey, University of Toronto.
(a) The ridge for the Frey Faces dataset
(b) Interpolation between smiling images
Figure 3: Results for interpolating between smiling images
(a) The ridge for the Frey Faces dataset
(b) Interpolation between smiling and frowning
images
Figure 4: Results for interpolating between smiling and frowning images
looking to the right to looking to the left. Note that all images appear to be realistic images. Figure
4b illustrates the results for the interpolation between the smiling and the frowning face. The path
along the ridge can be seen in Figure 4a. Again, the first and last image correspond to the original
images, whereas the images in between correspond to the interpolation along the geodesic path. The
results show a smooth interpolation from frowning to smiling.
To conclude the experiments, the mean square errors between the input images and the reconstructions show that the MLP is in fact useful: the MLP obtains an MSE of 0.0094 and 366.14, versus 0.06 and 2.587 × 10^4 for PCA, on MNIST and the Frey Faces respectively.
4 DISCUSSION AND CONCLUSION
In this work we propose that low dimensional manifold approaches can model complex shape vari-
ation. A limitation of the method is the fact that it requires the data to lie near a smooth low dimen-
sional manifold. However, we have visual examples where this assumption is valid for real-world
datasets. In both datasets a smooth interpolation was achieved and reconstruction produced realistic
images.
ACKNOWLEDGMENTS
We gratefully acknowledge the support of NVIDIA Corporation. This work was partially funded
by the Norwegian Research Council grant no. 239844 on developing the Next Generation Learning
Machines.
REFERENCES
Piotr Dollár, Vincent Rabaud, and Serge Belongie. Non-isometric manifold learning: Analysis and an algorithm. In Proceedings of the 24th international conference on Machine learning, pp. 241–248. ACM, 2007.
Christopher R Genovese, Marco Perone-Pacifico, Isabella Verdinelli, Larry Wasserman, et al. Non-
parametric ridge estimation. The Annals of Statistics, 42(4):1511–1545, 2014.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to
document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
Umut Ozertem and Deniz Erdogmus. Locally defined principal curves and surfaces. The Journal of
Machine Learning Research, 12:1249–1286, 2011.
Matineh Shaker, Jonas N Myhre, M Devrim Kaba, and Deniz Erdogmus. Invertible nonlinear clus-
ter unwrapping. In Machine Learning for Signal Processing (MLSP), 2014 IEEE International
Workshop on, pp. 1–6. IEEE, 2014.
| success |
|
p8jp5lzPWSnQVOGWfpDD | On-the-fly Network Pruning for Object Detection | [
"Marc Masana",
"Joost van de Weijer",
"Andrew D. Bagdanov"
] | Object detection with deep neural networks is often performed by passing a few thousand candidate bounding boxes through a deep neural network for each image. These bounding boxes are highly correlated since they originate from the same image. In this paper we investigate how to exploit feature occurrence at the image scale to prune the neural network which is subsequently applied to all bounding boxes. We show that removing units which have near-zero activation in the image allows us to significantly reduce the number of parameters in the network. Results on the PASCAL 2007 Object Detection Challenge demonstrate that up to 40% of units in some fully-connected layers can be entirely eliminated with little change in the detection result. | [
"network",
"image",
"units",
"object detection",
"deep neural networks",
"candidate bounding boxes",
"deep neural network",
"boxes",
"feature occurrence"
] | https://openreview.net/pdf?id=p8jp5lzPWSnQVOGWfpDD | https://openreview.net/forum?id=p8jp5lzPWSnQVOGWfpDD | ICLR.cc/2016/workshop | 2016 | {
"note_id": [
"vlpGAEnXZi7OYLG5inzJ",
"L7VQ0qN3niRNGwArs4go",
"ZY9jnY4GWu5Pk8ELfEKQ",
"k80JOkDAKsOYKX7ji4NV"
],
"note_type": [
"review",
"review",
"official_review",
"comment"
],
"note_created": [
1457161668257,
1457746036006,
1456956474346,
1458232806512
],
"note_signatures": [
[
"ICLR.cc/2016/workshop/paper/132/reviewer/10"
],
[
"~Jeff_Donahue1"
],
[
"~Christian_Szegedy1"
],
[
"~Marc_Masana_Castrillo1"
]
],
"structured_content_str": [
"{\"title\": \"Review_10: On-the-fly Network Pruning for Object Detection\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The paper presents methods to reduce the number of parameters of network for proposal based object detector (e.g., R-CNN), which can potentially accelerate the inference. The proposed method prune the network based on the network activation of each image, and then a smaller network can be applied to all different object proposals in an image. It is based on the assumption that network units with zero activation on the whole image cannot have nonzero activation on any object proposal in the image. Backward and forward pruning methods are proposed to prune the unit with zero or near zero activation. Experiments are done on the PASCAL 2007 to show that the pruning does not degrade the performance significantly.\", \"pros\": [\"Proposed methods are simple and well described.\"], \"cons\": [\"The key assumption does not have theoretical proof, or experimental support.\", \"There is no baseline comparisons in the experiments. It is not clear if a random pruning will be as effective as the proposed methods.\", \"The proposed methods are designed for detectors that evaluates each proposals independently, but it is based on the fast R-CNN, which obsoletes this routine (see the RoI pooling layer). The computation of all the convolutional layers are shared in fast R-CNN. This makes it less interesting to apply the proposed methods on the convolutional layers.\", \"Actually, only the experiments on the full connected layers are shown.\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Review of On-the-fly Network Pruning for Object Detection\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper presents two methods of reducing the number of parameters in a ReLU-based convnet based on pruning weights that result in a high proportion of inactive (activation 0) units.\", \"pros\": \"-The method is simple, well-motivated, and well-described\\n-Computation is reduced significantly while sacrificing little to no accuracy\\n-Method is applicable to any convnet with relu activations, and could be trivially generalized from fully-connected to convolutional layers\", \"cons\": \"The experiments are somewhat limited in that the pruning trick is evaluated on just two layers of one network for one problem, and the more recent detection approaches (Fast(er) R-CNN) do not have the same degree of issues with evaluating many proposals that R-CNN did, due to the ROI Pooling layer (first proposed in SPP)\\n\\nThough the evaluation is limited and addresses a problem that isn't as big of an issue now as it once was, the method is general enough to be worth readers' time for the short paper. Furthermore, my expectations for evaluation aren't as high for a workshop paper than in other venues, so I don't see the evaluation as being too much of a drawback for this work\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review of On-the-fly Network Pruning for Object Detection\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This extended abstract proposes pruning methodology for the weights of an already trained deep neural network for object detection. This particular method applies to R-CNN style object detection approach, where the same network is applied to a lot of proposals. The paper hypothesizes that if the post-classifier network yields zero values for some activations on the whole image, then the same unit will never give non-zero values when network is applied on any of the proposals. This suggests a recursive algorithm to prune the network weight matrices based on the activations of the network on the whole image. The paper presents two pruning strategies, the first one guarantees equivalent activation output on the whole image. The second one is an approximate version that might change the output of the network. The various pruning methodologies are then evaluated on the VOC detection benchmark and demonstrated to be able to prune up to 60% of the weight matrices without effecting the overall quality of the output on the proposals significantly.\", \"the_positive\": [\"The idea is sound and is relatively easy to implement for the R-CNN setup.\"], \"the_negative\": [\"The idea is based on an assumption that is not justified theoretically. The practical evidence for the activation is not presented in the abstract, but assumed silently.\", \"The traditional R-CNN method performs poorly already on the small objects. The expected failure mode of this method is also on the small objects, so the comparison graphs do not have the potenetial to measure this failure mode easily.\", \"The traditional R-CNN method of applying the post-classifier in separation has been obsoleted by applying SPP in the Faster R-CNN setup. The gains theoretically achievable by this algorithm are not very relevant in the big picture since SPP pools features from the globally applied network activations anyways.\", \"The idea is very specific to a special type of (already obsolete) detection procedure and is not likely to generalize to settings other than this.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response to reviews\", \"comment\": \"Thanks everyone for your reviews.\\n\\nThe reviewers are right to point out that much of the computation is shared in the recent Faster-RCNN proposal (our detector follows exactly this Fast RCNN architecture). However, the fully connected layers (fc6, fc7 and fc8) must still be evaluated for all bounding box proposals, and these are the layers (fc6 and fc8) for which we show results. Even if their computational load in the original network is less than that of the convolutional layers, the fact that their evaluation must be repeated for each bounding box makes reduction of their computation very relevant. For small problems, fc8 reduction is irrelevant, but becomes relevant for problems with very many class problems.\\n\\nAlso in more modern architectures like Deep Residual Learning (He et al. arXiv 2015) where the fully connected layers are replaced by convolutional layers, the idea of our proposal could be applied. In the Deep Residual Network paper the first 91 convolutional layers are shared, but for every bounding box proposal the 9 remaining fully convolutional layers (conv5-) are computed. On these layers a pruning technique similar to the one we propose could be applied to prune filters resulting in feature maps with insignificant response (near zero).\"}"
]
} | Workshop track - ICLR 2016
ON-THE-FLY NETWORK PRUNING
FOR OBJECT DETECTION
Marc Masana, Joost van de Weijer & Andrew D. Bagdanov
Computer Vision Centre
Universitat Autònoma de Barcelona
Barcelona, 08193, Spain
{mmasana,joost,bagdanov}@cvc.uab.cat
ABSTRACT
Object detection with deep neural networks is often performed by passing a few
thousand candidate bounding boxes through a deep neural network for each image.
These bounding boxes are highly correlated since they originate from the same
image. In this paper we investigate how to exploit feature occurrence at the image
scale to prune the neural network which is subsequently applied to all bounding
boxes. We show that removing units which have near-zero activation in the image
allows us to significantly reduce the number of parameters in the network. Results
on the PASCAL 2007 Object Detection Challenge demonstrate that up to 40% of
units in some fully-connected layers can be entirely eliminated with little change
in the detection result.
1 INTRODUCTION
Deep neural networks are often trained for recognition problems over very many labels. This is
partially to ensure wide applicability of the network and partially because networks are known to
benefit from multi-label data (additional training examples from one class can increase performance
of another class because they share features among several layers). At testing time, however, one
might want to apply the neural network to a collection of examples which are highly correlated.
They only contain a limited subset of the original labels and consequently will result in sparse node
activations in the network. In these cases, application of the full neural network to the whole col-
lection results in a considerable amount of wasted computation. In this paper we describe a method
for pruning of neural networks based on analysis of internal unit activations with the objective of
constructing more efficient networks.
In computer vision many problems have the structure described above. We briefly mention two
here. Imagine you want to classify the semantic content in each frame (an example) of a video
(the collection). A fast assessment of the video might reveal that it is an indoor birthday party.
This knowledge might exclude many of the nodes in the neural network – those which correspond to
’snow’, ’leopards’, and ’rivers’, for example, will be unlikely to be needed in any of the thousands of
frames in this video. Another example is object detection, where we extract thousands of bounding
boxes (examples) from a single image (the collection) with the aim of locating all semantic objects
in the image. Given an assessment of the image, we have knowledge of the node activations for the
entire collection, and based on this we can propose a smaller network which is subsequently applied
to the thousands of bounding boxes. Here we will only consider the latter example in more detail.
Reducing the size and complexity of neural networks (or network compression) enjoys a long his-
tory in the learning community. The authors of Bucila et al. (2006) train a simpler neural network
to mimic the output of a complex one, and in Ba & Caruana (2014) the authors compress deep and
wide (i.e. with many feature maps) networks to shallow but wider ones. The technique of Knowl-
edge Distillation was introduced in Hinton et al. (2015) as a model compression framework. The
framework compresses an ensemble of deep networks (teacher) into a student network of similar
depth. More recently, the FitNets approach leverages the Knowledge Distillation framework to ex-
ploit depth and train student networks that are thin but remain deep (Romero et al. (2014)). Another
network compression strategy was proposed in Girshick (2015); Xue et al. (2013) that uses singular
value decomposition to reduce the rank of weight matrices in fully connected layers in order to
improve efficiency.
In this paper we are not interested in mimicking the operation of a deep neural network over all
examples and all classes (as in the student-teacher compression paradigm common in the literature).
Rather, our approach is to make a quick assessment of image content and then, based on analysis of
unit activations on the entire image, to modify the network to use only those units likely to contribute
to correct classification of the labels of interest when applied to each candidate bounding box.
2 FORWARD AND BACKWARD UNIT PRUNING FOR OBJECT DETECTION
Consider the original neural network f(x; θ), where θ are the network parameters. We wish to
compute a network defined by parameters θ∗ for which:

$$f(x;\,\theta^{*}) \approx f(x;\,\theta) \quad \forall x \in C \qquad (1)$$

where |θ∗| < |θ| (i.e. the number of parameters in θ∗ is considerably lower than in the original
network). In the case of object detection we will use the unit activations of the entire image to prune
the network which will be applied to all the bounding box proposals. This is based on the observation
that for some layers, nodes with zero activations on the whole image cannot have nonzero activations
on any bounding box in the image.
The hidden layer activation of a fully connected layer k can be written as:

$$h_k(x) = \mathrm{relu}\big(b_k + W_k\, h_{k-1}(x)\big) \qquad (2)$$

where b_k and W_k are the biases and weights of the k-th layer, and relu(·) indicates the rectified
linear activation function. We first consider how knowledge of the absence of node activations in the
image can be translated into a network with fewer parameters. We consider two cases: backward and
forward unit pruning, as illustrated in Fig. 1.
Figure 1: Example of backward and forward unit pruning. We use ‖·‖ to indicate the relu(·)
activation function. Based on knowledge that some unit activations h_k(x) are zero (indicated in
green), we can reduce the parameters of W_k, W_{k+1} and b_k (indicated in red).
Backward unit pruning: Without loss of generality, we order the activations in layer h_k so that
the q non-active, zero nodes are at the end of the vector h_k. Then we can write:
$$\big[\, h_k(x)_{1:(n-q)}\, ;\, 0_{q,1} \big] = \mathrm{relu}\Big( \big[\, (W_k)_{1:(n-q),\,1:m}\, ;\, 0_{q,m} \big]\, h_{k-1}(x) + \big[\, (b_k)_{1:(n-q)}\, ;\, 0_{q,1} \big] \Big) \qquad (3)$$
where we use 0_{m,n} to indicate the zero matrix of dimension m by n, and subscripts indicate a
selection of indices from the original vector or matrix. We use [., .] for horizontal and [.; .] for
vertical concatenation (following Matlab convention). Eq. 3 shows that backward unit pruning
allows us to remove from W_k and b_k as many rows as there are zeros in h_k – without changing
the output of the network.
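To make the row removal concrete, the following is a minimal NumPy sketch of backward unit pruning (it is not taken from the paper; the layer sizes, the threshold, and the whole-image activation values are placeholders):

```python
import numpy as np

def backward_prune(W_k, b_k, h_k_image, threshold=0.0):
    """Keep only the rows of W_k and entries of b_k whose units are active
    (above `threshold`) on the whole-image activation h_k_image, as in Eq. 3."""
    keep = np.flatnonzero(np.abs(h_k_image) > threshold)
    return W_k[keep, :], b_k[keep], keep

# Toy layer: n = 4 units in layer k, m = 3 units in layer k-1 (placeholder values).
rng = np.random.default_rng(0)
W_k, b_k = rng.standard_normal((4, 3)), rng.standard_normal(4)
h_k_image = np.array([0.7, 0.0, 1.2, 0.0])  # units 1 and 3 are inactive on the whole image

W_k_small, b_k_small, keep = backward_prune(W_k, b_k, h_k_image)
# Under the paper's assumption that units inactive on the whole image stay inactive on
# every bounding box, relu(b_k_small + W_k_small @ h_prev) equals the original
# relu(b_k + W_k @ h_prev) restricted to the kept units.
```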
Forward unit pruning: Here we look at how the zeros in the activation h_k can be exploited to
remove parameters from the following layer. The activation in layer k + 1 can be written as:
$$h_{k+1}(x) = \mathrm{relu}\Big( \big[\, (W_{k+1})_{1:p,\,1:(n-q)}\, ,\, 0_{p,q} \big]\, \big[\, h_k(x)_{1:(n-q)}\, ;\, 0_{q,1} \big] + b_{k+1} \Big) \qquad (4)$$
In this case, the zeros in h_k result in the removal of columns from W_{k+1}. These columns can be
removed without changing the output of the network.
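A matching sketch for forward unit pruning, which instead drops columns of the next layer's weight matrix (again only an illustration with placeholder sizes):

```python
import numpy as np

def forward_prune(W_next, h_k_image, threshold=0.0):
    """Keep only the columns of W_{k+1} that multiply layer-k units which are
    active (above `threshold`) on the whole image, as in Eq. 4."""
    keep = np.flatnonzero(np.abs(h_k_image) > threshold)
    return W_next[:, keep], keep

# Toy layer: p = 2 units in layer k+1, n = 4 units in layer k (placeholder values).
rng = np.random.default_rng(1)
W_next, b_next = rng.standard_normal((2, 4)), rng.standard_normal(2)
h_k_image = np.array([0.7, 0.0, 1.2, 0.0])

W_next_small, keep = forward_prune(W_next, h_k_image)
# If a box's layer-k activation h_k_box really is zero outside `keep`, then
# relu(W_next_small @ h_k_box[keep] + b_next) reproduces relu(W_next @ h_k_box + b_next).
```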
In practice there might be only a few exactly-zero activations in the image, and therefore we consider
all node activations which are below a certain threshold to be zero.¹ This allows us to further increase
the parameter reduction of the network f(x; θ∗), but at the cost of slight deviations from the original
network f(x; θ). We also note that, although the notation above is written for fully-connected layers
for simplicity, our proposal is applicable to convolutional layers as well.

¹ In case the activation function is not the ReLU, one should consider the absolute value of the activation
function being smaller than the threshold.
3 RESULTS AND CONCLUSIONS
We evaluate our proposed methods on the PASCAL VOC 2007 dataset (Everingham et al. (2010))
with the Fast R-CNN framework by Girshick (2015). VOC 2007 has a total of 24,640 annotated
objects for training, with an average of 2.5 objects per image, and an average of 2.4 objects per
image in the test set. The Fast R-CNN framework suits our purposes since it first passes the whole
image through all the convolutional layers and only then uses the extracted feature maps for the
bounding boxes we want to evaluate (usually 1,000+ boxes). The network used is a modification of
the VGG16 network (Simonyan & Zisserman (2014)).
Forward pruning. Our first experiment uses forward unit pruning on the pool5 layer of the
VGG16 network to reduce the number of parameters of the fc6 layer. This is the layer with the
highest percentage of parameters (38.7% of the parameters in the network). The pool5 layer has 512 × 7 × 7
outputs, where the first dimension represents the feature maps, and the second and third dimensions
are spatial dimensions (smaller than the original image size because of the resizing at each pooling
layer). In order to decide which activations to prune, we first pass the whole image through the
network and observe the activations at each unit in pool5. We sum over the spatial dimensions and
apply a threshold to select units to prune from the network before applying it to all bounding boxes.
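A hedged sketch of this selection step follows; the activation tensor, the threshold choice, and the assumption that the flattened pool5 is ordered channel-first (so that each feature map owns a contiguous block of 49 fc6 columns) are illustrative, not the authors' released code:

```python
import numpy as np

# Whole-image pool5 output: C feature maps of size 7x7 (C = 512 in VGG16; a small
# placeholder is used here so the sketch runs instantly).
C, S = 8, 7
rng = np.random.default_rng(2)
pool5_image = np.maximum(rng.standard_normal((C, S, S)), 0.0)

# Sum each feature map over its spatial dimensions and threshold the result.
per_map = pool5_image.sum(axis=(1, 2))        # shape (C,)
threshold = np.percentile(per_map, 30)        # e.g. prune the weakest ~30% of maps
inactive = np.flatnonzero(per_map <= threshold)

# fc6 consumes the flattened pool5 (C*7*7 inputs); with channel-first flattening,
# each inactive map corresponds to a block of 49 columns of the fc6 weight matrix.
W_fc6 = rng.standard_normal((16, C * S * S))  # placeholder fc6 weights (4096 rows in VGG16)
drop_cols = (inactive[:, None] * (S * S) + np.arange(S * S)[None, :]).ravel()
W_fc6_small = np.delete(W_fc6, drop_cols, axis=1)
```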
Results show an initial minor improvement in the performance of the framework when removing
parameters (see Fig. 2); preventing very low-value activations from propagating through the network
could be the cause of this small difference in performance. Then, for reductions of 25–40% of the
parameters of layer fc6, we obtain a mAP loss of less than 1. From that point on, further removal
of parameters leads to a higher loss, because the activations removed start to be too relevant for the
network’s discriminative power.
Figure 2: Performance loss as a function of parameter reduction.
Backward pruning. The second experiment applies backward unit pruning to the fc8 layer to
reduce the number of parameters of the weight and bias matrices used to compute the network
outputs. In this case, we use an image classifier (based on VGG16 deep features) to decide which
classes (activations) are most likely to appear in the original image. Based on that classification, we
adopt a top-N strategy: we keep the N classes with the highest probability from the image classifier
and remove the rest. This reduction affects the weight and bias matrices of fc8, whose pruned outputs
no longer propagate into the following layers (the softmax in this case). Results keeping 6 or more
classes (reductions of 0–70%) show a mAP loss of less than 1. However, performance starts dropping
beyond that point because images may contain more classes than are kept. It should be noted that
only a small percentage of the total parameters of the network are in fc8; however, when considering
object detection with thousands of classes, the relevance of this layer becomes comparable to fc6.
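As an illustration only (the class count, background handling, and the image classifier interface are assumptions rather than details from the paper), the top-N selection and the corresponding backward pruning of fc8 could look like this:

```python
import numpy as np

# Image-level class probabilities from a separate VGG16-based classifier
# (20 VOC classes + background in Fast R-CNN; placeholder values here).
rng = np.random.default_rng(3)
image_probs = rng.random(21)

N = 6
keep_classes = np.sort(np.argsort(image_probs)[-N:])  # indices of the N most likely classes

# fc8 maps 4096 fc7 features to one score per class; backward pruning keeps only those
# rows, and the detector's softmax is then taken over the kept classes only.
W_fc8 = rng.standard_normal((21, 4096))               # placeholder fc8 weights
b_fc8 = rng.standard_normal(21)
W_fc8_small, b_fc8_small = W_fc8[keep_classes, :], b_fc8[keep_classes]
```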
Conclusions. We have presented a method to prune units in neural networks for object detection
through analysis of unit activations on the entire image. We show that for some layers up to 40% of
the parameters can be removed with minimal impact on performance. We are interested in combining
our method with other parameter reduction methods such as Xue et al. (2013). Applying our method
to other types of layers (e.g. convolutional) and evaluating on datasets with very many labels are also
promising research directions. In addition, we are interested in applying our method to semantic
segmentation where, similarly to our problem, a redundant network is applied to every pixel.
REFERENCES
Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In Advances in neural informa-
tion processing systems (NIPS), pp. 2654–2662, 2014.
C Bucila, R Caruana, and A Niculescu-Mizil. Model compression: Making big, slow models practi-
cal. In Proc. of the 12th International Conf. on Knowledge Discovery and Data Mining (KDD06),
2006.
Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman.
The pascal visual object classes (voc) challenge. International journal of computer vision, 88(2):
303–338, 2010.
Ross Girshick. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer
Vision, pp. 1440–1448, 2015.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv
preprint arXiv:1503.02531, 2015.
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and
Yoshua Bengio. Fitnets: Hints for thin deep nets. CoRR, abs/1412.6550, 2014. URL http:
//arxiv.org/abs/1412.6550.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image
recognition. arXiv preprint arXiv:1409.1556, 2014.
Jian Xue, Jinyu Li, and Yifan Gong. Restructuring of deep neural network acoustic models with
singular value decomposition. In INTERSPEECH, pp. 2365–2369, 2013.
| success |
|
r8lrv9B0zu8wknpYt57Y | Data Cleaning by Deep Dictionary Learning | [
"Zhongqi Lu",
"Qiang Yang"
] | "The soundness of training data is important to the performance of a learning model. However in reco(...TRUNCATED) | ["training data","users","dictionary","deep dictionary","recommender systems","feedback","noise","da(...TRUNCATED) | https://openreview.net/pdf?id=r8lrv9B0zu8wknpYt57Y | https://openreview.net/forum?id=r8lrv9B0zu8wknpYt57Y | ICLR.cc/2016/workshop | 2016 | {"note_id":["E8VzjZ9o9S31v0m2iDp3","mO9D7v9wwij1gPZ3UlB6","5QzB3xwlnHZgXpo7i323"],"note_type":["revi(...TRUNCATED) | "Workshop track - ICLR 2016\n\nDATA CLEANING BY DEEP DICTIONARY LEARNING\n\nZhongqi Lu & Qiang Yang\(...TRUNCATED) | success |
spiralworks/openreview-2025-iclr-final-v3
This dataset contains OpenReview papers with structured reviews. Each batch is stored as a separate parquet file.
Usage

```python
from datasets import load_dataset
import pandas as pd

# Load specific batch(es)
ds = load_dataset("spiralworks/openreview-2025-iclr-final-v3", data_files="data/train-00000.parquet")

# OR load all batches
ds = load_dataset("spiralworks/openreview-2025-iclr-final-v3", data_files="data/*.parquet")

# Convert to pandas for easy viewing
df = ds["train"].to_pandas()
```
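For example, the structured reviews of a single row can be unpacked roughly as follows; the field names are taken from the preview above and may differ slightly once loaded from parquet:

```python
import json

row = df.iloc[0]          # one paper with its metadata, reviews and markdown
reviews = row["reviews"]  # parallel lists: note_id, note_type, structured_content_str, ...
for note_type, content in zip(reviews["note_type"], reviews["structured_content_str"]):
    note = json.loads(content)  # each entry is a JSON-encoded review or comment
    print(note_type, "|", note.get("title"), "| rating:", note.get("rating"))
```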