Dataset columns (type and observed range):

title: string (length 2 to 153 characters)
abstract: string (length 0 to 2.15k characters)
review: string (length 97 to 8.43k characters)
source: string (4 distinct values)
review_src: string (1 distinct value)
thinking_trace: string (1 distinct value)
conversations: list (length always 2)
review_sentences: sequence (length 1 to 61)
criteria_count: dict
criticism: float64 (0 to 1)
example: float64 (0 to 1)
importance_and_relevance: float64 (0 to 1)
materials_and_methods: float64 (0 to 1)
praise: float64 (0 to 1)
presentation_and_reporting: float64 (0 to 1)
results_and_discussion: float64 (0 to 1)
suggestion_and_solution: float64 (0 to 1)
total: int64 (1 to 61)
has_criticism: int64 (0 or 1)
has_example: int64 (0 or 1)
has_importance_and_relevance: int64 (0 or 1)
has_materials_and_methods: int64 (0 or 1)
has_praise: int64 (0 or 1)
has_presentation_and_reporting: int64 (0 or 1)
has_results_and_discussion: int64 (0 or 1)
has_suggestion_and_solution: int64 (0 or 1)
reward_value: float64 (2.13 to 6)
reward_value_length_adjusted: float64 (-34.13 to 3.73)
length_penalty: float64 (0 to 36.4)
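For working with these columns programmatically, a minimal loading sketch follows. The preview does not give the dataset's repository id or split names, so the identifier and split below are placeholders; only the column names are taken from the schema above.

```python
# Minimal sketch for inspecting the columns described above.
# NOTE: "your-org/peer-review-rewards" and the "train" split are placeholders;
# substitute the actual repository id (or local data files) for this dataset.
from datasets import load_dataset

ds = load_dataset("your-org/peer-review-rewards", split="train")  # hypothetical id

row = ds[0]
print(row["title"])                        # paper title
print(row["source"], row["review_src"])    # e.g. "iclr2020", "OpenReview"
print(len(row["conversations"]))           # always 2: a user prompt and an assistant reply
print(row["criteria_count"])               # per-category sentence counts plus "total"
print(row["reward_value"], row["length_penalty"], row["reward_value_length_adjusted"])
```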
Neural Symbolic Reader: Scalable Integration of Distributed and Symbolic Representations for Reading Comprehension
Integrating distributed representations with symbolic operations is essential for reading comprehension requiring complex reasoning, such as counting, sorting and arithmetics, but most existing approaches are hard to scale to more domains or more complex reasoning. In this work, we propose the Neural Symbolic Reader (NeRd), which includes a reader, e.g., BERT, to encode the passage and question, and a programmer, e.g., LSTM, to generate a program that is executed to produce the answer. Compared to previous works, NeRd is more scalable in two aspects: (1) domain-agnostic, i.e., the same neural architecture works for different domains; (2) compositional, i.e., when needed, complex programs can be generated by recursively applying the predefined operators, which become executable and interpretable representations for more complex reasoning. Furthermore, to overcome the challenge of training NeRd with weak supervision, we apply data augmentation techniques and hard Expectation-Maximization (EM) with thresholding. On DROP, a challenging reading comprehension dataset that requires discrete reasoning, NeRd achieves 2.5%/1.8% absolute improvement over the state-of-the-art on EM/F1 metrics. With the same architecture, NeRd significantly outperforms the baselines on MathQA, a math problem benchmark that requires multiple steps of reasoning, by 25.5% absolute increment on accuracy when trained on all the annotated programs. More importantly, NeRd still beats the baselines even when only 20% of the program annotations are given.
This paper discusses an extended DSL language for answering complex questions from text and adding data augmentation as well as weak supervision for training an encoder/decoder model where the encoder is a language model and decoder a program synthesis machine generating instructions using the DSL. They show interesting results on two datasets requiring symbolic reasoning for answering the questions. Overall, I like the paper and I think it contains simple extensions to previous methods referenced in the paper enabling them to work well on these datasets. A few comments: 1 - Would be interesting to see the performance of the weak supervision on these datasets. In other words, if the heuristics are designed to provide noisy instruction sets for training, we need to see the performance of those on these datasets to determine if the models are generalising beyond those heuristics or they perform at the same level which then may mean we don*t need the model. 2 - From Tab 4, it seems the largest improvements are due to span-selection cases as a result of adding span operators to the DSL. A deep dive on this would be a great insight (in addition to performance improvement statements on page 7). 3 - Since the span and value operators require indices to the text location, could you please clarify in the text how that is done? Do LSTMs output the indices or are you selecting from a preselection spans as part of preprocessing?
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNeural Symbolic Reader: Scalable Integration of Distributed and Symbolic Representations for Reading Comprehension\nIntegrating distributed representations with symbolic operations is essential for reading comprehension requiring complex reasoning, such as counting, sorting and arithmetics, but most existing approaches are hard to scale to more domains or more complex reasoning. In this work, we propose the Neural Symbolic Reader (NeRd), which includes a reader, e.g., BERT, to encode the passage and question, and a programmer, e.g., LSTM, to generate a program that is executed to produce the answer. Compared to previous works, NeRd is more scalable in two aspects: (1) domain-agnostic, i.e., the same neural architecture works for different domains; (2) compositional, i.e., when needed, complex programs can be generated by recursively applying the predefined operators, which become executable and interpretable representations for more complex reasoning. Furthermore, to overcome the challenge of training NeRd with weak supervision, we apply data augmentation techniques and hard Expectation-Maximization (EM) with thresholding. On DROP, a challenging reading comprehension dataset that requires discrete reasoning, NeRd achieves 2.5%/1.8% absolute improvement over the state-of-the-art on EM/F1 metrics. With the same architecture, NeRd significantly outperforms the baselines on MathQA, a math problem benchmark that requires multiple steps of reasoning, by 25.5% absolute increment on accuracy when trained on all the annotated programs. More importantly, NeRd still beats the baselines even when only 20% of the program annotations are given.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper discusses an extended DSL language for answering complex questions from text and adding data augmentation as well as weak supervision for training an encoder/decoder model where the encoder is a language model and decoder a program synthesis machine generating instructions using the DSL. 
They show interesting results on two datasets requiring symbolic reasoning for answering the questions. Overall, I like the paper and I think it contains simple extensions to previous methods referenced in the paper enabling them to work well on these datasets. A few comments: 1 - Would be interesting to see the performance of the weak supervision on these datasets. In other words, if the heuristics are designed to provide noisy instruction sets for training, we need to see the performance of those on these datasets to determine if the models are generalising beyond those heuristics or they perform at the same level which then may mean we don*t need the model. 2 - From Tab 4, it seems the largest improvements are due to span-selection cases as a result of adding span operators to the DSL. A deep dive on this would be a great insight (in addition to performance improvement statements on page 7). 3 - Since the span and value operators require indices to the text location, could you please clarify in the text how that is done? Do LSTMs output the indices or are you selecting from a preselection spans as part of preprocessing?", "role": "assistant" } ]
[ "This paper discusses an extended DSL language for answering complex questions from text and adding data augmentation as well as weak supervision for training an encoder/decoder model where the encoder is a language model and decoder a program synthesis machine generating instructions using the DSL.", "They show interesting results on two datasets requiring symbolic reasoning for answering the questions.", "Overall, I like the paper and I think it contains simple extensions to previous methods referenced in the paper enabling them to work well on these datasets.", "A few comments: 1 - Would be interesting to see the performance of the weak supervision on these datasets.", "In other words, if the heuristics are designed to provide noisy instruction sets for training, we need to see the performance of those on these datasets to determine if the models are generalising beyond those heuristics or they perform at the same level which then may mean we don*t need the model.", "2 - From Tab 4, it seems the largest improvements are due to span-selection cases as a result of adding span operators to the DSL.", "A deep dive on this would be a great insight (in addition to performance improvement statements on page 7).", "3 - Since the span and value operators require indices to the text location, could you please clarify in the text how that is done?", "Do LSTMs output the indices or are you selecting from a preselection spans as part of preprocessing?" ]
{ "criticism": 0, "example": 2, "importance_and_relevance": 0, "materials_and_methods": 8, "praise": 4, "presentation_and_reporting": 4, "results_and_discussion": 1, "suggestion_and_solution": 4, "total": 9 }
criticism: 0, example: 0.222222, importance_and_relevance: 0, materials_and_methods: 0.888889, praise: 0.444444, presentation_and_reporting: 0.444444, results_and_discussion: 0.111111, suggestion_and_solution: 0.444444, total: 9
has_criticism: 0, has_example: 1, has_importance_and_relevance: 0, has_materials_and_methods: 1, has_praise: 1, has_presentation_and_reporting: 1, has_results_and_discussion: 1, has_suggestion_and_solution: 1
reward_value: 2.555556, reward_value_length_adjusted: 2.303037, length_penalty: 0.252519
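Across the preview rows, the standalone per-category float columns match criteria_count divided by total (for this first row, example = 2/9 ≈ 0.222222), reward_value matches the sum of those eight fractions (23/9 ≈ 2.555556), and reward_value_length_adjusted matches reward_value minus length_penalty (2.555556 - 0.252519 ≈ 2.303037). These relationships are inferred from the displayed rows rather than stated by the dataset itself; the short sketch below reproduces the first row's numbers under that assumption.

```python
# Sketch: reproduce the derived columns of the first preview row from its
# criteria_count dict. The formulas here are inferred from the preview rows,
# not taken from official dataset documentation.
counts = {
    "criticism": 0, "example": 2, "importance_and_relevance": 0,
    "materials_and_methods": 8, "praise": 4, "presentation_and_reporting": 4,
    "results_and_discussion": 1, "suggestion_and_solution": 4, "total": 9,
}

total = counts["total"]
fractions = {k: v / total for k, v in counts.items() if k != "total"}

reward_value = sum(fractions.values())           # 23/9, about 2.555556
length_penalty = 0.252519                        # value shown in the row above
reward_adjusted = reward_value - length_penalty  # about 2.303037

print({k: round(v, 6) for k, v in fractions.items()})
print(round(reward_value, 6), round(reward_adjusted, 6))
```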
Real or Not Real, that is the Question
While generative adversarial networks (GAN) have been widely adopted in various topics, in this paper we generalize the standard GAN to a new perspective by treating realness as a random variable that can be estimated from multiple angles. In this generalized framework, referred to as RealnessGAN, the discriminator outputs a distribution as the measure of realness. While RealnessGAN shares similar theoretical guarantees with the standard GAN, it provides more insights on adversarial learning. More importantly, compared to multiple baselines, RealnessGAN provides stronger guidance for the generator, achieving improvements on both synthetic and real-world datasets. Moreover, it enables the basic DCGAN architecture to generate realistic images at 1024*1024 resolution when trained from scratch.
The paper proposes to improve upon GANs by considering to infer the distribution of realness instead of binary true/false labels in the discriminator side. They carry out this idea with some theoretical arguments, and their method is shown to perform very well in empirical experiments on one synthetic dataset, and three real world datasets:CelebA, CIFAR-10, and FFHQ. Overall I feel this is a well presented paper with a simple yet interesting idea and solid results. The authors are encouraged to share their code and results to public.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nReal or Not Real, that is the Question\nWhile generative adversarial networks (GAN) have been widely adopted in various topics, in this paper we generalize the standard GAN to a new perspective by treating realness as a random variable that can be estimated from multiple angles. In this generalized framework, referred to as RealnessGAN, the discriminator outputs a distribution as the measure of realness. While RealnessGAN shares similar theoretical guarantees with the standard GAN, it provides more insights on adversarial learning. More importantly, compared to multiple baselines, RealnessGAN provides stronger guidance for the generator, achieving improvements on both synthetic and real-world datasets. Moreover, it enables the basic DCGAN architecture to generate realistic images at 1024*1024 resolution when trained from scratch.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes to improve upon GANs by considering to infer the distribution of realness instead of binary true/false labels in the discriminator side. They carry out this idea with some theoretical arguments, and their method is shown to perform very well in empirical experiments on one synthetic dataset, and three real world datasets:CelebA, CIFAR-10, and FFHQ. Overall I feel this is a well presented paper with a simple yet interesting idea and solid results. The authors are encouraged to share their code and results to public.", "role": "assistant" } ]
[ "The paper proposes to improve upon GANs by considering to infer the distribution of realness instead of binary true/false labels in the discriminator side.", "They carry out this idea with some theoretical arguments, and their method is shown to perform very well in empirical experiments on one synthetic dataset, and three real world datasets:CelebA, CIFAR-10, and FFHQ.", "Overall I feel this is a well presented paper with a simple yet interesting idea and solid results.", "The authors are encouraged to share their code and results to public." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 2, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 2, "suggestion_and_solution": 1, "total": 4 }
criticism: 0, example: 0, importance_and_relevance: 0.25, materials_and_methods: 0.5, praise: 0.5, presentation_and_reporting: 0.25, results_and_discussion: 0.5, suggestion_and_solution: 0.25, total: 4
has_criticism: 0, has_example: 0, has_importance_and_relevance: 1, has_materials_and_methods: 1, has_praise: 1, has_presentation_and_reporting: 1, has_results_and_discussion: 1, has_suggestion_and_solution: 1
reward_value: 2.25, reward_value_length_adjusted: 0.971623, length_penalty: 1.278377
A Probabilistic Formulation of Unsupervised Text Style Transfer
We present a deep generative model for unsupervised text style transfer that unifies previously proposed non-generative techniques. Our probabilistic approach models non-parallel data from two domains as a partially observed parallel corpus. By hypothesizing a parallel latent sequence that generates each observed sequence, our model learns to transform sequences from one domain to another in a completely unsupervised fashion. In contrast with traditional generative sequence models (e.g. the HMM), our model makes few assumptions about the data it generates: it uses a recurrent language model as a prior and an encoder-decoder as a transduction distribution. While computation of marginal data likelihood is intractable in this model class, we show that amortized variational inference admits a practical surrogate. Further, by drawing connections between our variational objective and other recent unsupervised style transfer and machine translation techniques, we show how our probabilistic view can unify some known non-generative objectives such as backtranslation and adversarial loss. Finally, we demonstrate the effectiveness of our method on a wide range of unsupervised style transfer tasks, including sentiment transfer, formality transfer, word decipherment, author imitation, and related language translation. Across all style transfer tasks, our approach yields substantial gains over state-of-the-art non-generative baselines, including the state-of-the-art unsupervised machine translation techniques that our approach generalizes. Further, we conduct experiments on a standard unsupervised machine translation task and find that our unified approach matches the current state-of-the-art.
The main contribution of this paper is a principled probabilistic framework of unsupervised sequence to sequence transfer (text to text in particular). However, I believe there is a large disconnect between the probabilistic formulation written it section 3 and whats actually happening experimentally in section 5. It is not clear whether the model is *actually* optimizing an ELBO because the gradients from sequence reconstruction loss are not backpropogated to the inference network as explained in paragraph on Approximating Gradients of ELBO. Moreover this restriction makes the authors method almost the same as the one used for unsupervised neural machine translation by Lample et al 2017 and Artetxe et al 2017. I would like to see a more detailed analysis from authors on how far the performance of Gumbel-softmax estimator and REINFORCE estimator is from simple stop-gradient estimator used in experiments. In terms of experimental setup I like that the authors considered a large suite of experiments across various tasks. Although the evaluation metrics on text style transfer tasks like sentiment transfer, formality transfer, author imitation are in line with previous work ideally the human evaluation needs to be done to truly see how well each method performs. On unsupervised machine translation, authors show a large improvement on Serbian-Bostian translation. I am a bit skeptical since as I wrote above the proposed method is very similar to previously proposed unsupervised neural machine translation approaches and it is not clear why we are seeking such a large gain of 5 BLEU points. Overall I think it is a well written paper with a large experimental suite, although I am skeptical of actual connection between probabilistic formulation and whats actually happening in practice. ================================================ Update: I have raised the score from 3 to 6.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nA Probabilistic Formulation of Unsupervised Text Style Transfer\nWe present a deep generative model for unsupervised text style transfer that unifies previously proposed non-generative techniques. Our probabilistic approach models non-parallel data from two domains as a partially observed parallel corpus. By hypothesizing a parallel latent sequence that generates each observed sequence, our model learns to transform sequences from one domain to another in a completely unsupervised fashion. In contrast with traditional generative sequence models (e.g. the HMM), our model makes few assumptions about the data it generates: it uses a recurrent language model as a prior and an encoder-decoder as a transduction distribution. While computation of marginal data likelihood is intractable in this model class, we show that amortized variational inference admits a practical surrogate. Further, by drawing connections between our variational objective and other recent unsupervised style transfer and machine translation techniques, we show how our probabilistic view can unify some known non-generative objectives such as backtranslation and adversarial loss. Finally, we demonstrate the effectiveness of our method on a wide range of unsupervised style transfer tasks, including sentiment transfer, formality transfer, word decipherment, author imitation, and related language translation. Across all style transfer tasks, our approach yields substantial gains over state-of-the-art non-generative baselines, including the state-of-the-art unsupervised machine translation techniques that our approach generalizes. Further, we conduct experiments on a standard unsupervised machine translation task and find that our unified approach matches the current state-of-the-art.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe main contribution of this paper is a principled probabilistic framework of unsupervised sequence to sequence transfer (text to text in particular). 
However, I believe there is a large disconnect between the probabilistic formulation written it section 3 and whats actually happening experimentally in section 5. It is not clear whether the model is *actually* optimizing an ELBO because the gradients from sequence reconstruction loss are not backpropogated to the inference network as explained in paragraph on Approximating Gradients of ELBO. Moreover this restriction makes the authors method almost the same as the one used for unsupervised neural machine translation by Lample et al 2017 and Artetxe et al 2017. I would like to see a more detailed analysis from authors on how far the performance of Gumbel-softmax estimator and REINFORCE estimator is from simple stop-gradient estimator used in experiments. In terms of experimental setup I like that the authors considered a large suite of experiments across various tasks. Although the evaluation metrics on text style transfer tasks like sentiment transfer, formality transfer, author imitation are in line with previous work ideally the human evaluation needs to be done to truly see how well each method performs. On unsupervised machine translation, authors show a large improvement on Serbian-Bostian translation. I am a bit skeptical since as I wrote above the proposed method is very similar to previously proposed unsupervised neural machine translation approaches and it is not clear why we are seeking such a large gain of 5 BLEU points. Overall I think it is a well written paper with a large experimental suite, although I am skeptical of actual connection between probabilistic formulation and whats actually happening in practice. ================================================ Update: I have raised the score from 3 to 6.", "role": "assistant" } ]
[ "The main contribution of this paper is a principled probabilistic framework of unsupervised sequence to sequence transfer (text to text in particular).", "However, I believe there is a large disconnect between the probabilistic formulation written it section 3 and whats actually happening experimentally in section 5.", "It is not clear whether the model is *actually* optimizing an ELBO because the gradients from sequence reconstruction loss are not backpropogated to the inference network as explained in paragraph on Approximating Gradients of ELBO.", "Moreover this restriction makes the authors method almost the same as the one used for unsupervised neural machine translation by Lample et al 2017 and Artetxe et al 2017.", "I would like to see a more detailed analysis from authors on how far the performance of Gumbel-softmax estimator and REINFORCE estimator is from simple stop-gradient estimator used in experiments.", "In terms of experimental setup I like that the authors considered a large suite of experiments across various tasks.", "Although the evaluation metrics on text style transfer tasks like sentiment transfer, formality transfer, author imitation are in line with previous work ideally the human evaluation needs to be done to truly see how well each method performs.", "On unsupervised machine translation, authors show a large improvement on Serbian-Bostian translation.", "I am a bit skeptical since as I wrote above the proposed method is very similar to previously proposed unsupervised neural machine translation approaches and it is not clear why we are seeking such a large gain of 5 BLEU points.", "Overall I think it is a well written paper with a large experimental suite, although I am skeptical of actual connection between probabilistic formulation and whats actually happening in practice.", "================================================ Update: I have raised the score from 3 to 6." ]
{ "criticism": 4, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 7, "praise": 3, "presentation_and_reporting": 4, "results_and_discussion": 3, "suggestion_and_solution": 2, "total": 11 }
criticism: 0.363636, example: 0, importance_and_relevance: 0.090909, materials_and_methods: 0.636364, praise: 0.272727, presentation_and_reporting: 0.363636, results_and_discussion: 0.272727, suggestion_and_solution: 0.181818, total: 11
has_criticism: 1, has_example: 0, has_importance_and_relevance: 1, has_materials_and_methods: 1, has_praise: 1, has_presentation_and_reporting: 1, has_results_and_discussion: 1, has_suggestion_and_solution: 1
reward_value: 2.181818, reward_value_length_adjusted: 2.118688, length_penalty: 0.06313
Stable Rank Normalization for Improved Generalization in Neural Networks and GANs
Exciting new work on generalization bounds for neural networks (NN) given by Bartlett et al. (2017); Neyshabur et al. (2018) closely depend on two parameter- dependant quantities: the Lipschitz constant upper bound and the stable rank (a softer version of rank). Even though these bounds typically have minimal practical utility, they facilitate questions on whether controlling such quantities together could improve the generalization behaviour of NNs in practice. To this end, we propose stable rank normalization (SRN), a novel, provably optimal, and computationally efficient weight-normalization scheme which minimizes the stable rank of a linear operator. Surprisingly we find that SRN, despite being non-convex, can be shown to have a unique optimal solution. We provide extensive analyses across a wide variety of NNs (DenseNet, WideResNet, ResNet, Alexnet, VGG), where applying SRN to their linear layers leads to improved classification accuracy, while simultaneously showing improvements in genealization, evaluated empirically using—(a) shattering experiments (Zhang et al., 2016); and (b) three measures of sample complexity by Bartlett et al. (2017), Neyshabur et al. (2018), & Wei & Ma. Additionally, we show that, when applied to the discriminator of GANs, it improves Inception, FID, and Neural divergence scores, while learning mappings with low empirical Lipschitz constant.
Stable Rank Normalization for Improved Generalization in Neural Networks and GANs Summary: This paper proposes to normalize network weights using the Stable Rank, extending the method of spectral normalization. The stable rank is the ratio of the squared frobenius norm to the squared spectral norm (the top singular value). The authors motivate this decision in the context of lipschitz constraints and noise sensitivity. The proposed method (combined with spectral norm as SRN alone does not explicitly subsume SN) is tested for both classification (using a wide range of popular models on CIFAR) and GAN training. Performance (classification accuracy and FID/IS) is measured and several auxiliary investigations are performed into generalization bounds, sample complexity, and sensitivity to hyperparameters. My Take: This paper motivates and presents an interesting extension of spectral norm, and evaluates it quite well with thorough experiments in a range of settings. The method looks to be reasonably accessible to implement, although its compute cost is not properly characterized and some details (like the orthogonalization step necessary in power-iteration for more than one SV) seem to be omitted. My two main concerns are that the results, while good, are not especially strong (the relative improvement is not very high) and that the paper could be made substantially more concise to fit within the 8 page soft limit (I felt there was plenty of material that could be moved to the appendix). All in all this is a reasonably clear accept to me (7/10) that with some cleanup could be a more solid 8, and I argue in favor of acceptance. Notes -The paper should characterize the runtime difference between SRN and SN. It is presently unclear how computationally intensive the method is. What is the difference in time per training iteration? The authors should also indicate their hardware setup and overall training time. -I found table 1 confusing as it lacks the test error. Are the test errors the same for all these models and the authors are just showing that for certain settings the SRN models have higher training error? If there is a difference in testing error, then this table is misleading, as one cares little about the training error if the test errors vary. If the test errors are approximately the same, then why should I care if the training error is higher? This would just be a way to decrease the stated “generalization gap,” which is not necessarily indicative of a better model (vis-à-vis the commonly held misconception that comparing training error between models is properly indicative of relative overfitting). -Nowhere (that I could spot) in the body of the paper is it explained what “Stable-50”, “SRN-50”, and “SRN-50%” are. I assume these all mean the same thing and it refers to the choice of the c hyperparameter, but this should be explicitly stated so that the reader knows which model corresponds to which settings. Minor -The footnotes appear to be out of order, footnote 1 appears on page 9 -There are typos such as “simiplicity,” please proofread thoroughly.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nStable Rank Normalization for Improved Generalization in Neural Networks and GANs\nExciting new work on generalization bounds for neural networks (NN) given by Bartlett et al. (2017); Neyshabur et al. (2018) closely depend on two parameter- dependant quantities: the Lipschitz constant upper bound and the stable rank (a softer version of rank). Even though these bounds typically have minimal practical utility, they facilitate questions on whether controlling such quantities together could improve the generalization behaviour of NNs in practice. To this end, we propose stable rank normalization (SRN), a novel, provably optimal, and computationally efficient weight-normalization scheme which minimizes the stable rank of a linear operator. Surprisingly we find that SRN, despite being non-convex, can be shown to have a unique optimal solution. We provide extensive analyses across a wide variety of NNs (DenseNet, WideResNet, ResNet, Alexnet, VGG), where applying SRN to their linear layers leads to improved classification accuracy, while simultaneously showing improvements in genealization, evaluated empirically using—(a) shattering experiments (Zhang et al., 2016); and (b) three measures of sample complexity by Bartlett et al. (2017), Neyshabur et al. (2018), & Wei & Ma. Additionally, we show that, when applied to the discriminator of GANs, it improves Inception, FID, and Neural divergence scores, while learning mappings with low empirical Lipschitz constant.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nStable Rank Normalization for Improved Generalization in Neural Networks and GANs Summary: This paper proposes to normalize network weights using the Stable Rank, extending the method of spectral normalization. The stable rank is the ratio of the squared frobenius norm to the squared spectral norm (the top singular value). The authors motivate this decision in the context of lipschitz constraints and noise sensitivity. 
The proposed method (combined with spectral norm as SRN alone does not explicitly subsume SN) is tested for both classification (using a wide range of popular models on CIFAR) and GAN training. Performance (classification accuracy and FID/IS) is measured and several auxiliary investigations are performed into generalization bounds, sample complexity, and sensitivity to hyperparameters. My Take: This paper motivates and presents an interesting extension of spectral norm, and evaluates it quite well with thorough experiments in a range of settings. The method looks to be reasonably accessible to implement, although its compute cost is not properly characterized and some details (like the orthogonalization step necessary in power-iteration for more than one SV) seem to be omitted. My two main concerns are that the results, while good, are not especially strong (the relative improvement is not very high) and that the paper could be made substantially more concise to fit within the 8 page soft limit (I felt there was plenty of material that could be moved to the appendix). All in all this is a reasonably clear accept to me (7/10) that with some cleanup could be a more solid 8, and I argue in favor of acceptance. Notes -The paper should characterize the runtime difference between SRN and SN. It is presently unclear how computationally intensive the method is. What is the difference in time per training iteration? The authors should also indicate their hardware setup and overall training time. -I found table 1 confusing as it lacks the test error. Are the test errors the same for all these models and the authors are just showing that for certain settings the SRN models have higher training error? If there is a difference in testing error, then this table is misleading, as one cares little about the training error if the test errors vary. If the test errors are approximately the same, then why should I care if the training error is higher? This would just be a way to decrease the stated “generalization gap,” which is not necessarily indicative of a better model (vis-à-vis the commonly held misconception that comparing training error between models is properly indicative of relative overfitting). -Nowhere (that I could spot) in the body of the paper is it explained what “Stable-50”, “SRN-50”, and “SRN-50%” are. I assume these all mean the same thing and it refers to the choice of the c hyperparameter, but this should be explicitly stated so that the reader knows which model corresponds to which settings. Minor -The footnotes appear to be out of order, footnote 1 appears on page 9 -There are typos such as “simiplicity,” please proofread thoroughly.", "role": "assistant" } ]
[ "Stable Rank Normalization for Improved Generalization in Neural Networks and GANs Summary: This paper proposes to normalize network weights using the Stable Rank, extending the method of spectral normalization.", "The stable rank is the ratio of the squared frobenius norm to the squared spectral norm (the top singular value).", "The authors motivate this decision in the context of lipschitz constraints and noise sensitivity.", "The proposed method (combined with spectral norm as SRN alone does not explicitly subsume SN) is tested for both classification (using a wide range of popular models on CIFAR) and GAN training.", "Performance (classification accuracy and FID/IS) is measured and several auxiliary investigations are performed into generalization bounds, sample complexity, and sensitivity to hyperparameters.", "My Take: This paper motivates and presents an interesting extension of spectral norm, and evaluates it quite well with thorough experiments in a range of settings.", "The method looks to be reasonably accessible to implement, although its compute cost is not properly characterized and some details (like the orthogonalization step necessary in power-iteration for more than one SV) seem to be omitted.", "My two main concerns are that the results, while good, are not especially strong (the relative improvement is not very high) and that the paper could be made substantially more concise to fit within the 8 page soft limit (I felt there was plenty of material that could be moved to the appendix).", "All in all this is a reasonably clear accept to me (7/10) that with some cleanup could be a more solid 8, and I argue in favor of acceptance.", "Notes -The paper should characterize the runtime difference between SRN and SN.", "It is presently unclear how computationally intensive the method is.", "What is the difference in time per training iteration?", "The authors should also indicate their hardware setup and overall training time.", "-I found table 1 confusing as it lacks the test error.", "Are the test errors the same for all these models and the authors are just showing that for certain settings the SRN models have higher training error?", "If there is a difference in testing error, then this table is misleading, as one cares little about the training error if the test errors vary.", "If the test errors are approximately the same, then why should I care if the training error is higher?", "This would just be a way to decrease the stated “generalization gap,” which is not necessarily indicative of a better model (vis-à-vis the commonly held misconception that comparing training error between models is properly indicative of relative overfitting).", "-Nowhere (that I could spot) in the body of the paper is it explained what “Stable-50”, “SRN-50”, and “SRN-50%” are.", "I assume these all mean the same thing and it refers to the choice of the c hyperparameter, but this should be explicitly stated so that the reader knows which model corresponds to which settings.", "Minor -The footnotes appear to be out of order, footnote 1 appears on page 9 -There are typos such as “simiplicity,” please proofread thoroughly." ]
{ "criticism": 5, "example": 1, "importance_and_relevance": 1, "materials_and_methods": 13, "praise": 2, "presentation_and_reporting": 9, "results_and_discussion": 6, "suggestion_and_solution": 8, "total": 21 }
criticism: 0.238095, example: 0.047619, importance_and_relevance: 0.047619, materials_and_methods: 0.619048, praise: 0.095238, presentation_and_reporting: 0.428571, results_and_discussion: 0.285714, suggestion_and_solution: 0.380952, total: 21
has_criticism: 1, has_example: 1, has_importance_and_relevance: 1, has_materials_and_methods: 1, has_praise: 1, has_presentation_and_reporting: 1, has_results_and_discussion: 1, has_suggestion_and_solution: 1
reward_value: 2.142857, reward_value_length_adjusted: 1.132782, length_penalty: 1.010076
Measuring the Reliability of Reinforcement Learning Algorithms
Inadequate reliability is a well-known issue for reinforcement learning (RL) algorithms. This problem has gained increasing attention in recent years, and efforts to improve it have grown substantially. To aid RL researchers and production users with the evaluation and improvement of reliability, we propose a novel set of metrics that quantitatively measure different aspects of reliability. In this work, we address variability and risk, both during training and after learning (on a fixed policy). We designed these metrics to be general-purpose, and we also designed complementary statistical tests to enable rigorous comparisons on these metrics. In this paper, we first describe the desired properties of the metrics and their design, the aspects of reliability that they measure, and their applicability to different scenarios. We then describe the statistical tests and make additional practical recommendations for reporting results. Finally, we apply our metrics to a set of common RL algorithms and environments, compare them, and analyze the results.
In this paper, the authors study an important problem in the area of reinforcement learning (RL). Specifically, the authors focus on how to evaluate the reliability of RL algorithms, in particular of the deep RL algorithms. The paper is well motivated by providing convincing justification of evaluating the RL algorithms properly. In particular, the authors define seven specific evaluation metrics, including *Dispersion across Time (DT): IQR across Time*, *Short-term Risk across Time (SRT): CVaR on Differences*, *Long-term Risk across Time (LRT): CVaR on Drawdown*, *Dispersion across Runs (DR): IQR across Runs*, *Risk across Runs (RR): CVaR across Runs*, *Dispersion across Fixed-Policy Rollouts (DF): IQR across Rollouts* and *Risk across Fixed-Policy Rollouts (RF): CVaR across Rollouts*, from a two-dimension analysis shown in Table 1. Moreover, the authors apply the proposed evaluation metrics to some typical RL algorithms and environments, and provide some insightful discussions and analysis. Overall, the paper is well presented though it is somehow different from a typical technical paper.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMeasuring the Reliability of Reinforcement Learning Algorithms\nInadequate reliability is a well-known issue for reinforcement learning (RL) algorithms. This problem has gained increasing attention in recent years, and efforts to improve it have grown substantially. To aid RL researchers and production users with the evaluation and improvement of reliability, we propose a novel set of metrics that quantitatively measure different aspects of reliability. In this work, we address variability and risk, both during training and after learning (on a fixed policy). We designed these metrics to be general-purpose, and we also designed complementary statistical tests to enable rigorous comparisons on these metrics. In this paper, we first describe the desired properties of the metrics and their design, the aspects of reliability that they measure, and their applicability to different scenarios. We then describe the statistical tests and make additional practical recommendations for reporting results. Finally, we apply our metrics to a set of common RL algorithms and environments, compare them, and analyze the results.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nIn this paper, the authors study an important problem in the area of reinforcement learning (RL). Specifically, the authors focus on how to evaluate the reliability of RL algorithms, in particular of the deep RL algorithms. The paper is well motivated by providing convincing justification of evaluating the RL algorithms properly. In particular, the authors define seven specific evaluation metrics, including *Dispersion across Time (DT): IQR across Time*, *Short-term Risk across Time (SRT): CVaR on Differences*, *Long-term Risk across Time (LRT): CVaR on Drawdown*, *Dispersion across Runs (DR): IQR across Runs*, *Risk across Runs (RR): CVaR across Runs*, *Dispersion across Fixed-Policy Rollouts (DF): IQR across Rollouts* and *Risk across Fixed-Policy Rollouts (RF): CVaR across Rollouts*, from a two-dimension analysis shown in Table 1. 
Moreover, the authors apply the proposed evaluation metrics to some typical RL algorithms and environments, and provide some insightful discussions and analysis. Overall, the paper is well presented though it is somehow different from a typical technical paper.", "role": "assistant" } ]
[ "In this paper, the authors study an important problem in the area of reinforcement learning (RL).", "Specifically, the authors focus on how to evaluate the reliability of RL algorithms, in particular of the deep RL algorithms.", "The paper is well motivated by providing convincing justification of evaluating the RL algorithms properly.", "In particular, the authors define seven specific evaluation metrics, including *Dispersion across Time (DT): IQR across Time*, *Short-term Risk across Time (SRT): CVaR on Differences*, *Long-term Risk across Time (LRT): CVaR on Drawdown*, *Dispersion across Runs (DR): IQR across Runs*, *Risk across Runs (RR): CVaR across Runs*, *Dispersion across Fixed-Policy Rollouts (DF): IQR across Rollouts* and *Risk across Fixed-Policy Rollouts (RF): CVaR across Rollouts*, from a two-dimension analysis shown in Table 1.", "Moreover, the authors apply the proposed evaluation metrics to some typical RL algorithms and environments, and provide some insightful discussions and analysis.", "Overall, the paper is well presented though it is somehow different from a typical technical paper." ]
{ "criticism": 1, "example": 1, "importance_and_relevance": 2, "materials_and_methods": 4, "praise": 4, "presentation_and_reporting": 1, "results_and_discussion": 1, "suggestion_and_solution": 0, "total": 6 }
criticism: 0.166667, example: 0.166667, importance_and_relevance: 0.333333, materials_and_methods: 0.666667, praise: 0.666667, presentation_and_reporting: 0.166667, results_and_discussion: 0.166667, suggestion_and_solution: 0, total: 6
has_criticism: 1, has_example: 1, has_importance_and_relevance: 1, has_materials_and_methods: 1, has_praise: 1, has_presentation_and_reporting: 1, has_results_and_discussion: 1, has_suggestion_and_solution: 0
reward_value: 2.333333, reward_value_length_adjusted: 1.559994, length_penalty: 0.773339
The intriguing role of module criticality in the generalization of deep networks
We study the phenomenon that some modules of deep neural networks (DNNs) are more emph{critical} than others. Meaning that rewinding their parameter values back to initialization, while keeping other modules fixed at the trained parameters, results in a large drop in the network*s performance. Our analysis reveals interesting properties of the loss landscape which leads us to propose a complexity measure, called {em module criticality}, based on the shape of the valleys that connects the initial and final values of the module parameters. We formulate how generalization relates to the module criticality, and show that this measure is able to explain the superior generalization performance of some architectures over others, whereas earlier measures fail to do so.
The paper builds upon the *module criticality* phenomenon and proposes a quantitative approach to measure this at the module and the network level. A module*s criticality is low if when it is switched to its initialization value, the error does not change drastically. The paper uses a convex combination of the initial weights and the final weights of a layer/module to define an optimization path to traverse. The authors quantitatively define the module criticality such that it depends on how much closer the weights can get to the initial weights on this path while still being robust to random permutations. The network critically is defined as the sum of the module criticality measure of all the layers. Empirical results on CIFAR10 show that the network*s criticality is reflective of the generalization performance. For example, increasing resnet depth leads to improved generalization and low criticality. Though intuitively, it is not clear why moving closer to the initial values and thus lower average criticality indicated better generalization. It will be useful ot have a discussion on this issue. Results on other datasets will also be useful. Overall, the network criticality measure appears a useful tool to predict the generalization performance compared to other measures such as distance from initialization, weight spectrum, and others.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nThe intriguing role of module criticality in the generalization of deep networks\nWe study the phenomenon that some modules of deep neural networks (DNNs) are more emph{critical} than others. Meaning that rewinding their parameter values back to initialization, while keeping other modules fixed at the trained parameters, results in a large drop in the network*s performance. Our analysis reveals interesting properties of the loss landscape which leads us to propose a complexity measure, called {em module criticality}, based on the shape of the valleys that connects the initial and final values of the module parameters. We formulate how generalization relates to the module criticality, and show that this measure is able to explain the superior generalization performance of some architectures over others, whereas earlier measures fail to do so.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper builds upon the *module criticality* phenomenon and proposes a quantitative approach to measure this at the module and the network level. A module*s criticality is low if when it is switched to its initialization value, the error does not change drastically. The paper uses a convex combination of the initial weights and the final weights of a layer/module to define an optimization path to traverse. The authors quantitatively define the module criticality such that it depends on how much closer the weights can get to the initial weights on this path while still being robust to random permutations. The network critically is defined as the sum of the module criticality measure of all the layers. Empirical results on CIFAR10 show that the network*s criticality is reflective of the generalization performance. For example, increasing resnet depth leads to improved generalization and low criticality. Though intuitively, it is not clear why moving closer to the initial values and thus lower average criticality indicated better generalization. It will be useful ot have a discussion on this issue. 
Results on other datasets will also be useful. Overall, the network criticality measure appears a useful tool to predict the generalization performance compared to other measures such as distance from initialization, weight spectrum, and others.", "role": "assistant" } ]
[ "The paper builds upon the *module criticality* phenomenon and proposes a quantitative approach to measure this at the module and the network level.", "A module*s criticality is low if when it is switched to its initialization value, the error does not change drastically.", "The paper uses a convex combination of the initial weights and the final weights of a layer/module to define an optimization path to traverse.", "The authors quantitatively define the module criticality such that it depends on how much closer the weights can get to the initial weights on this path while still being robust to random permutations.", "The network critically is defined as the sum of the module criticality measure of all the layers.", "Empirical results on CIFAR10 show that the network*s criticality is reflective of the generalization performance.", "For example, increasing resnet depth leads to improved generalization and low criticality.", "Though intuitively, it is not clear why moving closer to the initial values and thus lower average criticality indicated better generalization.", "It will be useful ot have a discussion on this issue.", "Results on other datasets will also be useful.", "Overall, the network criticality measure appears a useful tool to predict the generalization performance compared to other measures such as distance from initialization, weight spectrum, and others." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 10, "praise": 2, "presentation_and_reporting": 0, "results_and_discussion": 6, "suggestion_and_solution": 3, "total": 11 }
0.090909
0
0.181818
0.909091
0.181818
0
0.545455
0.272727
11
1
0
1
1
1
0
1
1
2.181818
2.118688
0.06313
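The review above describes the probe behind the criticality measure: interpolate one module's weights between their initial and trained values while the rest of the network stays trained, and watch the error. A minimal sketch of that probe follows, assuming a PyTorch-style model with state dicts and an `eval_error` callback supplied by the caller; the paper's full measure additionally accounts for robustness to perturbations along the path, which is not shown here.

```python
# Sketch of the interpolation probe only; `model`, `init_state`,
# `trained_state` and `eval_error` are assumed inputs.
import copy

def module_interpolation_curve(model, module_name, init_state, trained_state,
                               eval_error, steps=11):
    curve = []
    for k in range(steps):
        alpha = k / (steps - 1)            # 0 = rewound to initialization, 1 = trained
        mixed = {
            name: alpha * trained_state[name] + (1 - alpha) * init_state[name]
            if name.startswith(module_name) else trained_state[name]
            for name in trained_state
        }
        probe = copy.deepcopy(model)
        probe.load_state_dict(mixed)
        curve.append((alpha, eval_error(probe)))
    return curve                            # flat near alpha = 0 suggests low criticality
```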
The Ingredients of Real World Robotic Reinforcement Learning
The success of reinforcement learning in the real world has been limited to instrumented laboratory scenarios, often requiring arduous human supervision to enable continuous learning. In this work, we discuss the required elements of a robotic system that can continually and autonomously improve with data collected in the real world, and propose a particular instantiation of such a system. Subsequently, we investigate a number of challenges of learning without instrumentation -- including the lack of episodic resets, state estimation, and hand-engineered rewards -- and propose simple, scalable solutions to these challenges. We demonstrate the efficacy of our proposed system on dexterous robotic manipulation tasks in simulation and the real world, and also provide an insightful analysis and ablation study of the challenges associated with this learning paradigm.
The paper takes seriously the question of having a robotic system learning continuously without manual reset nor state or reward engineering. The authors propose a first approach using vison-based SAC, shown visual goals and VICE, and show that it does not provide a satisfactory solution. Then they add a random pertubation controller which brings the robot or simulated system away from the goal and a VAE to encode a compressed state, and show that it works better. The paper is a nice read, it contains useful messages thus I*m slightly in favor of accepting it, but I may easily change my mind as it suffers from serious weaknesses. First, and most importantly, the experimental study is very short, the authors have chosen to spend much more space on careful writing of the problem they are investigating. To mention a few experimental weaknesses, in Section 6.2 the authors could have performed much more detailed ablation studies and stress in more details the impact of using the VAE alone versus using the random pertubation controller alone, they could say more about the goals they show to the system, etc. There is some information in Figure 7, but this information is not exploited in a detailed way. Furthermore, Figure 7 is far to small, it is hard to say from the legend which system is which. About Fig.8, we just have a qualitative description, the authors claim that without instrumenting they cannot provide a quantitative study, which I don*t find convincing: you may instrument for the sake of science (to measure the value of what you are doing, even if the real-world system won*t use this instrumentation). So the authors have chosen to spend more space on the positionning than on the empirical study, which may speak in favor of sending this paper to a journal or magazine rather than a technical conference. But there is an issue about the positionning too: the authors fail to mention a huge body of literature trying to address very close or just similar questions. Namely, their concern is one the central leitmotives of Developmental Robotics and some of its *subfields*, such as Lifelong learning, Open-ended learning, Continual learning etc. The merit of the paper in this respect is to focus on a specific question and provide concrete results on this question, but this work should be positionned with respect to the broader approaches mentioned above. The authors will easily find plenty of references in these domains, I don*t want to give my favorite selection here. Knowing more about the literature mentioned above, the authors could reconsider their framework from a multitask learning perspective: instead of a random perturbation controller, the agent may learn various controllers to bring the system into various goal states (using e.g. goal-conditioned policies), and switching from goal to goal to prevent the system fro keeping stuck close to some goal. More local points: In the middle of page 5, it is said that the system does not learn properly just because it is stuck at the goal. This information comes late, and makes the global message weaker. in Fig. 4, I would like to know what is the threshold for success.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nThe Ingredients of Real World Robotic Reinforcement Learning\nThe success of reinforcement learning in the real world has been limited to instrumented laboratory scenarios, often requiring arduous human supervision to enable continuous learning. In this work, we discuss the required elements of a robotic system that can continually and autonomously improve with data collected in the real world, and propose a particular instantiation of such a system. Subsequently, we investigate a number of challenges of learning without instrumentation -- including the lack of episodic resets, state estimation, and hand-engineered rewards -- and propose simple, scalable solutions to these challenges. We demonstrate the efficacy of our proposed system on dexterous robotic manipulation tasks in simulation and the real world, and also provide an insightful analysis and ablation study of the challenges associated with this learning paradigm.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper takes seriously the question of having a robotic system learning continuously without manual reset nor state or reward engineering. The authors propose a first approach using vison-based SAC, shown visual goals and VICE, and show that it does not provide a satisfactory solution. Then they add a random pertubation controller which brings the robot or simulated system away from the goal and a VAE to encode a compressed state, and show that it works better. The paper is a nice read, it contains useful messages thus I*m slightly in favor of accepting it, but I may easily change my mind as it suffers from serious weaknesses. First, and most importantly, the experimental study is very short, the authors have chosen to spend much more space on careful writing of the problem they are investigating. 
To mention a few experimental weaknesses, in Section 6.2 the authors could have performed much more detailed ablation studies and stress in more details the impact of using the VAE alone versus using the random pertubation controller alone, they could say more about the goals they show to the system, etc. There is some information in Figure 7, but this information is not exploited in a detailed way. Furthermore, Figure 7 is far to small, it is hard to say from the legend which system is which. About Fig.8, we just have a qualitative description, the authors claim that without instrumenting they cannot provide a quantitative study, which I don*t find convincing: you may instrument for the sake of science (to measure the value of what you are doing, even if the real-world system won*t use this instrumentation). So the authors have chosen to spend more space on the positionning than on the empirical study, which may speak in favor of sending this paper to a journal or magazine rather than a technical conference. But there is an issue about the positionning too: the authors fail to mention a huge body of literature trying to address very close or just similar questions. Namely, their concern is one the central leitmotives of Developmental Robotics and some of its *subfields*, such as Lifelong learning, Open-ended learning, Continual learning etc. The merit of the paper in this respect is to focus on a specific question and provide concrete results on this question, but this work should be positionned with respect to the broader approaches mentioned above. The authors will easily find plenty of references in these domains, I don*t want to give my favorite selection here. Knowing more about the literature mentioned above, the authors could reconsider their framework from a multitask learning perspective: instead of a random perturbation controller, the agent may learn various controllers to bring the system into various goal states (using e.g. goal-conditioned policies), and switching from goal to goal to prevent the system fro keeping stuck close to some goal. More local points: In the middle of page 5, it is said that the system does not learn properly just because it is stuck at the goal. This information comes late, and makes the global message weaker. in Fig. 4, I would like to know what is the threshold for success.", "role": "assistant" } ]
[ "The paper takes seriously the question of having a robotic system learning continuously without manual reset nor state or reward engineering.", "The authors propose a first approach using vison-based SAC, shown visual goals and VICE, and show that it does not provide a satisfactory solution.", "Then they add a random pertubation controller which brings the robot or simulated system away from the goal and a VAE to encode a compressed state, and show that it works better.", "The paper is a nice read, it contains useful messages thus I*m slightly in favor of accepting it, but I may easily change my mind as it suffers from serious weaknesses.", "First, and most importantly, the experimental study is very short, the authors have chosen to spend much more space on careful writing of the problem they are investigating.", "To mention a few experimental weaknesses, in Section 6.2 the authors could have performed much more detailed ablation studies and stress in more details the impact of using the VAE alone versus using the random pertubation controller alone, they could say more about the goals they show to the system, etc.", "There is some information in Figure 7, but this information is not exploited in a detailed way.", "Furthermore, Figure 7 is far to small, it is hard to say from the legend which system is which.", "About Fig.8, we just have a qualitative description, the authors claim that without instrumenting they cannot provide a quantitative study, which I don*t find convincing: you may instrument for the sake of science (to measure the value of what you are doing, even if the real-world system won*t use this instrumentation).", "So the authors have chosen to spend more space on the positionning than on the empirical study, which may speak in favor of sending this paper to a journal or magazine rather than a technical conference.", "But there is an issue about the positionning too: the authors fail to mention a huge body of literature trying to address very close or just similar questions.", "Namely, their concern is one the central leitmotives of Developmental Robotics and some of its *subfields*, such as Lifelong learning, Open-ended learning, Continual learning etc.", "The merit of the paper in this respect is to focus on a specific question and provide concrete results on this question, but this work should be positionned with respect to the broader approaches mentioned above.", "The authors will easily find plenty of references in these domains, I don*t want to give my favorite selection here.", "Knowing more about the literature mentioned above, the authors could reconsider their framework from a multitask learning perspective: instead of a random perturbation controller, the agent may learn various controllers to bring the system into various goal states (using e.g.", "goal-conditioned policies), and switching from goal to goal to prevent the system fro keeping stuck close to some goal.", "More local points: In the middle of page 5, it is said that the system does not learn properly just because it is stuck at the goal.", "This information comes late, and makes the global message weaker.", "in Fig.", "4, I would like to know what is the threshold for success." ]
{ "criticism": 8, "example": 5, "importance_and_relevance": 4, "materials_and_methods": 10, "praise": 2, "presentation_and_reporting": 7, "results_and_discussion": 2, "suggestion_and_solution": 6, "total": 20 }
0.4
0.25
0.2
0.5
0.1
0.35
0.1
0.3
20
1
1
1
1
1
1
1
1
2.2
1.426661
0.773339
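The system described in the review above has three moving parts: a goal-directed policy trained from a learned (VICE-style) reward, a random perturbation controller that pushes the arm away from the goal so learning never stalls there, and a VAE that compresses raw pixels into a state. A schematic reset-free loop putting them together is sketched below; `env`, `policy`, `perturb`, `vae` and `replay` are placeholder components and the whole interface is invented for illustration, not taken from the paper.

```python
# Schematic only: alternate goal-directed and perturbation phases with no
# environment resets; every object passed in is a placeholder.
def reset_free_loop(env, policy, perturb, vae, replay, steps=100_000, block=50):
    obs = env.observe()                      # no env.reset(): learning is continual
    for t in range(steps):
        controller = policy if (t // block) % 2 == 0 else perturb
        z = vae.encode(obs)                  # compressed state from raw pixels
        action = controller.act(z)
        next_obs = env.step(action)
        reward = policy.reward_classifier(vae.encode(next_obs))  # learned, not instrumented
        replay.add(z, action, reward, vae.encode(next_obs))
        policy.update(replay)                # e.g. an off-policy SAC-style update
        obs = next_obs
```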
Spectral Embedding of Regularized Block Models
Spectral embedding is a popular technique for the representation of graph data. Several regularization techniques have been proposed to improve the quality of the embedding with respect to downstream tasks like clustering. In this paper, we explain on a simple block model the impact of the complete graph regularization, whereby a constant is added to all entries of the adjacency matrix. Specifically, we show that the regularization forces the spectral embedding to focus on the largest blocks, making the representation less sensitive to noise or outliers. We illustrate these results on both synthetic and real data, showing how regularization improves standard clustering scores.
This paper analyzes the effect of regularization on spectral embeddings in a deterministic block model and explicitly characterizes the spectra of the Laplacian of the regularized graph in terms of the regularization parameter and block sizes. To my knowledge, this has not been done before. Prior work either derives sufficient conditions for the recovery of all blocks in the asymptotic limit of an infinite number of nodes in the case of (Joseph & Yu, 2016), or lower bounds the number of small eigenvalues of the Laplacian of the unregularized graph on random graphs in expectation (therefore arguing in favor of regularization) in the case of (Zhang & Rohe, 2018). This paper, on the other hand, gives a precise characterization of the eigenvalues and eigenvectors (albeit in the case of a deterministic graph); the results are elegant and the analysis uses simple elementary techniques, which is very satisfying and seems to be easy to build on. The authors mention that they would like to extend this analysis to stochastic block models, which would indeed be interesting. The paper is also well written and the results are clearly presented. Overall, this is a nice contribution to spectral graph theory and so I recommend acceptance.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSpectral Embedding of Regularized Block Models\nSpectral embedding is a popular technique for the representation of graph data. Several regularization techniques have been proposed to improve the quality of the embedding with respect to downstream tasks like clustering. In this paper, we explain on a simple block model the impact of the complete graph regularization, whereby a constant is added to all entries of the adjacency matrix. Specifically, we show that the regularization forces the spectral embedding to focus on the largest blocks, making the representation less sensitive to noise or outliers. We illustrate these results on both on both synthetic and real data, showing how regularization improves standard clustering scores.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper analyzes the effect of regularization on spectral embeddings in a deterministic block model and explicitly characterizes the spectra of the Laplacian of the regularized graph in terms of the regularization parameter and block sizes. To my knowledge, this has not been done before. Prior work either derives sufficient conditions for the recovery of all blocks in the asymptotic limit of an infinite number of nodes in the case of (Joseph & Yu, 2016), or lower bounds the number of small eigenvalues of the Laplacian of the unregularized graph on random graphs in expectation (therefore arguing in favor of regularization) in the case of (Zhang & Rohe, 2018). This paper, on the other hand, gives a precise characterization of the eigenvalues and eigenvectors (albeit in the case of a deterministic graph); the results are elegant and the analysis uses simple elementary techniques, which is very satisfying and seems to be easy to build on. The authors mention that they would like to extend this analysis to stochastic block models, which would indeed be interesting. The paper is also well written and the results are clearly presented. 
Overall, this is a nice contribution to spectral graph theory and so I recommend acceptance.", "role": "assistant" } ]
[ "This paper analyzes the effect of regularization on spectral embeddings in a deterministic block model and explicitly characterizes the spectra of the Laplacian of the regularized graph in terms of the regularization parameter and block sizes.", "To my knowledge, this has not been done before.", "Prior work either derives sufficient conditions for the recovery of all blocks in the asymptotic limit of an infinite number of nodes in the case of (Joseph & Yu, 2016), or lower bounds the number of small eigenvalues of the Laplacian of the unregularized graph on random graphs in expectation (therefore arguing in favor of regularization) in the case of (Zhang & Rohe, 2018).", "This paper, on the other hand, gives a precise characterization of the eigenvalues and eigenvectors (albeit in the case of a deterministic graph); the results are elegant and the analysis uses simple elementary techniques, which is very satisfying and seems to be easy to build on.", "The authors mention that they would like to extend this analysis to stochastic block models, which would indeed be interesting.", "The paper is also well written and the results are clearly presented.", "Overall, this is a nice contribution to spectral graph theory and so I recommend acceptance." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 4, "praise": 3, "presentation_and_reporting": 1, "results_and_discussion": 3, "suggestion_and_solution": 2, "total": 7 }
0
0
0.285714
0.571429
0.428571
0.142857
0.428571
0.285714
7
0
0
1
1
1
1
1
1
2.142857
1.57469
0.568167
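The regularization analyzed above is easy to state operationally: add the same small constant to every entry of the adjacency matrix (equivalently, overlay a weak complete graph) before computing the spectral embedding. A toy illustration on a two-block graph follows; the constant `alpha`, the `alpha / n` parameterization, and the use of the unnormalized Laplacian are arbitrary choices for the demo, not values from the paper.

```python
# Toy demo: complete-graph regularization followed by a Laplacian embedding.
import numpy as np

def spectral_embedding(adjacency, dim=2, alpha=0.0):
    n = adjacency.shape[0]
    a_reg = adjacency + alpha / n                 # constant added to every entry
    lap = np.diag(a_reg.sum(axis=1)) - a_reg      # unnormalized graph Laplacian
    _, eigvecs = np.linalg.eigh(lap)
    return eigvecs[:, 1:dim + 1]                  # drop the trivial constant eigenvector

# Two blocks of unequal size (6 and 3 nodes).
sizes = [6, 3]
A = np.zeros((sum(sizes), sum(sizes)))
start = 0
for s in sizes:
    A[start:start + s, start:start + s] = 1.0
    start += s
np.fill_diagonal(A, 0.0)
print(spectral_embedding(A, dim=2, alpha=1.0).round(3))
```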
Directional Message Passing for Molecular Graphs
Graph neural networks have recently achieved great successes in predicting quantum mechanical properties of molecules. These models represent a molecule as a graph using only the distance between atoms (nodes) and not the spatial direction from one atom to another. However, directional information plays a central role in empirical potentials for molecules, e.g. in angular potentials. To alleviate this limitation we propose directional message passing, in which we embed the messages passed between atoms instead of the atoms themselves. Each message is associated with a direction in coordinate space. These directional message embeddings are rotationally equivariant since the associated directions rotate with the molecule. We propose a message passing scheme analogous to belief propagation, which uses the directional information by transforming messages based on the angle between them. Additionally, we use spherical Bessel functions to construct a theoretically well-founded, orthogonal radial basis that achieves better performance than the currently prevalent Gaussian radial basis functions while using more than 4x fewer parameters. We leverage these innovations to construct the directional message passing neural network (DimeNet). DimeNet outperforms previous GNNs on average by 77% on MD17 and by 41% on QM9.
This paper beneficially incorporates directional information into graph neural networks for molecular modeling while preserving equivariance. This paper is a tour de force of architecture engineering. Continuous equivariance, potential field representation, and bandwidth limited basis functions are synthesized in a compelling manner. I found the exposition extremely intelligible despite my lack of familiarity with molecular modeling. The contribution is clear, although applicability beyond the specific domain of molecular modeling is possibly limited. Being continuously equivariant with respect to rotations is interesting, but seems to require problems where the input is encoded as a point cloud in a vector space; I*m not familiar with such problems. Nonetheless, the domain of molecular modeling is sufficiently important in isolation. I recommend acceptance, because the contribution is strong, the writing is excellent, the ideas are well-motivated, and the experiments support the empirical claims.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDirectional Message Passing for Molecular Graphs\nGraph neural networks have recently achieved great successes in predicting quantum mechanical properties of molecules. These models represent a molecule as a graph using only the distance between atoms (nodes) and not the spatial direction from one atom to another. However, directional information plays a central role in empirical potentials for molecules, e.g. in angular potentials. To alleviate this limitation we propose directional message passing, in which we embed the messages passed between atoms instead of the atoms themselves. Each message is associated with a direction in coordinate space. These directional message embeddings are rotationally equivariant since the associated directions rotate with the molecule. We propose a message passing scheme analogous to belief propagation, which uses the directional information by transforming messages based on the angle between them. Additionally, we use spherical Bessel functions to construct a theoretically well-founded, orthogonal radial basis that achieves better performance than the currently prevalent Gaussian radial basis functions while using more than 4x fewer parameters. We leverage these innovations to construct the directional message passing neural network (DimeNet). DimeNet outperforms previous GNNs on average by 77% on MD17 and by 41% on QM9.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper beneficially incorporates directional information into graph neural networks for molecular modeling while preserving equivariance. This paper is a tour de force of architecture engineering. Continuous equivariance, potential field representation, and bandwidth limited basis functions are synthesized in a compelling manner. I found the exposition extremely intelligible despite my lack of familiarity with molecular modeling. The contribution is clear, although applicability beyond the specific domain of molecular modeling is possibly limited. 
Being continuously equivariant with respect to rotations is interesting, but seems to require problems where the input is encoded as a point cloud in a vector space; I*m not familiar with such problems. Nonetheless, the domain of molecular modeling is sufficiently important in isolation. I recommend acceptance, because the contribution is strong, the writing is excellent, the ideas are well-motivated, and the experiments support the empirical claims.", "role": "assistant" } ]
[ "This paper beneficially incorporates directional information into graph neural networks for molecular modeling while preserving equivariance.", "This paper is a tour de force of architecture engineering.", "Continuous equivariance, potential field representation, and bandwidth limited basis functions are synthesized in a compelling manner.", "I found the exposition extremely intelligible despite my lack of familiarity with molecular modeling.", "The contribution is clear, although applicability beyond the specific domain of molecular modeling is possibly limited.", "Being continuously equivariant with respect to rotations is interesting, but seems to require problems where the input is encoded as a point cloud in a vector space; I*m not familiar with such problems.", "Nonetheless, the domain of molecular modeling is sufficiently important in isolation.", "I recommend acceptance, because the contribution is strong, the writing is excellent, the ideas are well-motivated, and the experiments support the empirical claims." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 4, "materials_and_methods": 5, "praise": 6, "presentation_and_reporting": 1, "results_and_discussion": 1, "suggestion_and_solution": 0, "total": 8 }
0.25
0
0.5
0.625
0.75
0.125
0.125
0
8
1
0
1
1
1
1
1
0
2.375
1.980439
0.394561
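The directional ingredient described above is concrete: each message between two atoms carries the direction from sender to receiver, and the update for a message uses the angles between its direction and the directions of the messages feeding into it. A small numeric illustration of that angle computation follows; the coordinates are arbitrary and this is not the paper's full update rule, which also involves Bessel and spherical basis expansions.

```python
# Angle between the direction of message k -> j and the direction of
# message j -> i, for three atoms at arbitrary positions.
import numpy as np

def angle(u, v):
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

pos = {"i": np.array([0.0, 0.0, 0.0]),
       "j": np.array([1.5, 0.0, 0.0]),
       "k": np.array([1.5, 1.2, 0.0])}

d_ji = pos["i"] - pos["j"]      # direction carried by message j -> i
d_kj = pos["j"] - pos["k"]      # direction carried by message k -> j
print(angle(d_kj, d_ji))        # 90.0 degrees for this configuration
```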
At Stability*s Edge: How to Adjust Hyperparameters to Preserve Minima Selection in Asynchronous Training of Neural Networks?
Background: Recent developments have made it possible to accelerate neural networks training significantly using large batch sizes and data parallelism. Training in an asynchronous fashion, where delay occurs, can make training even more scalable. However, asynchronous training has its pitfalls, mainly a degradation in generalization, even after convergence of the algorithm. This gap remains not well understood, as theoretical analysis so far mainly focused on the convergence rate of asynchronous methods. Contributions: We examine asynchronous training from the perspective of dynamical stability. We find that the degree of delay interacts with the learning rate, to change the set of minima accessible by an asynchronous stochastic gradient descent algorithm. We derive closed-form rules on how the learning rate could be changed, while keeping the accessible set the same. Specifically, for high delay values, we find that the learning rate should be kept inversely proportional to the delay. We then extend this analysis to include momentum. We find momentum should be either turned off, or modified to improve training stability. We provide empirical experiments to validate our theoretical findings.
The authors model A-SGD as a dynamical system, where parameters are updated with delayed gradients. The authors analyze the stability of this system, and they first derive that the learning rate must scale linearly with the inverse of the delay around a minimum to remain stable. Using a similar analysis they show that the standard way of incorporating momentum into A-SGD requires small learning rates for high momentum values, and they propose *shifted momentum,* which allows for stability under higher momentum values. Experimentally, the authors show that around minima the learning rate needed to retain stability scales linearly with the inverse of the delay, that there appears to be an analogous threshold when training models from scratch, that shifted momentum allows for higher momentum values, and finally that on several datasets A-SGD with an appropriate learning rate is able to generalize at least as well as large batch synchronous training. This is a nice paper with a large number of interesting theoretical and experimental results, and I believe it should be accepted. I think there are some largely presentational issues that should be addressed, however: - I think the authors should attempt to make a stronger case for the practical implications of their analysis: in particular, in the most practical setting (where we don*t have a minimum obtained from synchronous training), what does the provided analysis allow us to do? Part of this might involve being more explicit about the results in Table 1: what exactly was the procedure for selecting the learning rates? Is it meaningfully different than just lowering the learning rate? - Equation (3) is rather obscure without the Appendix, especially since unbolded x hasn*t been introduced anywhere. I think the authors should try to convey more of what*s going on in this equation in the main text. Minor: *looses* should be *loses* throughout, and it might be good to include a conclusion section.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAt Stability*s Edge: How to Adjust Hyperparameters to Preserve Minima Selection in Asynchronous Training of Neural Networks?\nBackground: Recent developments have made it possible to accelerate neural networks training significantly using large batch sizes and data parallelism. Training in an asynchronous fashion, where delay occurs, can make training even more scalable. However, asynchronous training has its pitfalls, mainly a degradation in generalization, even after convergence of the algorithm. This gap remains not well understood, as theoretical analysis so far mainly focused on the convergence rate of asynchronous methods. Contributions: We examine asynchronous training from the perspective of dynamical stability. We find that the degree of delay interacts with the learning rate, to change the set of minima accessible by an asynchronous stochastic gradient descent algorithm. We derive closed-form rules on how the learning rate could be changed, while keeping the accessible set the same. Specifically, for high delay values, we find that the learning rate should be kept inversely proportional to the delay. We then extend this analysis to include momentum. We find momentum should be either turned off, or modified to improve training stability. We provide empirical experiments to validate our theoretical findings.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors model A-SGD as a dynamical system, where parameters are updated with delayed gradients. The authors analyze the stability of this system, and they first derive that the learning rate must scale linearly with the inverse of the delay around a minimum to remain stable. Using a similar analysis they show that the standard way of incorporating momentum into A-SGD requires small learning rates for high momentum values, and they propose *shifted momentum,* which allows for stability under higher momentum values. 
Experimentally, the authors show that around minima the learning rate needed to retain stability scales linearly with the inverse of the delay, that there appears to be an analogous threshold when training models from scratch, that shifted momentum allows for higher momentum values, and finally that on several datasets A-SGD with an appropriate learning rate is able to generalize at least as well as large batch synchronous training. This is a nice paper with a large number of interesting theoretical and experimental results, and I believe it should be accepted. I think there are some largely presentational issues that should be addressed, however: - I think the authors should attempt to make a stronger case for the practical implications of their analysis: in particular, in the most practical setting (where we don*t have a minimum obtained from synchronous training), what does the provided analysis allow us to do? Part of this might involve being more explicit about the results in Table 1: what exactly was the procedure for selecting the learning rates? Is it meaningfully different than just lowering the learning rate? - Equation (3) is rather obscure without the Appendix, especially since unbolded x hasn*t been introduced anywhere. I think the authors should try to convey more of what*s going on in this equation in the main text. Minor: *looses* should be *loses* throughout, and it might be good to include a conclusion section.", "role": "assistant" } ]
[ "The authors model A-SGD as a dynamical system, where parameters are updated with delayed gradients.", "The authors analyze the stability of this system, and they first derive that the learning rate must scale linearly with the inverse of the delay around a minimum to remain stable.", "Using a similar analysis they show that the standard way of incorporating momentum into A-SGD requires small learning rates for high momentum values, and they propose *shifted momentum,* which allows for stability under higher momentum values.", "Experimentally, the authors show that around minima the learning rate needed to retain stability scales linearly with the inverse of the delay, that there appears to be an analogous threshold when training models from scratch, that shifted momentum allows for higher momentum values, and finally that on several datasets A-SGD with an appropriate learning rate is able to generalize at least as well as large batch synchronous training.", "This is a nice paper with a large number of interesting theoretical and experimental results, and I believe it should be accepted.", "I think there are some largely presentational issues that should be addressed, however: - I think the authors should attempt to make a stronger case for the practical implications of their analysis: in particular, in the most practical setting (where we don*t have a minimum obtained from synchronous training), what does the provided analysis allow us to do?", "Part of this might involve being more explicit about the results in Table 1: what exactly was the procedure for selecting the learning rates?", "Is it meaningfully different than just lowering the learning rate?", "- Equation (3) is rather obscure without the Appendix, especially since unbolded x hasn*t been introduced anywhere.", "I think the authors should try to convey more of what*s going on in this equation in the main text.", "Minor: *looses* should be *loses* throughout, and it might be good to include a conclusion section." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 7, "praise": 1, "presentation_and_reporting": 4, "results_and_discussion": 6, "suggestion_and_solution": 5, "total": 11 }
0.090909
0
0.181818
0.636364
0.090909
0.363636
0.545455
0.454545
11
1
0
1
1
1
1
1
1
2.363636
2.300507
0.06313
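The headline rule above, keep the learning rate roughly inversely proportional to the delay, can be checked on the simplest possible case: gradient descent on a one-dimensional quadratic where the gradient is computed from an iterate that is `delay` steps stale. The constants below are illustrative and the divergence test is crude; the paper derives the precise stability condition around a minimum.

```python
# Toy check: f(x) = 0.5 * x**2, gradient taken at a stale iterate.
def delayed_gd(lr, delay, steps=200):
    xs = [1.0] * (delay + 1)              # iterate history, most recent last
    for _ in range(steps):
        stale_grad = xs[-1 - delay]       # grad of 0.5*x^2 at the stale point
        xs.append(xs[-1] - lr * stale_grad)
    return abs(xs[-1])

for delay in (0, 4, 16):
    for lr in (0.5, 0.5 / (delay + 1)):   # fixed rate vs. rate scaled by 1/delay
        verdict = "diverges" if delayed_gd(lr, delay) > 1.0 else "stable"
        print(f"delay={delay:2d} lr={lr:.4f} -> {verdict}")
```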
Disentanglement through Nonlinear ICA with General Incompressible-flow Networks (GIN)
A central question of representation learning asks under which conditions it is possible to reconstruct the true latent variables of an arbitrarily complex generative process. Recent breakthrough work by Khemakhem et al. (2019) on nonlinear ICA has answered this question for a broad class of conditional generative processes. We extend this important result in a direction relevant for application to real-world data. First, we generalize the theory to the case of unknown intrinsic problem dimension and prove that in some special (but not very restrictive) cases, informative latent variables will be automatically separated from noise by an estimating model. Furthermore, the recovered informative latent variables will be in one-to-one correspondence with the true latent variables of the generating process, up to a trivial component-wise transformation. Second, we introduce a modification of the RealNVP invertible neural network architecture (Dinh et al. (2016)) which is particularly suitable for this type of problem: the General Incompressible-flow Network (GIN). Experiments on artificial data and EMNIST demonstrate that theoretical predictions are indeed verified in practice. In particular, we provide a detailed set of exactly 22 informative latent variables extracted from EMNIST.
Based on a recent work on identifying the joint distribution over observed and latent variables, the paper presents an extension of it, where the dimension of the latent space is not assumed to be known. Moreover, the authors propose a new neural network architecture based on previous propositions (RealNVP and NICE) to perform this identification. The paper*s presentation is relatively clear, although all the theoretical results are relegated to the appendix. The extension to unknown latent space dimension seems to be quite straightforward, given the recent work that this paper is based on. However, the experimental results performed on EMNIST are quite convincing and the results are interesting. Overall, I think this paper would be interesting to the ICLR audience. It seems currently that there is a need to choose a dimension to be large enough (i.e., larger than the true dimension of the latent space). I would have appreciated some discussion (and some experiments) if this dimension is chosen too small. Also, when the dimension is chosen large enough, the non-informative learned latent variables are determined by looking at their standard deviations. Do they always have to be small compared to the informative latent variables?
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDisentanglement through Nonlinear ICA with General Incompressible-flow Networks (GIN)\nA central question of representation learning asks under which conditions it is possible to reconstruct the true latent variables of an arbitrarily complex generative process. Recent breakthrough work by Khemakhem et al. (2019) on nonlinear ICA has answered this question for a broad class of conditional generative processes. We extend this important result in a direction relevant for application to real-world data. First, we generalize the theory to the case of unknown intrinsic problem dimension and prove that in some special (but not very restrictive) cases, informative latent variables will be automatically separated from noise by an estimating model. Furthermore, the recovered informative latent variables will be in one-to-one correspondence with the true latent variables of the generating process, up to a trivial component-wise transformation. Second, we introduce a modification of the RealNVP invertible neural network architecture (Dinh et al. (2016)) which is particularly suitable for this type of problem: the General Incompressible-flow Network (GIN). Experiments on artificial data and EMNIST demonstrate that theoretical predictions are indeed verified in practice. In particular, we provide a detailed set of exactly 22 informative latent variables extracted from EMNIST.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nBased on a recent work on identifying the joint distribution over observed and latent variables, the paper presents an extension of it, where the dimension of the latent space is not assumed to be known. Moreover, the authors propose a new neural network architecture based on previous propositions (RealNVP and NICE) to perform this identification. The paper*s presentation is relatively clear, although all the theoretical results are relegated to the appendix. The extension to unknown latent space dimension seems to be quite straightforward, given the recent work that this paper is based on. 
However, the experimental results performed on EMNIST are quite convincing and the results are interesting. Overall, I think this paper would be interesting to the ICLR audience. It currently seems that the dimension needs to be chosen large enough (i.e., larger than the true dimension of the latent space). I would have appreciated some discussion (and some experiments) on what happens if this dimension is chosen too small. Also, when the dimension is chosen large enough, the non-informative learned latent variables are determined by looking at their standard deviations. Do they always have to be small compared to the informative latent variables?", "role": "assistant" } ]
[ "Based on a recent work on identifying the joint distribution over observed and latent variables, the paper presents an extension of it, where the dimension of the latent space is not assumed to be known.", "Moreover, the authors propose a new neural network architecture based on previous propositions (RealNVP and NICE) to perform this identification.", "The paper*s presentation is relatively clear, although all the theoretical results are relegated to the appendix.", "The extension to unknown latent space dimension seems to be quite straightforward, given the recent work that this paper is based on.", "However, the experimental results performed on EMNIST are quite convincing and the results are interesting.", "Overall, I think this paper would be interesting to the ICLR audience.", "It seems currently that there is a need to choose a dimension to be large enough (i.e., larger than the true dimension of the latent space).", "I would have appreciated some discussion (and some experiments) if this dimension is chosen too small.", "Also, when the dimension is chosen large enough, the non-informative learned latent variables are determined by looking at their standard deviations.", "Do they always have to be small compared to the informative latent variables?" ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 4, "materials_and_methods": 6, "praise": 4, "presentation_and_reporting": 2, "results_and_discussion": 4, "suggestion_and_solution": 2, "total": 10 }
0.1
0
0.4
0.6
0.4
0.2
0.4
0.2
10
1
0
1
1
1
1
1
1
2.3
2.157958
0.142042
Improving Generalization in Meta Reinforcement Learning using Neural Objectives
Biological evolution has distilled the experiences of many learners into the general learning algorithms of humans. Our novel meta-reinforcement learning algorithm MetaGenRL is inspired by this process. MetaGenRL distills the experiences of many complex agents to meta-learn a low-complexity neural objective function that affects how future individuals will learn. Unlike recent meta-RL algorithms, MetaGenRL can generalize to new environments that are entirely different from those used for meta-training. In some cases, it even outperforms human-engineered RL algorithms. MetaGenRL uses off-policy second-order gradients during meta-training that greatly increase its sample efficiency.
The paper proposes a meta reinforcement learning algorithm called MetaGenRL, which meta-learns learning rules that generalize to different environments. The paper makes an important observation: the learning rules used to train reinforcement learning agents are the result of human engineering and design. Instead, the paper demonstrates how to use second-order gradients to learn the learning rules that train agents. Learning learning rules in general has been proposed before, and this paper is another attempt to further generalize what can be learned in the learning rules. The idea is verified on three MuJoCo domains, where the neural objective function is learned from one or two domains, then deployed to a new unseen domain. The experiments show that the learned neural objective can generalize to new environments that are different from the meta-training environments. Overall, this is a novel paper with clear motivation, and I like it a lot! I hope the authors can address the following concerns and make the paper even better: 1. The current experimental setup is a great proof of concept; however, it seems a bit limited to support the claims in the paper. Meta-training uses at most two environments, and generalization of the neural objective function is only evaluated on one environment. It would be great if the authors could show more results with more meta-training environments (say, 10 meta-training environments) and more meta-testing environments (the current setup uses only one); 2. The paper hypothesizes that an LSTM, as a general function approximator, is in principle able to learn variance and bias reduction techniques. However, in practice, due to learning dynamics and many other factors, this is not necessarily true, i.e., how many samples are required for an LSTM to learn such techniques is unclear. At the same time, the section "Dependence on V" on Page 8 actually serves as an example of the LSTM failing to figure out an effective variance-reduction method during the short meta-training time. The authors may want to say more about the learnability of bias-variance trade-off techniques. Notation issues that could be further improved: 1. Page 2, the "Notation" section and all of the subsequent time indexing. Note that in Equation (1), r(s_1, a_t) has discount gamma^1, which is not correct; I'd recommend the authors follow time indexing starting from 0 so that Equation (1) is correct. (Alternatively, the authors could change gamma^t into gamma^{t-1}); 2. The section "Human Engineered Gradient Estimators" is missing the formal introduction of the notation \tau; 3. Overall, the authors seem to use Phi and \theta interchangeably; it's better to use a unified notation across the paper; 4. The authors choose alpha to represent the neural net that learns the objective function; to make it clearer for readers, the authors could consider changing alpha into eta, because alpha is often used as the learning rate notation; 5. I'd suggest the authors rewrite the paragraph on Page 3, "MetaGenRL builds on this idea of ...., using L_alpha on the estimated return". This describes a key step in the algorithm, but at the moment it is not very clear to the reader what is going on there; 6. Section 3.1 is missing a step to go from Q to V; 7. The authors could consider describing the details of the algorithm in a more general actor-critic form, instead of starting from the DDPG formulation.
This would make the methods more generally applicable (for example, extension to discrete action spaces).
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nImproving Generalization in Meta Reinforcement Learning using Neural Objectives\nBiological evolution has distilled the experiences of many learners into the general learning algorithms of humans. Our novel meta-reinforcement learning algorithm MetaGenRL is inspired by this process. MetaGenRL distills the experiences of many complex agents to meta-learn a low-complexity neural objective function that affects how future individuals will learn. Unlike recent meta-RL algorithms, MetaGenRL can generalize to new environments that are entirely different from those used for meta-training. In some cases, it even outperforms human-engineered RL algorithms. MetaGenRL uses off-policy second-order gradients during meta-training that greatly increase its sample efficiency.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes a meta reinforcement learning algorithm called MetaGenRL, which meta-learns learning rules to generalize to different environments. The paper poses an important observation where learning rules in reinforcement learning to train the agents are results of human engineering and design, instead, the paper demonstrates how to use second-order gradients to learn learning rules to train agents. Learning learning rules in general has been proposed and this paper is another attempt to further generalize what could be learned in the learning rules. The idea is verified on three Mujoco domains, where the neural objective function is learned from one / two domains, then deployed to a new unseen domain. The experiments show that the learned neural objective can generalize to new environments which are different from the meta-training environments. Overall, the paper is a novel paper and with clear motivation, I like the paper a lot! Hope that the authors could address the following concerns and make the paper even better: 1. The current experiment setup is a great proof-of-concept, however it seems a bit limited to support the claims in the paper. 
The meta-training has only at most two environments and the generalization of the neural objective function is only performed at one environment. It would be great if the authors could show more results with more meta-training environments (say, 10 meta-training environments) and more meta-testing environments (the current setup is only with one); 2. The paper states a hypothesis that LSTM as a general function approximator, it is in principle able to learn variance and bias reduction techniques. However, in practice, due to learning dynamics and many other factors, it*s not necessary true, i.e., how many samples are required for an LSTM to learn such technique is unclear. At the same time, at Page 8, Section *Dependence on V* actually acts as an example of LSTM couldn*t figure out an effective variance-reduction method during the short meta-training time. The authors may want to put more words around the learnability of variance-bias trade-off techniques. Notation issues which could be further improved: 1. Page 2, *Notation* section and all of the following time indexing. Note that in Equation (1), r(s_1, a_t) has discount gamma^1, which is not true, I*d recommend the authors to follow the time indexing starting from 0, so that the Equation (1) is correct. (Alternatively, the authors could change from gamma^t into gamma^{t-1}); 2. Section *Human Engineered Gradient Estimators* is missing the formal introduction of the notation \tau; 3. Overall, the authors seem to use Phi and \theta interchangeably, it*s better to use a unified notation across the paper; 4. In the paper, the authors choose alpha to represent the neural net for learning the objective function, to make it clearer for the readers, the authors could consider to change alpha into eta, because alpha is often considered as learning rate notation; 5. I*d suggest the authors to rewrite the paragraph in Page 3 *MetaGrenRL builds on this idea of ...., using L_alpha on the estimated return*. This describes a key step in the algorithm while at the moment it*s not very clear to the readers what*s going on there; 6. Section 3.1 is missing a step to go from Q into V; 7. The authors could consider to describe the details of the algorithms in a more general actor-critic form, instead of starting from DDPG formulation. It would make the methods more general applicable (for example, extension to discrete action space).", "role": "assistant" } ]
[ "The paper proposes a meta reinforcement learning algorithm called MetaGenRL, which meta-learns learning rules to generalize to different environments.", "The paper poses an important observation where learning rules in reinforcement learning to train the agents are results of human engineering and design, instead, the paper demonstrates how to use second-order gradients to learn learning rules to train agents.", "Learning learning rules in general has been proposed and this paper is another attempt to further generalize what could be learned in the learning rules.", "The idea is verified on three Mujoco domains, where the neural objective function is learned from one / two domains, then deployed to a new unseen domain.", "The experiments show that the learned neural objective can generalize to new environments which are different from the meta-training environments.", "Overall, the paper is a novel paper and with clear motivation, I like the paper a lot!", "Hope that the authors could address the following concerns and make the paper even better: 1.", "The current experiment setup is a great proof-of-concept, however it seems a bit limited to support the claims in the paper.", "The meta-training has only at most two environments and the generalization of the neural objective function is only performed at one environment.", "It would be great if the authors could show more results with more meta-training environments (say, 10 meta-training environments) and more meta-testing environments (the current setup is only with one); 2.", "The paper states a hypothesis that LSTM as a general function approximator, it is in principle able to learn variance and bias reduction techniques.", "However, in practice, due to learning dynamics and many other factors, it*s not necessary true, i.e., how many samples are required for an LSTM to learn such technique is unclear.", "At the same time, at Page 8, Section *Dependence on V* actually acts as an example of LSTM couldn*t figure out an effective variance-reduction method during the short meta-training time.", "The authors may want to put more words around the learnability of variance-bias trade-off techniques.", "Notation issues which could be further improved: 1.", "Page 2, *Notation* section and all of the following time indexing.", "Note that in Equation (1), r(s_1, a_t) has discount gamma^1, which is not true, I*d recommend the authors to follow the time indexing starting from 0, so that the Equation (1) is correct.", "(Alternatively, the authors could change from gamma^t into gamma^{t-1}); 2.", "Section *Human Engineered Gradient Estimators* is missing the formal introduction of the notation \tau; 3.", "Overall, the authors seem to use Phi and \theta interchangeably, it*s better to use a unified notation across the paper; 4.", "In the paper, the authors choose alpha to represent the neural net for learning the objective function, to make it clearer for the readers, the authors could consider to change alpha into eta, because alpha is often considered as learning rate notation; 5.", "I*d suggest the authors to rewrite the paragraph in Page 3 *MetaGrenRL builds on this idea of ...., using L_alpha on the estimated return*.", "This describes a key step in the algorithm while at the moment it*s not very clear to the readers what*s going on there; 6.", "Section 3.1 is missing a step to go from Q into V; 7.", "The authors could consider to describe the details of the algorithms in a more general actor-critic form, instead of starting from DDPG 
formulation.", "It would make the methods more general applicable (for example, extension to discrete action space)." ]
{ "criticism": 6, "example": 3, "importance_and_relevance": 3, "materials_and_methods": 15, "praise": 3, "presentation_and_reporting": 9, "results_and_discussion": 6, "suggestion_and_solution": 11, "total": 26 }
0.230769
0.115385
0.115385
0.576923
0.115385
0.346154
0.230769
0.423077
26
1
1
1
1
1
1
1
1
2.153846
-0.513384
2.667231
SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models
The standard variational lower bounds used to train latent variable models produce biased estimates of most quantities of interest. We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models based on randomized truncation of infinite series. If parameterized by an encoder-decoder architecture, the parameters of the encoder can be optimized to minimize its variance of this estimator. We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling based approach for the same average computational cost. This estimator also allows use of latent variable models for tasks where unbiased estimators, rather than marginal likelihood lower bounds, are preferred, such as minimizing reverse KL divergences and estimating score functions.
The authors consider unbiased estimation of the log marginal likelihood (evidence) after integrating out the latent variable. On top of the importance-weighted autoencoder (IWAE), which is only guaranteed to be asymptotically unbiased, the authors propose to use the Russian Roulette estimator (RRE) to compensate for the bias caused by the finite summation. The proposed method is interesting and can be applied to many other estimators with properties similar to Eq. (6). Bias compensation using RRE is interesting, but there is likely a sizable prior literature that takes advantage of RRE to improve estimators. The authors need to be thorough in presenting previous research and explaining how their contribution is distinguished from it. The authors show synthetic and real applications of the estimator, but one concern is the variance. Unbiasedness with finite samples often fails because of the variance, and regularization is often more useful than correcting the bias (although if unbiasedness is important, regularization definitely breaks it). The only discussion of variance is a few lines on page 3 after Eq. (7), and it is unclear how the variance problem is mitigated or why the method does not suffer from high variance. In general, this paper is well written and deals with an important problem using an interesting method. However, the analysis for understanding the advantages of the proposed method is insufficient.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models\nThe standard variational lower bounds used to train latent variable models produce biased estimates of most quantities of interest. We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models based on randomized truncation of infinite series. If parameterized by an encoder-decoder architecture, the parameters of the encoder can be optimized to minimize its variance of this estimator. We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling based approach for the same average computational cost. This estimator also allows use of latent variable models for tasks where unbiased estimators, rather than marginal likelihood lower bounds, are preferred, such as minimizing reverse KL divergences and estimating score functions.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors consider the unbiased estimation of log marginal likelihood (evidence) after integration of latent variable. On top of the importance-weighted autoencoder (IWAE), which is only guaranteed to be asymptotically unbiased, the authors propose to use Russian Roulette estimator (RRE) to compensate the bias caused by the finite summation. The proposed method is interesting and can be applied in many other estimators with similar properties as Eq. (6). Bias compensation using RRE is interesting, but it seems there must be many literatures that took advantage of using RRE to improve estimators. The authors have to be thorough in presenting previous research and explaining the authors’ contribution that is distinguished from those. The authors showed synthetic and real application of the estimator, but one concern is the variance. Unbiasedness with finite samples often fails because of the variance, and regularization is often useful rather than correcting bias---If unbiasedness in important, regularization definitely breaks the unbiasedness---. 
The only discussion about variance is a few lines in page 3 after Eq. (7), but it is unclear how the variance problem is mitigated or why the problem does not suffer high variance. In general, this paper is well-written and dealing with important problem with interesting method. Several analysis for understanding the advantages of using the proposed method is insufficient.", "role": "assistant" } ]
[ "The authors consider the unbiased estimation of log marginal likelihood (evidence) after integration of latent variable.", "On top of the importance-weighted autoencoder (IWAE), which is only guaranteed to be asymptotically unbiased, the authors propose to use Russian Roulette estimator (RRE) to compensate the bias caused by the finite summation.", "The proposed method is interesting and can be applied in many other estimators with similar properties as Eq.", "(6).", "Bias compensation using RRE is interesting, but it seems there must be many literatures that took advantage of using RRE to improve estimators.", "The authors have to be thorough in presenting previous research and explaining the authors’ contribution that is distinguished from those.", "The authors showed synthetic and real application of the estimator, but one concern is the variance.", "Unbiasedness with finite samples often fails because of the variance, and regularization is often useful rather than correcting bias---If unbiasedness in important, regularization definitely breaks the unbiasedness---.", "The only discussion about variance is a few lines in page 3 after Eq.", "(7), but it is unclear how the variance problem is mitigated or why the problem does not suffer high variance.", "In general, this paper is well-written and dealing with important problem with interesting method.", "Several analysis for understanding the advantages of using the proposed method is insufficient." ]
{ "criticism": 2, "example": 1, "importance_and_relevance": 4, "materials_and_methods": 9, "praise": 4, "presentation_and_reporting": 3, "results_and_discussion": 6, "suggestion_and_solution": 2, "total": 12 }
0.166667
0.083333
0.333333
0.75
0.333333
0.25
0.5
0.166667
12
1
1
1
1
1
1
1
1
2.583333
2.567551
0.015782
Unbiased Contrastive Divergence Algorithm for Training Energy-Based Latent Variable Models
The contrastive divergence algorithm is a popular approach to training energy-based latent variable models, which has been widely used in many machine learning models such as the restricted Boltzmann machines and deep belief nets. Despite its empirical success, the contrastive divergence algorithm is also known to have biases that severely affect its convergence. In this article we propose an unbiased version of the contrastive divergence algorithm that completely removes its bias in stochastic gradient methods, based on recent advances on unbiased Markov chain Monte Carlo methods. Rigorous theoretical analysis is developed to justify the proposed algorithm, and numerical experiments show that it significantly improves the existing method. Our findings suggest that the unbiased contrastive divergence algorithm is a promising approach to training general energy-based latent variable models.
The paper introduces an efficient, unbiased contrastive divergence-like algorithm for training energy-based generative models, using the Restricted Boltzmann Machine as the running example. The proposed algorithm is built upon very interesting work on unbiased finite-step MCMC approximations by Jacob et al., 2017. Despite the underlying theory being published some time ago, the submitted paper popularises these ideas in the machine learning community and contains optimised variants of the existing algorithms for training RBMs. The paper is mostly well written and does a good job of introducing unbiased MCMC estimators. The authors evaluate their method on rather toyish datasets (by modern standards); however, their empirical analysis is thorough. The improvement upon standard CD and persistent CD is clear. It also appears that the algorithm does not require too many steps and generally does not introduce a lot of computational overhead. The only question I have is why CD has only been tried with k=1 steps? I would be interested in its performance for different numbers of steps, including the dynamically chosen number provided by the empirical \tau in UCD for a given iteration. Even though I do not expect a significant improvement to be obtained, this would separate the effect of choosing the number of steps “right” from the unbiasedness of the gradient estimator. Other baselines, including those mentioned in the related work, could also make the comparison more complete. I would also suggest including https://arxiv.org/abs/1905.04062, as it seems to be relevant in spirit.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nUnbiased Contrastive Divergence Algorithm for Training Energy-Based Latent Variable Models\nThe contrastive divergence algorithm is a popular approach to training energy-based latent variable models, which has been widely used in many machine learning models such as the restricted Boltzmann machines and deep belief nets. Despite its empirical success, the contrastive divergence algorithm is also known to have biases that severely affect its convergence. In this article we propose an unbiased version of the contrastive divergence algorithm that completely removes its bias in stochastic gradient methods, based on recent advances on unbiased Markov chain Monte Carlo methods. Rigorous theoretical analysis is developed to justify the proposed algorithm, and numerical experiments show that it significantly improves the existing method. Our findings suggest that the unbiased contrastive divergence algorithm is a promising approach to training general energy-based latent variable models.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper introduces an efficient, unbiased contrastive divergence-like algorithm for training energy-based generative models on the example of Restricted Boltzmann Machine. The proposed algorithm is built upon a very interesting work on unbiased finite-step MCMC approximations by Jacob et al, 2017. Despite the actual theory being published some time ago, the submitted paper popularises these ideas in the machine learning community and contains optimised variants of the existing algorithms for training of RBMs. The paper is mostly written well and does a good job of introducing unbiased MCMC estimators. Authors evaluate their method on rather toyish datasets (by modern standards), however, their empirical analysis is thorough. The improvement upon the standard CD and persistent CD is clear. It also appears that the algorithm actually does not require too many steps and generally does not introduce a lot of computational overhead. 
The only question I have is why CD has only been tried with k=1 steps? I would be interested in its performance for different number of steps including the dynamically chosen number provided by the empirical \tau in UCD for a given iteration. Even though I do not expect a significant improvement to be obtained, this would separate the effect of the number of steps chosen “right” from unbiasedness of the gradient estimator. Other baselines, including those mentioned in the related work, could also make the comparison more complete. I would also suggest including https://arxiv.org/abs/1905.04062, as it seems to be relevant in the spirit.", "role": "assistant" } ]
[ "The paper introduces an efficient, unbiased contrastive divergence-like algorithm for training energy-based generative models on the example of Restricted Boltzmann Machine.", "The proposed algorithm is built upon a very interesting work on unbiased finite-step MCMC approximations by Jacob et al, 2017.", "Despite the actual theory being published some time ago, the submitted paper popularises these ideas in the machine learning community and contains optimised variants of the existing algorithms for training of RBMs.", "The paper is mostly written well and does a good job of introducing unbiased MCMC estimators.", "Authors evaluate their method on rather toyish datasets (by modern standards), however, their empirical analysis is thorough.", "The improvement upon the standard CD and persistent CD is clear.", "It also appears that the algorithm actually does not require too many steps and generally does not introduce a lot of computational overhead.", "The only question I have is why CD has only been tried with k=1 steps?", "I would be interested in its performance for different number of steps including the dynamically chosen number provided by the empirical \tau in UCD for a given iteration.", "Even though I do not expect a significant improvement to be obtained, this would separate the effect of the number of steps chosen “right” from unbiasedness of the gradient estimator.", "Other baselines, including those mentioned in the related work, could also make the comparison more complete.", "I would also suggest including https://arxiv.org/abs/1905.04062, as it seems to be relevant in the spirit." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 11, "praise": 6, "presentation_and_reporting": 1, "results_and_discussion": 2, "suggestion_and_solution": 3, "total": 12 }
0.083333
0
0.166667
0.916667
0.5
0.083333
0.166667
0.25
12
1
0
1
1
1
1
1
1
2.166667
2.150884
0.015782
A Mutual Information Maximization Perspective of Language Representation Learning
We show state-of-the-art word representation learning methods maximize an objective function that is a lower bound on the mutual information between different parts of a word sequence (i.e., a sentence). Our formulation provides an alternative perspective that unifies classical word embedding models (e.g., Skip-gram) and modern contextual embeddings (e.g., BERT, XLNet). In addition to enhancing our theoretical understanding of these methods, our derivation leads to a principled framework that can be used to construct new self-supervised tasks. We provide an example by drawing inspirations from related methods based on mutual information maximization that have been successful in computer vision, and introduce a simple self-supervised objective that maximizes the mutual information between a global sentence representation and n-grams in the sentence. Our analysis offers a holistic view of representation learning methods to transfer knowledge and translate progress across multiple domains (e.g., natural language processing, computer vision, audio processing).
The paper proposes to make a clear connection between the InfoNCE learning objective (which is a lower bound on the mutual information) and multiple language models such as BERT and XLNet. Then, based on the observation that classical language models can be seen as instances of InfoNCE, they propose a new model (InfoWord) relying on the same principles, but taking inspiration from other models also based on InfoNCE. Mainly, the proposed model differs both in the nature of the a and b variables used in InfoNCE and in the fact that it uses negative sampling instead of a softmax. Experiments are conducted on two tasks and compared to a classical BERT model and to BERT-NCE, a BERT variant proposed by the authors which sits somewhere between BERT and InfoWord. They show that their approach works quite well. I have a mixed opinion on the paper. I) First, I really like the idea of trying to unify different models under the same learning principles, and then showing that these models can be seen as specific instances of generic principles. But the way it is presented and explained lacks clarity: for instance, in Section 2 some notations are not well defined (e.g., what is f?). Moreover, the way classical models are cast under the InfoNCE principle is poorly written: it assumes that readers have a very good knowledge of the models, and the paper does not show well the mapping between the loss function of each model and the InfoNCE criterion. It gives technical details that could (in my opinion) be omitted, and I would clearly prefer to grasp the main differences between the models rather than being flooded by technical details. So, my suggestion would be to improve the writing of this section to make the message stronger and relevant to a larger audience. II) The InfoWord model can be seen as a simple instance of word-masking-based models, and as an extension of Deep InfoMax to sequences (it would certainly be nice to briefly describe what Deep InfoMax is to facilitate the reading). Here again, the article jumps into technical details (e.g., "hidden state of the first token (assumed to be a special start of sentence symbol)") without providing formal definitions. Giving a first loss function after paragraph 4 could help the reader understand the principle of this model (before restricting the model to n-grams). Moreover, the equation J_DIM seems to be wrong since it contains g_omega twice, while I think (but maybe I am wrong) that it should also involve g_psi. J_MLM is also not clear since x_i is never defined (I assume it is x_{i:i}). Finally, after unifying multiple models under one common learning objective, the authors propose to mix two different losses, which is strange (the effect of the second term is only briefly studied in the experimental section), without allowing us to understand why it is important to have this second loss function and why the first one is not sufficient. Also, I am pretty sure I would not be able to reproduce the model described in the paper (adding a section on that in the supplementary material would help), and many concrete aspects are described too quickly (such as the way negative pairs are sampled). Concerning the experimental section, the experiments are convincing and show that the model achieves performance close to classical models. In my opinion, this section has to be interpreted as proof that the proposed unified vision is a good way to easily define new and efficient models.
To summarize, the unification under the InfoNCE principle is interesting, but the way the paper is written makes it very difficult to follow, and the description of the proposed model is unclear (making the experiments difficult to reproduce) and lacks a deeper discussion of the benefit of mixing multiple losses.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nA Mutual Information Maximization Perspective of Language Representation Learning\nWe show state-of-the-art word representation learning methods maximize an objective function that is a lower bound on the mutual information between different parts of a word sequence (i.e., a sentence). Our formulation provides an alternative perspective that unifies classical word embedding models (e.g., Skip-gram) and modern contextual embeddings (e.g., BERT, XLNet). In addition to enhancing our theoretical understanding of these methods, our derivation leads to a principled framework that can be used to construct new self-supervised tasks. We provide an example by drawing inspirations from related methods based on mutual information maximization that have been successful in computer vision, and introduce a simple self-supervised objective that maximizes the mutual information between a global sentence representation and n-grams in the sentence. Our analysis offers a holistic view of representation learning methods to transfer knowledge and translate progress across multiple domains (e.g., natural language processing, computer vision, audio processing).", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes to make a clear connection between the InfoNCE learning objective (which is a lower bound of the mutual information) and multiple language models like BERT and XLN. Then based on the observation that classical LM can be seen as instances of InfoNCE, they propose a new (InfoWord) model relying on the same principles, but taking inspiration from other models also based on InfoNCE. Mainly, the proposed model differs both in the nature of the a and b variables used in InfoNCE, and also on the fact that it uses negative sampling instead of softmax. Experiments are made on two tasks and compared to a classical BERT model, and on the BERT-NCE model that is a BERT variant proposed by the authors which is somehow in-between BERT and InfoWord. They show that their approach works quite well. 
I have a very mitigated opinion on the paper. I) First, I really like the idea of trying to unify different models under the same learning principles, and then show that these models can be seen as specific instances of generic principles. But the way it is presented and explained lacks of clarity: for instance in Section 2, some notations are not well defined (e.g what is f?) . Moreover, the way classical models are casted under the InfoNCE principle is badly written: it assumes that readers have a very good knowledge of the models, and the paper does not show well the mapping between the loss function of each model and the InfoNCE criterion. It gives technical details that could (in my opinion) get ignored, and I would clearly prefer to catch the main differences between the different models that being flooded by technical details. So, my suggestion would be to improve the writing of this section to make the message stronger and relevant for a larger audience. II) The Infoword model can be seen as a simple instance of word masking based models, and as an extension of deep infomax for sequences (it would be certainly nice to describe a little bit what Deep InfoMax is to facilitate the reading). Here again, the article moves from technical details (e.g *hidden state of the first token (assumed to be a special start of sentence symbol *) without providing formal definitions. Having a first loss function after paragraph 4 could help to understand the principle of this model (before restricting the model to n-grams). Moreover, the equation J_DIM seems to be wrong since it contains g_omega twice while I think (but maybe I am wrong) that it has also to be defined by g_psi. J_MLM is also not clear since x_i is never defined (I assume it is x_{i:i}). At last, after unifying multiple models under one common learning objective, the authors propose to mix two different losses which is strange (the effect of the second term is slightly studied in the experimental section) without allowing us to understand why it is important to have this second loss function and why the first one is not sufficient enough. At last, I am pretty sure to not be able to reproduce the model described in the paper (adding a section on that in the supplementary material would help), and many concrete aspects are described too fast (like the way to sample negative pairs). Concerning the experimental section, experiments are convincing and show that the model is able to achieve a performance which is close to classical models. In my opinion, tis section has to be interpreted as a proof that the proposed unified vision is a good way to easily define new and efficient models. To summarize, the unification under the InfoNCE principle is interesting, but the way the paper is written makes it very difficult to follow, and the description of the proposed model is unclear (making the experiments difficult to reproduce) and lacks of a better discussion about the interest of mixing multiple loss.", "role": "assistant" } ]
[ "The paper proposes to make a clear connection between the InfoNCE learning objective (which is a lower bound of the mutual information) and multiple language models like BERT and XLN.", "Then based on the observation that classical LM can be seen as instances of InfoNCE, they propose a new (InfoWord) model relying on the same principles, but taking inspiration from other models also based on InfoNCE.", "Mainly, the proposed model differs both in the nature of the a and b variables used in InfoNCE, and also on the fact that it uses negative sampling instead of softmax.", "Experiments are made on two tasks and compared to a classical BERT model, and on the BERT-NCE model that is a BERT variant proposed by the authors which is somehow in-between BERT and InfoWord.", "They show that their approach works quite well.", "I have a very mitigated opinion on the paper.", "I) First, I really like the idea of trying to unify different models under the same learning principles, and then show that these models can be seen as specific instances of generic principles.", "But the way it is presented and explained lacks of clarity: for instance in Section 2, some notations are not well defined (e.g what is f?)", ".", "Moreover, the way classical models are casted under the InfoNCE principle is badly written: it assumes that readers have a very good knowledge of the models, and the paper does not show well the mapping between the loss function of each model and the InfoNCE criterion.", "It gives technical details that could (in my opinion) get ignored, and I would clearly prefer to catch the main differences between the different models that being flooded by technical details.", "So, my suggestion would be to improve the writing of this section to make the message stronger and relevant for a larger audience.", "II) The Infoword model can be seen as a simple instance of word masking based models, and as an extension of deep infomax for sequences (it would be certainly nice to describe a little bit what Deep InfoMax is to facilitate the reading).", "Here again, the article moves from technical details (e.g *hidden state of the first token (assumed to be a special start of sentence symbol *) without providing formal definitions.", "Having a first loss function after paragraph 4 could help to understand the principle of this model (before restricting the model to n-grams).", "Moreover, the equation J_DIM seems to be wrong since it contains g_omega twice while I think (but maybe I am wrong) that it has also to be defined by g_psi.", "J_MLM is also not clear since x_i is never defined (I assume it is x_{i:i}).", "At last, after unifying multiple models under one common learning objective, the authors propose to mix two different losses which is strange (the effect of the second term is slightly studied in the experimental section) without allowing us to understand why it is important to have this second loss function and why the first one is not sufficient enough.", "At last, I am pretty sure to not be able to reproduce the model described in the paper (adding a section on that in the supplementary material would help), and many concrete aspects are described too fast (like the way to sample negative pairs).", "Concerning the experimental section, experiments are convincing and show that the model is able to achieve a performance which is close to classical models.", "In my opinion, tis section has to be interpreted as a proof that the proposed unified vision is a good way to easily define new and efficient 
models.", "To summarize, the unification under the InfoNCE principle is interesting, but the way the paper is written makes it very difficult to follow, and the description of the proposed model is unclear (making the experiments difficult to reproduce) and lacks of a better discussion about the interest of mixing multiple loss." ]
{ "criticism": 8, "example": 2, "importance_and_relevance": 4, "materials_and_methods": 14, "praise": 5, "presentation_and_reporting": 10, "results_and_discussion": 2, "suggestion_and_solution": 5, "total": 22 }
0.363636
0.090909
0.181818
0.636364
0.227273
0.454545
0.090909
0.227273
22
1
1
1
1
1
1
1
1
2.272727
0.99435
1.278377
Reconstructing continuous distributions of 3D protein structure from cryo-EM images
Cryo-electron microscopy (cryo-EM) is a powerful technique for determining the structure of proteins and other macromolecular complexes at near-atomic resolution. In single particle cryo-EM, the central problem is to reconstruct the 3D structure of a macromolecule from noisy and randomly oriented 2D projection images. However, the imaged protein complexes may exhibit structural variability, which complicates reconstruction and is typically addressed using discrete clustering approaches that fail to capture the full range of protein dynamics. Here, we introduce a novel method for cryo-EM reconstruction that extends naturally to modeling continuous generative factors of structural heterogeneity. This method encodes structures in Fourier space using coordinate-based deep neural networks, and trains these networks from unlabeled 2D cryo-EM images by combining exact inference over image orientation with variational inference for structural heterogeneity. We demonstrate that the proposed method, termed cryoDRGN, can perform ab-initio reconstruction of 3D protein complexes from simulated and real 2D cryo-EM image data. To our knowledge, cryoDRGN is the first neural network-based approach for cryo-EM reconstruction and the first end-to-end method for directly reconstructing continuous ensembles of protein structures from cryo-EM images.
The authors introduce cryoDRGN, a VAE neural network architecture to reconstruct 3D protein structure from 2D cryo-EM images. The paper makes for a good read and the diagrams are informative. Below are comments for improvement and clarification. > Consider explaining cryoSPARC in detail, given that it is the state-of-the-art technique to which all the cryoDRGN results are compared. > In Figure 4 and the related experiment, a) how are the cryoSPARC volumes related to the cryoDRGN volumes, and b) what do the clusters mean in cryoSPARC and how do they compare with the corresponding outputs of cryoDRGN? > What would runtime comparisons be for cryoSPARC and cryoDRGN, for an unsupervised heterogeneous reconstruction?
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nReconstructing continuous distributions of 3D protein structure from cryo-EM images\nCryo-electron microscopy (cryo-EM) is a powerful technique for determining the structure of proteins and other macromolecular complexes at near-atomic resolution. In single particle cryo-EM, the central problem is to reconstruct the 3D structure of a macromolecule from noisy and randomly oriented 2D projection images. However, the imaged protein complexes may exhibit structural variability, which complicates reconstruction and is typically addressed using discrete clustering approaches that fail to capture the full range of protein dynamics. Here, we introduce a novel method for cryo-EM reconstruction that extends naturally to modeling continuous generative factors of structural heterogeneity. This method encodes structures in Fourier space using coordinate-based deep neural networks, and trains these networks from unlabeled 2D cryo-EM images by combining exact inference over image orientation with variational inference for structural heterogeneity. We demonstrate that the proposed method, termed cryoDRGN, can perform ab-initio reconstruction of 3D protein complexes from simulated and real 2D cryo-EM image data. To our knowledge, cryoDRGN is the first neural network-based approach for cryo-EM reconstruction and the first end-to-end method for directly reconstructing continuous ensembles of protein structures from cryo-EM images.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors introduce cryoDRGN, a VAE neural network architecture to reconstruct 3D protein structure from 2D cryo-EM images. The paper offers for a good read and diagrams are informative. Below are comments for improvement and clarification. > Consider explaining cryoSPARC in detail given that is the state-of-the-art technique and to which all the cryoDGRN results are compared. 
> In Figure 4 and the related experiment, how are a) the cryoSPARK volumes related to cryoDRGN volumes, b) what do the clusters mean in cryoSPARK and how do they compare with the corresponding outputs of cryoDRGN > What would runtime comparisons be for cryoSPARK and cryoDGRN, for an unsupervised heteregeneous reconstruction?", "role": "assistant" } ]
[ "The authors introduce cryoDRGN, a VAE neural network architecture to reconstruct 3D protein structure from 2D cryo-EM images.", "The paper offers for a good read and diagrams are informative.", "Below are comments for improvement and clarification.", "> Consider explaining cryoSPARC in detail given that is the state-of-the-art technique and to which all the cryoDGRN results are compared.", "> In Figure 4 and the related experiment, how are a) the cryoSPARK volumes related to cryoDRGN volumes, b) what do the clusters mean in cryoSPARK and how do they compare with the corresponding outputs of cryoDRGN > What would runtime comparisons be for cryoSPARK and cryoDGRN, for an unsupervised heteregeneous reconstruction?" ]
{ "criticism": 0, "example": 1, "importance_and_relevance": 0, "materials_and_methods": 3, "praise": 1, "presentation_and_reporting": 3, "results_and_discussion": 2, "suggestion_and_solution": 2, "total": 5 }
0
0.2
0
0.6
0.2
0.6
0.4
0.4
5
0
1
0
1
1
1
1
1
2.4
1.389924
1.010076
AN EXPONENTIAL LEARNING RATE SCHEDULE FOR BATCH NORMALIZED NETWORKS
Intriguing empirical evidence exists that deep learning can work well with exotic schedules for varying the learning rate. This paper suggests that the phenomenon may be due to Batch Normalization or BN (Ioffe & Szegedy, 2015), which is ubiquitous and provides benefits in optimization and generalization across all standard architectures. The following new results are shown about BN with weight decay and momentum (in other words, the typical use case which was not considered in earlier theoretical analyses of stand-alone BN (Ioffe & Szegedy, 2015; Santurkar et al., 2018; Arora et al., 2018)): • Training can be done using SGD with momentum and an exponentially increasing learning rate schedule, i.e., learning rate increases by some (1 + α) factor in every epoch for some α > 0. (Precise statement in the paper.) To the best of our knowledge this is the first time such a rate schedule has been successfully used, let alone for highly successful architectures. As expected, such training rapidly blows up network weights, but the net stays well-behaved due to normalization. • Mathematical explanation of the success of the above rate schedule: a rigorous proof that it is equivalent to the standard setting of BN + SGD + Standard Rate Tuning + Weight Decay + Momentum. This equivalence holds for other normalization layers as well, Group Normalization (Wu & He, 2018), Layer Normalization (Ba et al., 2016), Instance Norm (Ulyanov et al., 2016), etc. • A worked-out toy example illustrating the above linkage of hyper-parameters. Using either weight decay or BN alone reaches global minimum, but convergence fails when both are used.
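A small numeric sketch of the mechanism the abstract relies on: for a scale-invariant loss (the effect of normalization layers), the gradient satisfies grad(c * w) = grad(w) / c, so an exponentially increasing learning rate blows up the weight norm while the normalized direction still trains. The toy loss, growth factor, and step count are assumptions chosen for illustration; this is not the paper's equivalence proof.

```python
# Toy demonstration: exponentially increasing learning rate on a
# scale-invariant loss. ||w|| grows rapidly, yet the loss of the normalized
# weights stays well-behaved. All constants are assumptions.
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=5)                  # fixed target direction for the toy loss

def loss(w):
    # Depends only on the direction w / ||w||, hence scale-invariant,
    # mimicking the effect of a normalization layer on the loss surface.
    return -(w / np.linalg.norm(w)) @ v

def grad(w):
    n = np.linalg.norm(w)
    u = w / n
    return -(v - (v @ u) * u) / n       # satisfies grad(c * w) == grad(w) / c for c > 0

w = rng.normal(size=5)
lr, growth = 0.1, 1.02                  # lr is multiplied by (1 + alpha) every step
for t in range(600):
    w = w - lr * grad(w)
    lr *= growth
    if (t + 1) % 150 == 0:
        print(f"step {t + 1:3d}  lr={lr:10.2f}  ||w||={np.linalg.norm(w):10.2f}  loss={loss(w):.4f}")
print("lowest achievable loss:", -np.linalg.norm(v))
```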
%%% Update to the review %%% Thanks for your clarification and the revision - the paper looks good! With regards to your comment on accelerating hyper-parameter search, note that there are fairly subtle issues owing to the use of SGD - refer to a recent work of Ge et al *The Step Decay Schedule: A Near Optimal, Geometrically Decaying Learning Rate Procedure For Least Squares* (2019). %%% The paper makes an interesting observation connecting the use of weight decay + normalization to training the same network (without regularization) using an exponentially increasing learning rate schedule, under an assumption of scale invariance that is satisfied by normalization techniques including batch norm, layer norm and other variants. An example is also provided where the joint use of batch norm and weight decay can lead to non-convergence to a global minimum whereas the use of one (without the other) converges to a global minimum - serving to indicate various interdependencies between the hyper-parameters that one needs to be careful about. While the connection of scale invariant models to novel schemes of learning rates is interesting (and novel), the paper will benefit quite a bit in its contributions through attempting a convergence analysis towards a stationary point even for solving a routine smooth non-convex stochastic optimization problem. Owing to the equivalence described in the paper, this enables us to understand the behavior of the combination of batch norm (or some scale invariance property) + weight decay + momentum (+ step decay of the learning rate), which, to my knowledge isn’t present in the literature of non-convex optimization. The paper is reasonably written (the proof of the main claim is fairly easy to follow), but needs to be carefully read through because I see typos and ill-formed sentences that should be rectified - e.g. see point 3. in appendix A.1 - some facts about equation 4, missing citation in definition 1.2 amongst others. I went over the proof of the main result and this appears to be correct. Furthermore, I find the connections to other learning rates (such as the cosine learning rate) to be rather hard to understand/interpret, in the current shape of the paper.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAN EXPONENTIAL LEARNING RATE SCHEDULE FOR BATCH NORMALIZED NETWORKS\nIntriguing empirical evidence exists that deep learning can work well with exotic schedules for varying the learning rate. This paper suggests that the phenomenon may be due to Batch Normalization or BN(Ioffe & Szegedy, 2015), which is ubiq- uitous and provides benefits in optimization and generalization across all standard architectures. The following new results are shown about BN with weight decay and momentum (in other words, the typical use case which was not considered in earlier theoretical analyses of stand-alone BN (Ioffe & Szegedy, 2015; Santurkar et al., 2018; Arora et al., 2018) • Training can be done using SGD with momentum and an exponentially in- creasing learning rate schedule, i.e., learning rate increases by some (1 + ?) factor in every epoch for some ? > 0. (Precise statement in the paper.) To the best of our knowledge this is the first time such a rate schedule has been successfully used, let alone for highly successful architectures. As ex- pected, such training rapidly blows up network weights, but the net stays well-behaved due to normalization. • Mathematical explanation of the success of the above rate schedule: a rigor- ous proof that it is equivalent to the standard setting of BN + SGD + Standard Rate Tuning + Weight Decay + Momentum. This equivalence holds for other normalization layers as well, Group Normalization(Wu & He, 2018), Layer Normalization(Ba et al., 2016), Instance Norm(Ulyanov et al., 2016), etc. • A worked-out toy example illustrating the above linkage of hyper- parameters. Using either weight decay or BN alone reaches global minimum, but convergence fails when both are used.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n%%% Update to the review %%% Thanks for your clarification and the revision - the paper looks good! 
With regards to your comment on accelerating hyper-parameter search, note that there are fairly subtle issues owing to the use of SGD - refer to a recent work of Ge et al *The Step Decay Schedule: A Near Optimal, Geometrically Decaying Learning Rate Procedure For Least Squares* (2019). %%% The paper makes an interesting observation connecting the use of weight decay + normalization to training the same network (without regularization) using an exponentially increasing learning rate schedule, under an assumption of scale invariance that is satisfied by normalization techniques including batch norm, layer norm and other variants. An example is also provided where the joint use of batch norm and weight decay can lead to non-convergence to a global minimum whereas the use of one (without the other) converges to a global minimum - serving to indicate various interdependencies between the hyper-parameters that one needs to be careful about. While the connection of scale invariant models to novel schemes of learning rates is interesting (and novel), the paper will benefit quite a bit in its contributions through attempting a convergence analysis towards a stationary point even for solving a routine smooth non-convex stochastic optimization problem. Owing to the equivalence described in the paper, this enables us to understand the behavior of the combination of batch norm (or some scale invariance property) + weight decay + momentum (+ step decay of the learning rate), which, to my knowledge isn’t present in the literature of non-convex optimization. The paper is reasonably written (the proof of the main claim is fairly easy to follow), but needs to be carefully read through because I see typos and ill-formed sentences that should be rectified - e.g. see point 3. in appendix A.1 - some facts about equation 4, missing citation in definition 1.2 amongst others. I went over the proof of the main result and this appears to be correct. Furthermore, I find the connections to other learning rates (such as the cosine learning rate) to be rather hard to understand/interpret, in the current shape of the paper.", "role": "assistant" } ]
[ "%%% Update to the review %%% Thanks for your clarification and the revision - the paper looks good!", "With regards to your comment on accelerating hyper-parameter search, note that there are fairly subtle issues owing to the use of SGD - refer to a recent work of Ge et al *The Step Decay Schedule: A Near Optimal, Geometrically Decaying Learning Rate Procedure For Least Squares* (2019).", "%%% The paper makes an interesting observation connecting the use of weight decay + normalization to training the same network (without regularization) using an exponentially increasing learning rate schedule, under an assumption of scale invariance that is satisfied by normalization techniques including batch norm, layer norm and other variants.", "An example is also provided where the joint use of batch norm and weight decay can lead to non-convergence to a global minimum whereas the use of one (without the other) converges to a global minimum - serving to indicate various interdependencies between the hyper-parameters that one needs to be careful about.", "While the connection of scale invariant models to novel schemes of learning rates is interesting (and novel), the paper will benefit quite a bit in its contributions through attempting a convergence analysis towards a stationary point even for solving a routine smooth non-convex stochastic optimization problem.", "Owing to the equivalence described in the paper, this enables us to understand the behavior of the combination of batch norm (or some scale invariance property) + weight decay + momentum (+ step decay of the learning rate), which, to my knowledge isn’t present in the literature of non-convex optimization.", "The paper is reasonably written (the proof of the main claim is fairly easy to follow), but needs to be carefully read through because I see typos and ill-formed sentences that should be rectified - e.g.", "see point 3. in appendix A.1 - some facts about equation 4, missing citation in definition 1.2 amongst others.", "I went over the proof of the main result and this appears to be correct.", "Furthermore, I find the connections to other learning rates (such as the cosine learning rate) to be rather hard to understand/interpret, in the current shape of the paper." ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 5, "praise": 4, "presentation_and_reporting": 4, "results_and_discussion": 5, "suggestion_and_solution": 3, "total": 10 }
0.3
0
0.2
0.5
0.4
0.4
0.5
0.3
10
1
0
1
1
1
1
1
1
2.6
2.457958
0.142042
BackPACK: Packing more into Backprop
Automatic differentiation frameworks are optimized for exactly one thing: computing the average mini-batch gradient. Yet, other quantities such as the variance of the mini-batch gradients or many approximations to the Hessian can, in theory, be computed efficiently, and at the same time as the gradient. While these quantities are of great interest to researchers and practitioners, current deep learning software does not support their automatic calculation. Manually implementing them is burdensome, inefficient if done naively, and the resulting code is rarely shared. This hampers progress in deep learning, and unnecessarily narrows research to focus on gradient descent and its variants; it also complicates replication studies and comparisons between newly developed methods that require those quantities, to the point of impossibility. To address this problem, we introduce BackPACK, an efficient framework built on top of PyTorch, that extends the backpropagation algorithm to extract additional information from first-and second-order derivatives. Its capabilities are illustrated by benchmark reports for computing additional quantities on deep neural networks, and an example application by testing several recent curvature approximations for optimization.
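To make the motivation concrete, the sketch below computes one of the quantities the abstract mentions, the element-wise variance of per-example gradients, in plain PyTorch with one backward pass per example; this is the bookkeeping a framework like BackPACK is meant to automate. The model, data, and sizes are placeholders, and no BackPACK API calls are shown.

```python
# Naive per-example gradient statistics in plain PyTorch, the kind of
# quantity the abstract says standard autodiff does not expose directly.
# This loops over the batch (inefficient); sizes and model are made up.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(16, 10)
y = torch.randint(0, 3, (16,))

per_example_grads = []
for i in range(X.shape[0]):                  # one backward pass per example
    model.zero_grad()
    loss_fn(model(X[i:i + 1]), y[i:i + 1]).backward()
    flat = torch.cat([p.grad.flatten() for p in model.parameters()])
    per_example_grads.append(flat)

G = torch.stack(per_example_grads)           # (batch, num_params)
grad_mean = G.mean(dim=0)                    # the usual mini-batch gradient
grad_var = G.var(dim=0)                      # element-wise gradient variance

print("mean-gradient norm:", grad_mean.norm().item())
print("average variance:  ", grad_var.mean().item())
```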
This paper adds a very interesting and useful feature to existing autodifferentiation for training neural networks. The second-order information can be backpropagated just as the first-order information, which can be used to accelerate training. This idea, although developed upon existing works according to the paper, is still strongly attractive, as second-order information is crucial for training and perhaps for visualizing the landscape of neural networks. I vote for acceptance, as this brings a significantly important feature to PyTorch, supported by the authors' good experimental results and open-sourced code.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nBackPACK: Packing more into Backprop\nAutomatic differentiation frameworks are optimized for exactly one thing: computing the average mini-batch gradient. Yet, other quantities such as the variance of the mini-batch gradients or many approximations to the Hessian can, in theory, be computed efficiently, and at the same time as the gradient. While these quantities are of great interest to researchers and practitioners, current deep learning software does not support their automatic calculation. Manually implementing them is burdensome, inefficient if done naively, and the resulting code is rarely shared. This hampers progress in deep learning, and unnecessarily narrows research to focus on gradient descent and its variants; it also complicates replication studies and comparisons between newly developed methods that require those quantities, to the point of impossibility. To address this problem, we introduce BackPACK, an efficient framework built on top of PyTorch, that extends the backpropagation algorithm to extract additional information from first-and second-order derivatives. Its capabilities are illustrated by benchmark reports for computing additional quantities on deep neural networks, and an example application by testing several recent curvature approximations for optimization.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper adds a very interesting and useful feature to existing autodifferentiation for training neural networks. The second-order information can be backprogated just as the first-order ones, which can be used to accelerate training. This idea, although according to the paper, is developed upon existing works, still, strongly attracts as the second-order information is crucial for training and perhaps visualizing the landscape of neural networks. I vote for an acceptance as this brings a significantly important feature to PyTorch, and the author*s good experiments results and open-sourced code.", "role": "assistant" } ]
[ "This paper adds a very interesting and useful feature to existing autodifferentiation for training neural networks.", "The second-order information can be backprogated just as the first-order ones, which can be used to accelerate training.", "This idea, although according to the paper, is developed upon existing works, still, strongly attracts as the second-order information is crucial for training and perhaps visualizing the landscape of neural networks.", "I vote for an acceptance as this brings a significantly important feature to PyTorch, and the author*s good experiments results and open-sourced code." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 2, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 1, "suggestion_and_solution": 1, "total": 4 }
0
0
0.75
0.5
0.5
0.25
0.25
0.25
4
0
0
1
1
1
1
1
1
2.5
1.221623
1.278377
Principled Weight Initialization for Hypernetworks
Hypernetworks are meta neural networks that generate weights for a main neural network in an end-to-end differentiable manner. Despite extensive applications ranging from multi-task learning to Bayesian deep learning, the problem of optimizing hypernetworks has not been studied to date. We observe that classical weight initialization methods like Glorot & Bengio (2010) and He et al. (2015), when applied directly on a hypernet, fail to produce weights for the mainnet in the correct scale. We develop principled techniques for weight initialization in hypernets, and show that they lead to more stable mainnet weights, lower training loss, and faster convergence.
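A toy illustration of the scale problem the abstract points to: if the hypernet's output layer is initialized with a standard fan-in rule, the generated mainnet weights come out far from the scale a He-style analysis expects, while rescaling that layer using the hypernet's own fan-in restores the target scale. The dimensions and the simple rescaling rule below are assumptions, not the paper's exact initialization formulas.

```python
# Sketch of the scale mismatch described in the abstract. The "fixed" variant
# rescales the hypernet output layer so the generated mainnet weights match a
# He-style target variance; all numbers here are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
embed_dim, fan_in_main, fan_out_main = 64, 256, 256
n_weights = fan_in_main * fan_out_main

def generated_weight_std(out_layer_std):
    hyper_out = nn.Linear(embed_dim, n_weights)
    nn.init.normal_(hyper_out.weight, std=out_layer_std)
    nn.init.zeros_(hyper_out.bias)
    z = torch.randn(embed_dim)                   # layer embedding fed to the hypernet
    W_main = hyper_out(z).view(fan_out_main, fan_in_main)
    return W_main.std().item()

target_std = (2.0 / fan_in_main) ** 0.5          # He-style target for the main layer

naive_std = generated_weight_std(out_layer_std=(2.0 / embed_dim) ** 0.5)
fixed_std = generated_weight_std(out_layer_std=target_std / embed_dim ** 0.5)

print("target mainnet weight std :", round(target_std, 4))
print("naive hypernet init gives :", round(naive_std, 4))   # far too large
print("rescaled hypernet init    :", round(fixed_std, 4))   # close to target
```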
Principled Weight Initialization for Hypernetworks Summary: This paper investigates initialization schemes for hypernets, models where the weights are generated by another neural network, which takes as input either a learned embedding or the activations of the model itself. Standard initialization schemes for deep networks (Glorot, He, Fixup) are based on variance analyses that aim to keep the scale of activations/gradients well-behaved through forward-backward propagation, but using this approach is ill-founded for hypernets where the output is a set of weights rather than e.g. a softmax’d classification output. This paper extends the standard variance analysis to consider the hypernet case by investigating what the choice of hypernet initialization should be if one still wishes to maintain the well-behaved activations/gradients in the main model. The authors present results showing the evolution of hypernet outputs with their scheme are better-behaved, demonstrate that their modification leads to improved stability for hypernet-based convnets on CIFAR (over a model which, for the standard choice of He init, basically does not train) and improved performance on a continual learning task. My take: This is an *aha!* or an “obvious in retrospect” paper: a simple idea based on noticing something being done wrong in practice with a fairly straightforward fix, coupled with a decent empirical evaluation and analysis. The paper is well-written and reasonably easy to follow (although I am not familiar with Ricci calculus I did not feel that I was flummoxed at any point during the maths), and the potential impact is decent: any future work which employs a hypernetwork would likely do well to consider the methods in this paper. While I would like to see the empirical evaluations taken a bit further, I think this paper is a solid accept (7/10) and would make a good addition to ICLR2020. Notes: While this is a good paper, I think the impact of the paper could be magnified if the authors were a bit more ambitious with their empirical evaluations. This method seems to enable training hypernets in settings which would previously have been unstable; it would be good to more thoroughly characterize robustness to hyperparameters or otherwise demonstrate additional practical value. Showing results on ImageNet would also be helpful (I’m well aware this is not always in the realm of compute possibility) or just showing progress on a more challenging task outside of CIFAR or a small continual learning problem would, I think, greatly increase the chances that this paper catches on. As this is a “unleash your potential” note, I have not taken this sentiment into account in my review (as should hopefully be evident from my accept score). Minor: -Caption for figure 2 should indicate what kind of magnitudes are represented—are these the average weight norms in each layer vs epochs? The axes should be labeled.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPrincipled Weight Initialization for Hypernetworks\nHypernetworks are meta neural networks that generate weights for a main neural network in an end-to-end differentiable manner. Despite extensive applications ranging from multi-task learning to Bayesian deep learning, the problem of optimizing hypernetworks has not been studied to date. We observe that classical weight initialization methods like Glorot & Bengio (2010) and He et al. (2015), when applied directly on a hypernet, fail to produce weights for the mainnet in the correct scale. We develop principled techniques for weight initialization in hypernets, and show that they lead to more stable mainnet weights, lower training loss, and faster convergence.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nPrincipled Weight Initialization for Hypernetworks Summary: This paper investigates initialization schemes for hypernets, models where the weights are generated by another neural network, which takes as input either a learned embedding or the activations of the model itself. Standard initialization schemes for deep networks (Glorot, He, Fixup) are based on variance analyses that aim to keep the scale of activations/gradients well-behaved through forward-backward propagation, but using this approach is ill-founded for hypernets where the output is a set of weights rather than e.g. a softmax’d classification output. This paper extends the standard variance analysis to consider the hypernet case by investigating what the choice of hypernet initialization should be if one still wishes to maintain the well-behaved activations/gradients in the main model. The authors present results showing the evolution of hypernet outputs with their scheme are better-behaved, demonstrate that their modification leads to improved stability for hypernet-based convnets on CIFAR (over a model which, for the standard choice of He init, basically does not train) and improved performance on a continual learning task. 
My take: This is an *aha!* or an “obvious in retrospect” paper: a simple idea based on noticing something being done wrong in practice with a fairly straightforward fix, coupled with a decent empirical evaluation and analysis. The paper is well-written and reasonably easy to follow (although I am not familiar with Ricci calculus I did not feel that I was flummoxed at any point during the maths), and the potential impact is decent: any future work which employs a hypernetwork would likely do well to consider the methods in this paper. While I would like to see the empirical evaluations taken a bit further, I think this paper is a solid accept (7/10) and would make a good addition to ICLR2020. Notes: While this is a good paper, I think the impact of the paper could be magnified if the authors were a bit more ambitious with their empirical evaluations. This method seems to enable training hypernets in settings which would previously have been unstable; it would be good to more thoroughly characterize robustness to hyperparameters or otherwise demonstrate additional practical value. Showing results on ImageNet would also be helpful (I’m well aware this is not always in the realm of compute possibility) or just showing progress on a more challenging task outside of CIFAR or a small continual learning problem would, I think, greatly increase the chances that this paper catches on. As this is a “unleash your potential” note, I have not taken this sentiment into account in my review (as should hopefully be evident from my accept score). Minor: -Caption for figure 2 should indicate what kind of magnitudes are represented—are these the average weight norms in each layer vs epochs? The axes should be labeled.", "role": "assistant" } ]
[ "Principled Weight Initialization for Hypernetworks Summary: This paper investigates initialization schemes for hypernets, models where the weights are generated by another neural network, which takes as input either a learned embedding or the activations of the model itself.", "Standard initialization schemes for deep networks (Glorot, He, Fixup) are based on variance analyses that aim to keep the scale of activations/gradients well-behaved through forward-backward propagation, but using this approach is ill-founded for hypernets where the output is a set of weights rather than e.g.", "a softmax’d classification output.", "This paper extends the standard variance analysis to consider the hypernet case by investigating what the choice of hypernet initialization should be if one still wishes to maintain the well-behaved activations/gradients in the main model.", "The authors present results showing the evolution of hypernet outputs with their scheme are better-behaved, demonstrate that their modification leads to improved stability for hypernet-based convnets on CIFAR (over a model which, for the standard choice of He init, basically does not train) and improved performance on a continual learning task.", "My take: This is an *aha!", "* or an “obvious in retrospect” paper: a simple idea based on noticing something being done wrong in practice with a fairly straightforward fix, coupled with a decent empirical evaluation and analysis.", "The paper is well-written and reasonably easy to follow (although I am not familiar with Ricci calculus I did not feel that I was flummoxed at any point during the maths), and the potential impact is decent: any future work which employs a hypernetwork would likely do well to consider the methods in this paper.", "While I would like to see the empirical evaluations taken a bit further, I think this paper is a solid accept (7/10) and would make a good addition to ICLR2020.", "Notes: While this is a good paper, I think the impact of the paper could be magnified if the authors were a bit more ambitious with their empirical evaluations.", "This method seems to enable training hypernets in settings which would previously have been unstable; it would be good to more thoroughly characterize robustness to hyperparameters or otherwise demonstrate additional practical value.", "Showing results on ImageNet would also be helpful (I’m well aware this is not always in the realm of compute possibility) or just showing progress on a more challenging task outside of CIFAR or a small continual learning problem would, I think, greatly increase the chances that this paper catches on.", "As this is a “unleash your potential” note, I have not taken this sentiment into account in my review (as should hopefully be evident from my accept score).", "Minor: -Caption for figure 2 should indicate what kind of magnitudes are represented—are these the average weight norms in each layer vs epochs?", "The axes should be labeled." ]
{ "criticism": 2, "example": 1, "importance_and_relevance": 0, "materials_and_methods": 12, "praise": 4, "presentation_and_reporting": 3, "results_and_discussion": 5, "suggestion_and_solution": 6, "total": 15 }
0.133333
0.066667
0
0.8
0.266667
0.2
0.333333
0.4
15
1
1
0
1
1
1
1
1
2.2
2.13687
0.06313
On the Convergence of FedAvg on Non-IID Data
Federated learning enables a large amount of edge computing devices to jointly learn a model without data sharing. As a leading algorithm in this setting, Federated Averaging (\texttt{FedAvg}) runs Stochastic Gradient Descent (SGD) in parallel on a small subset of the total devices and averages the sequences only once in a while. Despite its simplicity, it lacks theoretical guarantees under realistic settings. In this paper, we analyze the convergence of \texttt{FedAvg} on non-iid data and establish a convergence rate of $\mathcal{O}(\frac{1}{T})$ for strongly convex and smooth problems, where $T$ is the number of SGDs. Importantly, our bound demonstrates a trade-off between communication-efficiency and convergence rate. As user devices may be disconnected from the server, we relax the assumption of full device participation to partial device participation and study different averaging schemes; low device participation rate can be achieved without severely slowing down the learning. Our results indicate that heterogeneity of data slows down the convergence, which matches empirical observations. Furthermore, we provide a necessary condition for \texttt{FedAvg} on non-iid data: the learning rate must decay, even if full-gradient is used; otherwise, the solution will be away from the optimal.
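A minimal FedAvg simulation in the spirit of the abstract, with non-iid clients (each has its own local optimum), E local SGD steps per round, server-side averaging, and a decaying learning rate. The quadratic objective, noise level, and schedule are assumptions chosen only to make the algorithm's structure visible.

```python
# Minimal FedAvg sketch on a toy non-iid least-squares problem.
# All constants and the data model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, n_clients, E, rounds = 5, 4, 10, 200
# Non-iid: each client has its own optimum, so local objectives disagree.
client_opts = [rng.normal(size=d) for _ in range(n_clients)]

def local_sgd(w, w_star, lr, steps):
    for _ in range(steps):
        grad = (w - w_star) + 0.1 * rng.normal(size=d)    # noisy local gradient
        w = w - lr * grad
    return w

w = np.zeros(d)
for t in range(rounds):
    lr = 0.5 / (1 + t)                                     # decaying learning rate
    local_models = [local_sgd(w.copy(), ws, lr, E) for ws in client_opts]
    w = np.mean(local_models, axis=0)                      # server-side averaging

global_opt = np.mean(client_opts, axis=0)                  # optimum of the averaged objective
print("distance to global optimum:", np.linalg.norm(w - global_opt))
```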
This paper presents convergence rates for straggler-aware averaged SGD for non-identically but independently distributed data. The paper is well-written and motivated, with good discussions of the algorithm and the related works. The proof techniques involve bounding how much worse the algorithm can do because of the non-identical distributions and the introduction of stragglers into the standard analysis of SGD-like algorithms. The presented theory is useful, and also provides new insights such as a new sampling scheme and an inherent bias for the case of non-decaying step size. The empirical evaluation is adequate and well-presented. I think this paper is a strong contribution and should spark further discussions in the community.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOn the Convergence of FedAvg on Non-IID Data\nFederated learning enables a large amount of edge computing devices to jointly learn a model without data sharing. As a leading algorithm in this setting, Federated Averaging (\texttt{FedAvg}) runs Stochastic Gradient Descent (SGD) in parallel on a small subset of the total devices and averages the sequences only once in a while. Despite its simplicity, it lacks theoretical guarantees under realistic settings. In this paper, we analyze the convergence of \texttt{FedAvg} on non-iid data and establish a convergence rate of for strongly convex and smooth problems, where is the number of SGDs. Importantly, our bound demonstrates a trade-off between communication-efficiency and convergence rate. As user devices may be disconnected from the server, we relax the assumption of full device participation to partial device participation and study different averaging schemes; low device participation rate can be achieved without severely slowing down the learning. Our results indicate that heterogeneity of data slows down the convergence, which matches empirical observations. Furthermore, we provide a necessary condition for \texttt{FedAvg} on non-iid data: the learning rate must decay, even if full-gradient is used; otherwise, the solution will be away from the optimal.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents convergence rates for straggler-aware averaged SGD for non-identically but independent distributed data. The paper is well-written and motivated with good discussions of the algorithm and the related works. The proof techniques involve bounding how much worse can the algorithm do because of non-identical distribution and introduction of stragglers into the standard analysis of SGD-like algorithms. The presented theory is useful, and also provides new insights such as a new sampling scheme and an inherent bias for the case of non-decaying step size. The empirical evaluation is adequate and well-presented. 
I think this paper is a strong contribution and should spark further discussions in the community.", "role": "assistant" } ]
[ "This paper presents convergence rates for straggler-aware averaged SGD for non-identically but independent distributed data.", "The paper is well-written and motivated with good discussions of the algorithm and the related works.", "The proof techniques involve bounding how much worse can the algorithm do because of non-identical distribution and introduction of stragglers into the standard analysis of SGD-like algorithms.", "The presented theory is useful, and also provides new insights such as a new sampling scheme and an inherent bias for the case of non-decaying step size.", "The empirical evaluation is adequate and well-presented.", "I think this paper is a strong contribution and should spark further discussions in the community." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 5, "praise": 4, "presentation_and_reporting": 2, "results_and_discussion": 3, "suggestion_and_solution": 1, "total": 6 }
0
0
0.333333
0.833333
0.666667
0.333333
0.5
0.166667
6
0
0
1
1
1
1
1
1
2.833333
2.059994
0.773339
Contrastive Learning of Structured World Models
A structured understanding of our world in terms of objects, relations, and hierarchies is an important component of human cognition. Learning such a structured world model from raw sensory data remains a challenge. As a step towards this goal, we introduce Contrastively-trained Structured World Models (C-SWMs). C-SWMs utilize a contrastive approach for representation learning in environments with compositional structure. We structure each state embedding as a set of object representations and their relations, modeled by a graph neural network. This allows objects to be discovered from raw pixel observations without direct supervision as part of the learning process. We evaluate C-SWMs on compositional environments involving multiple interacting objects that can be manipulated independently by an agent, simple Atari games, and a multi-object physics simulation. Our experiments demonstrate that C-SWMs can overcome limitations of models based on pixel reconstruction and outperform typical representatives of this model class in highly structured environments, while learning interpretable object-based representations.
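A simplified sketch of the kind of contrastive objective the abstract describes: observations are embedded, a transition model predicts the next embedding, and a hinge loss pushes random negative states away instead of reconstructing pixels. The object-factored graph neural network encoder is omitted, and the module sizes, margin, and random data are assumptions rather than the C-SWM implementation.

```python
# Simplified contrastive world-model objective (hinge loss with negatives),
# in the spirit of the abstract. Sizes, margin, and data are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
obs_dim, state_dim, action_dim, hidden = 16, 8, 4, 32

encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                        nn.Linear(hidden, state_dim))
transition = nn.Sequential(nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                           nn.Linear(hidden, state_dim))

def sq_dist(a, b):
    return ((a - b) ** 2).sum(dim=-1)

obs, obs_next = torch.randn(32, obs_dim), torch.randn(32, obs_dim)
actions = torch.randn(32, action_dim)

z, z_next = encoder(obs), encoder(obs_next)
z_pred = z + transition(torch.cat([z, actions], dim=-1))   # predicted next embedding
z_neg = z[torch.randperm(z.shape[0])]                      # random states as negatives

margin = 1.0
positive = sq_dist(z_pred, z_next)                         # pull prediction to true next state
negative = torch.clamp(margin - sq_dist(z_neg, z_next), min=0)   # push negatives away
loss = (positive + negative).mean()
loss.backward()
print("contrastive loss:", round(loss.item(), 4))
```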
The construction and learning of structured world models is an interesting area of research that could in principle enable better generalisation and interpretability for predictive models. The authors overcome the problem of using pixel-based losses (a common issue being reconstruction of small but potentially important objects) by using a contrastive latent space. The model otherwise makes use of a fixed number of object slots and a GNN transition model, similarly to prior approaches. The authors back up their method with nice results on 3D cubes and 3-body physics domains, and reasonable initial results on two Atari games, with ablations on the different components showing their contributions, so I would give this paper an accept. The comparisons to existing literature and related areas are very extensive, with interesting pointers to potential future work - particularly on the transition model and graph embeddings. As expected, the object-factorized action space appears to work well for generalisation, and could be extended/adapted, but setting a fixed number of objects K is clearly a fundamentally limiting hyperparameter, and so showing how the model performs under misspecification of this hyperparameter is useful to know for settings where this is known (2D shapes, 3D blocks, 3-body physics). The fact that K=1 is the best for Pong but K=5 is the best for Space Invaders raises at least two questions: can scaling K > 5 further improve performance on Space Invaders, and is it possible to make the model more robust to a greater-than-needed number of object slots? On a similar note, the data collection procedure for the Atari games seems to indicate that the model is quite sensitive to domains where actions rarely have an impact on the transition dynamics, or the interaction is more complex (e.g. other agents exist in the world) - coming up with a synthetic dataset where the importance of this can be quantified would again aid understanding of the authors' proposed method.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nContrastive Learning of Structured World Models\nA structured understanding of our world in terms of objects, relations, and hierarchies is an important component of human cognition. Learning such a structured world model from raw sensory data remains a challenge. As a step towards this goal, we introduce Contrastively-trained Structured World Models (C-SWMs). C-SWMs utilize a contrastive approach for representation learning in environments with compositional structure. We structure each state embedding as a set of object representations and their relations, modeled by a graph neural network. This allows objects to be discovered from raw pixel observations without direct supervision as part of the learning process. We evaluate C-SWMs on compositional environments involving multiple interacting objects that can be manipulated independently by an agent, simple Atari games, and a multi-object physics simulation. Our experiments demonstrate that C-SWMs can overcome limitations of models based on pixel reconstruction and outperform typical representatives of this model class in highly structured environments, while learning interpretable object-based representations.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe construction and learning of structured world models is an interesting area of research that could in principle enable better generalisation and interpretability for predictive models. The authors overcome the problem of using pixel-based losses (a common issue being reconstruction of small but potentially important objects) by using a contrastive latent space. The model otherwise makes use of a fixed number of object slots and a GNN transition model, similarly to prior approaches. The authors back up their method with nice results on 3D cubes and 3-body physics domains, and reasonable initial results on two Atari games, with ablations on the different components showing their contributions, so I would give this paper an accept. 
The comparisons to existing literature and related areas is very extensive, with interesting pointers to potential future work - particularly on the transition model and graph embeddings. As expected, the object-factorized action space appears to work well for generalisation, and could be extended/adapted, but setting a fixed number of objects K is a clearly fundamentally limiting hyperparameter, and so showing how the model performs under misspecification of this hyperparameter is useful to know for settings where this is known (2D shapes, 3D blocks, 3-body physics). The fact that K=1 is the best for Pong but K=5 is the best for Space Invaders raises at least two questions: can scaling K > 5 further improve performance on Space Invaders, and is it possible to make the model more robust to a greater-than-needed number of object slots? On a similar note, the data collection procedure for the Atari games seems to indicate that the model is quite sensitive to domains where actions rarely have an impact on the transition dynamics, or the interaction is more complex (e.g. other agents exist in the world) - coming up with a synthetic dataset where the importance of this can be quantified would again aid understanding of the authors* proposed method.", "role": "assistant" } ]
[ "The construction and learning of structured world models is an interesting area of research that could in principle enable better generalisation and interpretability for predictive models.", "The authors overcome the problem of using pixel-based losses (a common issue being reconstruction of small but potentially important objects) by using a contrastive latent space.", "The model otherwise makes use of a fixed number of object slots and a GNN transition model, similarly to prior approaches.", "The authors back up their method with nice results on 3D cubes and 3-body physics domains, and reasonable initial results on two Atari games, with ablations on the different components showing their contributions, so I would give this paper an accept.", "The comparisons to existing literature and related areas is very extensive, with interesting pointers to potential future work - particularly on the transition model and graph embeddings.", "As expected, the object-factorized action space appears to work well for generalisation, and could be extended/adapted, but setting a fixed number of objects K is a clearly fundamentally limiting hyperparameter, and so showing how the model performs under misspecification of this hyperparameter is useful to know for settings where this is known (2D shapes, 3D blocks, 3-body physics).", "The fact that K=1 is the best for Pong but K=5 is the best for Space Invaders raises at least two questions: can scaling K > 5 further improve performance on Space Invaders, and is it possible to make the model more robust to a greater-than-needed number of object slots?", "On a similar note, the data collection procedure for the Atari games seems to indicate that the model is quite sensitive to domains where actions rarely have an impact on the transition dynamics, or the interaction is more complex (e.g.", "other agents exist in the world) - coming up with a synthetic dataset where the importance of this can be quantified would again aid understanding of the authors* proposed method." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 9, "praise": 4, "presentation_and_reporting": 0, "results_and_discussion": 2, "suggestion_and_solution": 4, "total": 9 }
0
0
0.333333
1
0.444444
0
0.222222
0.444444
9
0
0
1
1
1
0
1
1
2.444444
2.191926
0.252519
Optimal Strategies Against Generative Attacks
Generative neural models have improved dramatically recently. With this progress comes the risk that such models will be used to attack systems that rely on sensor data for authentication and anomaly detection. Many such learning systems are installed worldwide, protecting critical infrastructure or private data against malfunction and cyber attacks. We formulate the scenario of such an authentication system facing generative impersonation attacks, characterize it from a theoretical perspective and explore its practical implications. In particular, we ask fundamental theoretical questions in learning, statistics and information theory: How hard is it to detect a ``fake reality**? How much data does the attacker need to collect before it can reliably generate nominally-looking artificial data? Are there optimal strategies for the attacker or the authenticator? We cast the problem as a maximin game, characterize the optimal strategy for both attacker and authenticator in the general case, and provide the optimal strategies in closed form for the case of Gaussian source distributions. Our analysis reveals the structure of the optimal attack and the relative importance of data collection for both authenticator and attacker. Based on these insights we design practical learning approaches, and show that they result in models that are more robust to various attacks on real-world data. Code will be made publicly available upon publication.
This paper addresses the issue of malicious use of generative models to fool authentication/anomaly detection systems that rely on sensor data. The authors formulate the scenario as a maxmin game between an authenticator and an attacker, with limitations on the number of samples available to the authenticator to fix a decision rule, the number of samples required at test time for the authenticator to take a decision and the number of leaked samples the attacker has access to. The authors prove that the game admits a Nash equilibrium and derive a closed form solution for the case of multivariate Gaussian data. Finally, the authors propose an algorithm called *GAN In the Middle* and perform experiments to show consistency with the theoretical results, better authentication performance than state of the art methods and usability for data augmentation. This paper should be accepted. Overall, it addresses crucial problems with the recent advances of generative models and provides significant theoretical results. The experiments would benefit from some clarification. For the experiments, the following should be addressed: * Confidence intervals for all results (specifically in Figure 1a and in Table 1) * Figure 1a: it would be interesting to see a similar analysis also for other values of m and k. * Table 1: This result would also be more supporting if experiments were performed for varying values of m, n and k. The description of the RS attack could be made more precise: does it mean that the attacker samples images at random? Intuitively, it feels confusing that the GIM authenticator would perform worse on this setting. * The experiments on handwritten data would be more similar to what I imagine being a real-world authentication scenario if performed on a task where a class is a single person writing multiple characters, as opposed to characters being classes. Minor comments: * Page 6, second to last row: there is a*the* repeated twice.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOptimal Strategies Against Generative Attacks\nGenerative neural models have improved dramatically recently. With this progress comes the risk that such models will be used to attack systems that rely on sensor data for authentication and anomaly detection. Many such learning systems are installed worldwide, protecting critical infrastructure or private data against malfunction and cyber attacks. We formulate the scenario of such an authentication system facing generative impersonation attacks, characterize it from a theoretical perspective and explore its practical implications. In particular, we ask fundamental theoretical questions in learning, statistics and information theory: How hard is it to detect a ``fake reality**? How much data does the attacker need to collect before it can reliably generate nominally-looking artificial data? Are there optimal strategies for the attacker or the authenticator? We cast the problem as a maximin game, characterize the optimal strategy for both attacker and authenticator in the general case, and provide the optimal strategies in closed form for the case of Gaussian source distributions. Our analysis reveals the structure of the optimal attack and the relative importance of data collection for both authenticator and attacker. Based on these insights we design practical learning approaches, and show that they result in models that are more robust to various attacks on real-world data. Code will be made publicly available upon publication.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper addresses the issue of malicious use of generative models to fool authentication/anomaly detection systems that rely on sensor data. The authors formulate the scenario as a maxmin game between an authenticator and an attacker, with limitations on the number of samples available to the authenticator to fix a decision rule, the number of samples required at test time for the authenticator to take a decision and the number of leaked samples the attacker has access to. 
The authors prove that the game admits a Nash equilibrium and derive a closed form solution for the case of multivariate Gaussian data. Finally, the authors propose an algorithm called *GAN In the Middle* and perform experiments to show consistency with the theoretical results, better authentication performance than state of the art methods and usability for data augmentation. This paper should be accepted. Overall, it addresses crucial problems with the recent advances of generative models and provides significant theoretical results. The experiments would benefit from some clarification. For the experiments, the following should be addressed: * Confidence intervals for all results (specifically in Figure 1a and in Table 1) * Figure 1a: it would be interesting to see a similar analysis also for other values of m and k. * Table 1: This result would also be more supporting if experiments were performed for varying values of m, n and k. The description of the RS attack could be made more precise: does it mean that the attacker samples images at random? Intuitively, it feels confusing that the GIM authenticator would perform worse on this setting. * The experiments on handwritten data would be more similar to what I imagine being a real-world authentication scenario if performed on a task where a class is a single person writing multiple characters, as opposed to characters being classes. Minor comments: * Page 6, second to last row: there is a*the* repeated twice.", "role": "assistant" } ]
[ "This paper addresses the issue of malicious use of generative models to fool authentication/anomaly detection systems that rely on sensor data.", "The authors formulate the scenario as a maxmin game between an authenticator and an attacker, with limitations on the number of samples available to the authenticator to fix a decision rule, the number of samples required at test time for the authenticator to take a decision and the number of leaked samples the attacker has access to.", "The authors prove that the game admits a Nash equilibrium and derive a closed form solution for the case of multivariate Gaussian data.", "Finally, the authors propose an algorithm called *GAN In the Middle* and perform experiments to show consistency with the theoretical results, better authentication performance than state of the art methods and usability for data augmentation.", "This pager should be accepted.", "Overall, it addresses crucial problems with the recent advances of generative models and provides significant theoretical results.", "The experiments would benefit from some clarification.", "For the experiments, the following should be addressed: * Confidence intervals for all results (specifically in Figure 1a and in Table 1) * Figure 1a: it would be interesting to see a similar analysis also for other values of m and k. * Table 1: This result would also be more supporting if experiments were performed for varying values of m, n and k. The description of the RS attack could be made more precise: does it mean that the attacker samples images at random?", "Intuitively, it feels confusing that the GIM authenticator would perform worse on this setting.", "* The experiments on handwritten data would be more similar to what I imagine being a real-world authentication scenario if performed on a task where a class is a single person writing multiple characters, as opposed to characters being classes.", "Minor comments: * Page 6, second to last row: there is a*the* repeated twice." ]
{ "criticism": 1, "example": 2, "importance_and_relevance": 1, "materials_and_methods": 7, "praise": 2, "presentation_and_reporting": 5, "results_and_discussion": 4, "suggestion_and_solution": 4, "total": 11 }
0.090909
0.181818
0.090909
0.636364
0.181818
0.454545
0.363636
0.363636
11
1
1
1
1
1
1
1
1
2.363636
2.300507
0.06313
Optimal Strategies Against Generative Attacks
Generative neural models have improved dramatically recently. With this progress comes the risk that such models will be used to attack systems that rely on sensor data for authentication and anomaly detection. Many such learning systems are installed worldwide, protecting critical infrastructure or private data against malfunction and cyber attacks. We formulate the scenario of such an authentication system facing generative impersonation attacks, characterize it from a theoretical perspective and explore its practical implications. In particular, we ask fundamental theoretical questions in learning, statistics and information theory: How hard is it to detect a ``fake reality**? How much data does the attacker need to collect before it can reliably generate nominally-looking artificial data? Are there optimal strategies for the attacker or the authenticator? We cast the problem as a maximin game, characterize the optimal strategy for both attacker and authenticator in the general case, and provide the optimal strategies in closed form for the case of Gaussian source distributions. Our analysis reveals the structure of the optimal attack and the relative importance of data collection for both authenticator and attacker. Based on these insights we design practical learning approaches, and show that they result in models that are more robust to various attacks on real-world data. Code will be made publicly available upon publication.
*Optimal Strategies Against Generative Attacks* describes just what the title implies - various dimensions of the problem of defending against a generative adversary, with theoretical discussion under limited settings, as well as practical experiments extending on the intuitions gained using the theoretical exploration under limited conditions. Particularly, one of the key stated goals of the paper is to *construct a theoretical framework for studying the security risk arising from generative models, and explore its practical implications*. Given this goal, the paper performs admirably. The appendix is extensive, and gives a lot more insight into the core paper itself. I am unclear on how the data augmentation experiment fits into the overall picture - perhaps a more detailed explanation of how and why this would be used to form an *attack* would help. The other experiments are sensible, and demonstrate reasonable and expected results. This is a solid paper, and most of my critiques are *out of scope* and revolve around experiments that would be nice to see. Though GAN-for-text is not simple, showing this type of setup for text would be interesting for a number of reasons, same for audio. Continual passes through the text, with a focus on clarity could also be helpful - the topic is dense, and the text does a good job describing what is happening, but it is always possible to further distill these complex topics, and relegate some useful-but-not-critical pieces to the appendix. These are minor quibbles, and overall this paper was an interesting and useful read, on a relatively underexplored topic. It shows theoretical results, and practical experimental demonstrations of the theories proposed. Given the stated goals of the paper, it performs admirably.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOptimal Strategies Against Generative Attacks\nGenerative neural models have improved dramatically recently. With this progress comes the risk that such models will be used to attack systems that rely on sensor data for authentication and anomaly detection. Many such learning systems are installed worldwide, protecting critical infrastructure or private data against malfunction and cyber attacks. We formulate the scenario of such an authentication system facing generative impersonation attacks, characterize it from a theoretical perspective and explore its practical implications. In particular, we ask fundamental theoretical questions in learning, statistics and information theory: How hard is it to detect a ``fake reality**? How much data does the attacker need to collect before it can reliably generate nominally-looking artificial data? Are there optimal strategies for the attacker or the authenticator? We cast the problem as a maximin game, characterize the optimal strategy for both attacker and authenticator in the general case, and provide the optimal strategies in closed form for the case of Gaussian source distributions. Our analysis reveals the structure of the optimal attack and the relative importance of data collection for both authenticator and attacker. Based on these insights we design practical learning approaches, and show that they result in models that are more robust to various attacks on real-world data. Code will be made publicly available upon publication.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n*Optimal Strategies Against Generative Attacks* describes just what the title implies - various dimensions of the problem of defending against a generative adversary, with theoretical discussion under limited settings, as well as practical experiments extending on the intuitions gained using the theoretical exploration under limited conditions. 
Particularly, one of the key stated goals of the paper is to *construct a theoretical framework for studying the security risk arising from generative models, and explore its practical implications*. Given this goal, the paper performs admirably. The appendix is extensive, and gives a lot more insight into the core paper itself. I am unclear on how the data augmentation experiment fits into the overall picture - perhaps a more detailed explanation of how and why this would be used to form an *attack* would help. The other experiments are sensible, and demonstrate reasonable and expected results. This is a solid paper, and most of my critiques are *out of scope* and revolve around experiments that would be nice to see. Though GAN-for-text is not simple, showing this type of setup for text would be interesting for a number of reasons, same for audio. Continual passes through the text, with a focus on clarity could also be helpful - the topic is dense, and the text does a good job describing what is happening, but it is always possible to further distill these complex topics, and relegate some useful-but-not-critical pieces to the appendix. These are minor quibbles, and overall this paper was an interesting and useful read, on a relatively underexplored topic. It shows theoretical results, and practical experimental demonstrations of the theories proposed. Given the stated goals of the paper, it performs admirably.", "role": "assistant" } ]
[ "*Optimal Strategies Against Generative Attacks* describes just what the title implies - various dimensions of the problem of defending against a generative adversary, with theoretical discussion under limited settings, as well as practical experiments extending on the intuitions gained using the theoretical exploration under limited conditions.", "Particularly, one of the key stated goals of the paper is to *construct a theorectical framework for studying the security risk arising from generative models, and explore its practical implications*.", "Given this goal, the paper performs admirably.", "The appendix is extensive, and gives a lot more insight into the core paper itself.", "I am unclear on how the data augmentation experiment fits into the overall picture - perhaps a more detailed explanation of how and why this would be used to form an *attack* would help.", "The other experiments are sensible, and demonstrate reasonable and expected results.", "This is a solid paper, and most of my critiques are *out of scope* and revolve around experiments that would be nice to see.", "Though GAN-for-text is not simple, showing this type of setup for text would be interesting for a number of reasons, same for audio.", "Continual passes through the text, with a focus on clarity could also be helpful - the topic is dense, and the text does a good job describing what is happening, but it is always possible to further distill these complex topics, and relegate some useful-but-not-critical pieces to the appendix.", "These are minor quibbles, and overall this paper was an interesting and useful read, on a relatively underexplored topic.", "It shows theorectical results, and practical experimental demonstrations of the theories proposed.", "Given the stated goals of the paper, it performs admirably." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 5, "materials_and_methods": 5, "praise": 8, "presentation_and_reporting": 3, "results_and_discussion": 3, "suggestion_and_solution": 3, "total": 12 }
0.166667
0
0.416667
0.416667
0.666667
0.25
0.25
0.25
12
1
0
1
1
1
1
1
1
2.416667
2.400884
0.015782
Mathematical Reasoning in Latent Space
We design and conduct a simple experiment to study whether neural networks can perform several steps of approximate reasoning in a fixed dimensional latent space. The set of rewrites (i.e. transformations) that can be successfully performed on a statement represents essential semantic features of the statement. We can compress this information by embedding the formula in a vector space, such that the vector associated with a statement can be used to predict whether a statement can be rewritten by other theorems. Predicting the embedding of a formula generated by some rewrite rule is naturally viewed as approximate reasoning in the latent space. In order to measure the effectiveness of this reasoning, we perform approximate deduction sequences in the latent space and use the resulting embedding to inform the semantic features of the corresponding formal statement (which is obtained by performing the corresponding rewrite sequence using real formulas). Our experiments show that graph neural networks can make non-trivial predictions about the rewrite-success of statements, even when they propagate predicted latent representations for several steps. Since our corpus of mathematical formulas includes a wide variety of mathematical disciplines, this experiment is a strong indicator for the feasibility of deduction in latent space in general.
= Summary Embeddings of mathematical theorems and rewrite rules are presented. An in-depth analysis of the resulting embeddings is presented, showing that a network can learn to *apply* embedded rewrite rules to embedded theorems, yielding results that are similar to the embedding of the rewritten theorem. [i.e., app*(emb(thm), emb(rule)) is near to emb(app(thm, rule))] This is an interesting property for the application of deep learning to automated theorem proving, though not directly a breakthrough result. = Strong/Weak Points + Simply a cute result, showing that proof search can remain in embedding space for a limited time horizon without having to switch back into the theorem prover environment. + Nicely designed experiments testing this (somewhat surprising) property empirically - Missed opportunity of better analysis of which theorem/rewrite rule properties are more likely to fail - Writing sometimes a bit overcomplicated (e.g., Sect. 4.5 could just be a figure of a commuting diagram and two sentences...) - Architecture choice unclear: Why are and separate networks. This is discussed on p4, but it*s unclear to me how keeping separate is benefitial for the analysis, and this is not picked up again explicitly again? = Recommendation Overall, this is a nice, somewhat surprising result. The writing and experiments could use some improvement, but I believe that the majority of the ICLR audience would enjoy seeing this result (even though it would have no impact on most people*s research) = Detailed Comments - page 4, Sect. 4.4: Architecture of would be nice (more than a linear layer?) - page 5, paragraph 3: *we from some* -> *we start from some* - p6par1: *much cheaper then computing* -> than - p6par6: *on formulas that with* -> no that - p6par7: *measure how rate* -> *measure the rate* - p8par1: *approximate embedding - is undefined and should probably be (this is also the case in the caption of Fig. 5), and should probably be included as well. However, I don*t understand the use of here. If Fig. 4 is following Fig. 3 in considering , then Fig. 4 should plot the performance of, e.g., (i.e., applied to approximate embedding of and (*true*) embedding of ). I believe that*s what *Pred (One Step)* expresses, but it would maybe be generally helpful to be more precise about the notation in Sect. 6.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMathematical Reasoning in Latent Space\nWe design and conduct a simple experiment to study whether neural networks can perform several steps of approximate reasoning in a fixed dimensional latent space. The set of rewrites (i.e. transformations) that can be successfully performed on a statement represents essential semantic features of the statement. We can compress this information by embedding the formula in a vector space, such that the vector associated with a statement can be used to predict whether a statement can be rewritten by other theorems. Predicting the embedding of a formula generated by some rewrite rule is naturally viewed as approximate reasoning in the latent space. In order to measure the effectiveness of this reasoning, we perform approximate deduction sequences in the latent space and use the resulting embedding to inform the semantic features of the corresponding formal statement (which is obtained by performing the corresponding rewrite sequence using real formulas). Our experiments show that graph neural networks can make non-trivial predictions about the rewrite-success of statements, even when they propagate predicted latent representations for several steps. Since our corpus of mathematical formulas includes a wide variety of mathematical disciplines, this experiment is a strong indicator for the feasibility of deduction in latent space in general.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n= Summary Embeddings of mathematical theorems and rewrite rules are presented. An in-depth analysis of the resulting embeddings is presented, showing that a network can learn to *apply* embedded rewrite rules to embedded theorems, yielding results that are similar to the embedding of the rewritten theorem. [i.e., app*(emb(thm), emb(rule)) is near to emb(app(thm, rule))] This is an interesting property for the application of deep learning to automated theorem proving, though not directly a breakthrough result. 
= Strong/Weak Points + Simply a cute result, showing that proof search can remain in embedding space for a limited time horizon without having to switch back into the theorem prover environment. + Nicely designed experiments testing this (somewhat surprising) property empirically - Missed opportunity of better analysis of which theorem/rewrite rule properties are more likely to fail - Writing sometimes a bit overcomplicated (e.g., Sect. 4.5 could just be a figure of a commuting diagram and two sentences...) - Architecture choice unclear: Why are and separate networks. This is discussed on p4, but it*s unclear to me how keeping separate is benefitial for the analysis, and this is not picked up again explicitly again? = Recommendation Overall, this is a nice, somewhat surprising result. The writing and experiments could use some improvement, but I believe that the majority of the ICLR audience would enjoy seeing this result (even though it would have no impact on most people*s research) = Detailed Comments - page 4, Sect. 4.4: Architecture of would be nice (more than a linear layer?) - page 5, paragraph 3: *we from some* -> *we start from some* - p6par1: *much cheaper then computing* -> than - p6par6: *on formulas that with* -> no that - p6par7: *measure how rate* -> *measure the rate* - p8par1: *approximate embedding - is undefined and should probably be (this is also the case in the caption of Fig. 5), and should probably be included as well. However, I don*t understand the use of here. If Fig. 4 is following Fig. 3 in considering , then Fig. 4 should plot the performance of, e.g., (i.e., applied to approximate embedding of and (*true*) embedding of ). I believe that*s what *Pred (One Step)* expresses, but it would maybe be generally helpful to be more precise about the notation in Sect. 6.", "role": "assistant" } ]
[ "= Summary Embeddings of mathematical theorems and rewrite rules are presented.", "An in-depth analysis of the resulting embeddings is presented, showing that a network can learn to *apply* embedded rewrite rules to embedded theorems, yielding results that are similar to the embedding of the rewritten theorem.", "[i.e., app*(emb(thm), emb(rule)) is near to emb(app(thm, rule))] This is an interesting property for the application of deep learning to automated theorem proving, though not directly a breakthrough result.", "= Strong/Weak Points + Simply a cute result, showing that proof search can remain in embedding space for a limited time horizon without having to switch back into the theorem prover environment.", "+ Nicely designed experiments testing this (somewhat surprising) property empirically - Missed opportunity of better analysis of which theorem/rewrite rule properties are more likely to fail - Writing sometimes a bit overcomplicated (e.g., Sect.", "4.5 could just be a figure of a commuting diagram and two sentences...) - Architecture choice unclear: Why are and separate networks.", "This is discussed on p4, but it*s unclear to me how keeping separate is benefitial for the analysis, and this is not picked up again explicitly again?", "= Recommendation Overall, this is a nice, somewhat surprising result.", "The writing and experiments could use some improvement, but I believe that the majority of the ICLR audience would enjoy seeing this result (even though it would have no impact on most people*s research) = Detailed Comments - page 4, Sect.", "4.4: Architecture of would be nice (more than a linear layer?)", "- page 5, paragraph 3: *we from some* -> *we start from some* - p6par1: *much cheaper then computing* -> than - p6par6: *on formulas that with* -> no that - p6par7: *measure how rate* -> *measure the rate* - p8par1: *approximate embedding - is undefined and should probably be (this is also the case in the caption of Fig.", "5), and should probably be included as well.", "However, I don*t understand the use of here.", "If Fig.", "4 is following Fig.", "3 in considering , then Fig.", "4 should plot the performance of, e.g., (i.e., applied to approximate embedding of and (*true*) embedding of ).", "I believe that*s what *Pred (One Step)* expresses, but it would maybe be generally helpful to be more precise about the notation in Sect.", "6." ]
{ "criticism": 2, "example": 7, "importance_and_relevance": 1, "materials_and_methods": 5, "praise": 3, "presentation_and_reporting": 10, "results_and_discussion": 6, "suggestion_and_solution": 7, "total": 19 }
0.105263
0.368421
0.052632
0.263158
0.157895
0.526316
0.315789
0.368421
19
1
1
1
1
1
1
1
1
2.157895
1.589727
0.568167
Learning to Balance: Bayesian Meta-Learning for Imbalanced and Out-of-distribution Tasks
While tasks could come with varying the number of instances and classes in realistic settings, the existing meta-learning approaches for few-shot classification assume that number of instances per task and class is fixed. Due to such restriction, they learn to equally utilize the meta-knowledge across all the tasks, even when the number of instances per task and class largely varies. Moreover, they do not consider distributional difference in unseen tasks, on which the meta-knowledge may have less usefulness depending on the task relatedness. To overcome these limitations, we propose a novel meta-learning model that adaptively balances the effect of the meta-learning and task-specific learning within each task. Through the learning of the balancing variables, we can decide whether to obtain a solution by relying on the meta-knowledge or task-specific learning. We formulate this objective into a Bayesian inference framework and tackle it using variational inference. We validate our Bayesian Task-Adaptive Meta-Learning (Bayesian TAML) on two realistic task- and class-imbalanced datasets, on which it significantly outperforms existing meta-learning approaches. Further ablation study confirms the effectiveness of each balancing component and the Bayesian learning framework.
The paper proposes a Bayesian approach for meta learning in settings were the tasks might be OOD or have imbalanced class distribution. The proposed approach has 3 task-specific balancing variables with a prior and an inference network. Using an amortized inference scheme, the model unifies the meta-learning objective loss with the lower bound of probabilistic model marginal likelihood. The paper is well-written and well-motivated. I only have some minor comments and questions: - Can you add the standard errors for the results in Section 5.2 (maybe at least in the Appendix)? - Specifically it would be interesting to see if the results for analyzing the class imbalance variable are statistically significant; specially in light of the recent work on the effects of importance weighting in DL (see “What is the Effect of Importance Weighting in Deep Learning?” by Byrd and Lipton) which essentially question the value of importance weighting for handling class imbalance in various DL settings. - For the experiments in Section 5.1 can you also report the values for the three task-specific balancing variables? (Maybe in the Appendix). Minor: “obtains significantly improves over” -> *significantly improves over* Overall, I found the paper interesting and practically useful, although I believe some additions to the empirical evaluation can improve the impact of the paper.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning to Balance: Bayesian Meta-Learning for Imbalanced and Out-of-distribution Tasks\nWhile tasks could come with varying the number of instances and classes in realistic settings, the existing meta-learning approaches for few-shot classification assume that number of instances per task and class is fixed. Due to such restriction, they learn to equally utilize the meta-knowledge across all the tasks, even when the number of instances per task and class largely varies. Moreover, they do not consider distributional difference in unseen tasks, on which the meta-knowledge may have less usefulness depending on the task relatedness. To overcome these limitations, we propose a novel meta-learning model that adaptively balances the effect of the meta-learning and task-specific learning within each task. Through the learning of the balancing variables, we can decide whether to obtain a solution by relying on the meta-knowledge or task-specific learning. We formulate this objective into a Bayesian inference framework and tackle it using variational inference. We validate our Bayesian Task-Adaptive Meta-Learning (Bayesian TAML) on two realistic task- and class-imbalanced datasets, on which it significantly outperforms existing meta-learning approaches. Further ablation study confirms the effectiveness of each balancing component and the Bayesian learning framework.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes a Bayesian approach for meta learning in settings were the tasks might be OOD or have imbalanced class distribution. The proposed approach has 3 task-specific balancing variables with a prior and an inference network. Using an amortized inference scheme, the model unifies the meta-learning objective loss with the lower bound of probabilistic model marginal likelihood. The paper is well-written and well-motivated. I only have some minor comments and questions: - Can you add the standard errors for the results in Section 5.2 (maybe at least in the Appendix)? 
- Specifically it would be interesting to see if the results for analyzing the class imbalance variable are statistically significant; specially in light of the recent work on the effects of importance weighting in DL (see “What is the Effect of Importance Weighting in Deep Learning?” by Byrd and Lipton) which essentially question the value of importance weighting for handling class imbalance in various DL settings. - For the experiments in Section 5.1 can you also report the values for the three task-specific balancing variables? (Maybe in the Appendix). Minor: “obtains significantly improves over” -> *significantly improves over* Overall, I found the paper interesting and practically useful, although I believe some additions to the empirical evaluation can improve the impact of the paper.", "role": "assistant" } ]
[ "The paper proposes a Bayesian approach for meta learning in settings were the tasks might be OOD or have imbalanced class distribution.", "The proposed approach has 3 task-specific balancing variables with a prior and an inference network.", "Using an amortized inference scheme, the model unifies the meta-learning objective loss with the lower bound of probabilistic model marginal likelihood.", "The paper is well-written and well-motivated.", "I only have some minor comments and questions: - Can you add the standard errors for the results in Section 5.2 (maybe at least in the Appendix)?", "- Specifically it would be interesting to see if the results for analyzing the class imbalance variable are statistically significant; specially in light of the recent work on the effects of importance weighting in DL (see “What is the Effect of Importance Weighting in Deep Learning?” by Byrd and Lipton) which essentially question the value of importance weighting for handling class imbalance in various DL settings.", "- For the experiments in Section 5.1 can you also report the values for the three task-specific balancing variables?", "(Maybe in the Appendix).", "Minor: “obtains significantly improves over” -> *significantly improves over* Overall, I found the paper interesting and practically useful, although I believe some additions to the empirical evaluation can improve the impact of the paper." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 6, "praise": 2, "presentation_and_reporting": 3, "results_and_discussion": 5, "suggestion_and_solution": 4, "total": 9 }
0
0
0.222222
0.666667
0.222222
0.333333
0.555556
0.444444
9
0
0
1
1
1
1
1
1
2.444444
2.191926
0.252519
Reformer: The Efficient Transformer
Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O(L^2) to O(L), where L is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.
This manuscript presents a number of algorithmic techniques to reduce the computational and space complexity of Transformer, a powerful and very popular deep learning model for natural language processing (NLP). Although Transformer has revolutionized the field of NLP, many small groups cannot make a full use of it due to lack of necessary computational resources. As such, it is very important to improve the space and computational complexity of this popular deep model. The techniques presented in this manuscript seem to be very reasonable and the experimental results also indicate that they are effective. My major concern is that the authors shall present more detailed experimental results. In addition to bits per dim, it will also better if the authors can evaluate the performance in terms of other metrics.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nReformer: The Efficient Transformer\nLarge Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O(L^2) to O(L), where L is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis manuscript presents a number of algorithmic techniques to reduce the computational and space complexity of Transformer, a powerful and very popular deep learning model for natural language processing (NLP). Although Transformer has revolutionized the field of NLP, many small groups cannot make a full use of it due to lack of necessary computational resources. As such, it is very important to improve the space and computational complexity of this popular deep model. The techniques presented in this manuscript seem to be very reasonable and the experimental results also indicate that they are effective. My major concern is that the authors shall present more detailed experimental results. In addition to bits per dim, it will also better if the authors can evaluate the performance in terms of other metrics.", "role": "assistant" } ]
[ "This manuscript presents a number of algorithmic techniques to reduce the computational and space complexity of Transformer, a powerful and very popular deep learning model for natural language processing (NLP).", "Although Transformer has revolutionized the field of NLP, many small groups cannot make a full use of it due to lack of necessary computational resources.", "As such, it is very important to improve the space and computational complexity of this popular deep model.", "The techniques presented in this manuscript seem to be very reasonable and the experimental results also indicate that they are effective.", "My major concern is that the authors shall present more detailed experimental results.", "In addition to bits per dim, it will also better if the authors can evaluate the performance in terms of other metrics." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 6, "praise": 1, "presentation_and_reporting": 0, "results_and_discussion": 3, "suggestion_and_solution": 3, "total": 6 }
0
0
0.333333
1
0.166667
0
0.5
0.5
6
0
0
1
1
1
0
1
1
2.5
1.726661
0.773339
Intrinsically Motivated Discovery of Diverse Patterns in Self-Organizing Systems
In many complex dynamical systems, artificial or natural, one can observe self-organization of patterns emerging from local rules. Cellular automata, like the Game of Life (GOL), have been widely used as abstract models enabling the study of various aspects of self-organization and morphogenesis, such as the emergence of spatially localized patterns. However, findings of self-organized patterns in such models have so far relied on manual tuning of parameters and initial states, and on the human eye to identify interesting patterns. In this paper, we formulate the problem of automated discovery of diverse self-organized patterns in such high-dimensional complex dynamical systems, as well as a framework for experimentation and evaluation. Using a continuous GOL as a testbed, we show that recent intrinsically-motivated machine learning algorithms (POP-IMGEPs), initially developed for learning of inverse models in robotics, can be transposed and used in this novel application area. These algorithms combine intrinsically-motivated goal exploration and unsupervised learning of goal space representations. Goal space representations describe the interesting features of patterns for which diverse variations should be discovered. In particular, we compare various approaches to define and learn goal space representations from the perspective of discovering diverse spatially localized patterns. Moreover, we introduce an extension of a state-of-the-art POP-IMGEP algorithm which incrementally learns a goal representation using a deep auto-encoder, and the use of CPPN primitives for generating initialization parameters. We show that it is more efficient than several baselines and equally efficient as a system pre-trained on a hand-made database of patterns identified by human experts.
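The abstract above describes an intrinsically-motivated goal exploration process: sample a goal in a (possibly learned) goal space, reuse the previously explored parameters whose outcome landed closest to that goal, perturb them, run the system, and add the result to the history. A minimal sketch of that loop is given below; run_system, encode, the parameter dimensionality, and the mutation noise are stand-ins for the continuous Game of Life, the learned auto-encoder goal space, and the CPPN parameterisation, not the paper's actual components.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_system(params):
    """Stand-in for rolling out the dynamical system from its parameters to a final pattern."""
    return np.tanh(np.outer(params, params))

def encode(pattern):
    """Stand-in goal-space encoder (the paper learns one with a deep auto-encoder)."""
    return np.array([pattern.mean(), pattern.std()])

history = []                                   # (parameters, goal-space embedding) pairs

# Bootstrap phase: random parameter sampling.
for _ in range(20):
    p = rng.normal(size=8)
    history.append((p, encode(run_system(p))))

# Goal-directed exploration phase.
for _ in range(200):
    goal = rng.uniform(-1.0, 1.0, size=2)      # sample a target point in goal space
    # Reuse the past parameters whose outcome landed closest to the goal...
    nearest_params, _ = min(history, key=lambda h: np.linalg.norm(h[1] - goal))
    # ...and perturb them to try to reach it.
    candidate = nearest_params + 0.1 * rng.normal(size=8)
    history.append((candidate, encode(run_system(candidate))))

print(len(history), "parameter sets explored")
```

The diversity of discovered patterns then depends heavily on how encode is defined or learned, which is exactly the comparison the paper carries out.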
The focus of the presented paper is on formulating the automated discovery of self-organized patterns in high-dimensional dynamic systems. The introduced framework uses cellular automata (game of life) as a testbed for experimentation and evaluation, together with existing machine learning algorithms (POP-IMGEPs). The goal of the paper is to show that these algorithms can be used to discover and represent features of patterns. Moreover, an extension of SOTA algorithms is introduced and several approaches to define goal space representations are compared. Overall, I have the impression this is an interesting paper that could be accepted to ICLR. The idea of applying IMGEPs to explore parameters of a dynamic system is novel and interesting, which could also stimulate further research in this field. Furthermore, the paper is well-written, technically sound, and the results are interesting. The overall contribution of the paper is in applying IMGEP algorithms to exploring parameters of dynamic systems and in comparing different algorithms along with an extensive set of experiments. As a point of criticism, a lot of (interesting) material was pushed to the Appendix. Having to resolve the references to the appendix makes reading the paper harder. Moreover, given that this paper has more than 35 pages of appendix material, it seems this work would be better suited for a journal than for a conference. There is a reason for papers to have a page limit and this work circumvents this limit by presenting a lot of additional material. Therefore, I am not willing to strongly support this work. Specific Comments: - Section 3.1: It is not clear how the initial system state is established. In Section 3.1, the text states that *parameters are randomly sampled and explored* before the process starts, but it is not clear why a random sampling is used and what this means for the subsequent sampling. Later in the text (Section 3.3) it becomes clearer, but at this point it remains unclear. - Section 3.1: *distribution over a hypercube in mathcal{T} chosen to be large enough to bias exploration towards the frontiers of known goals to incentivize diversity.* This sentence is not clear and needs more details. How is the distribution chosen exactly? - Section 3.2 appears a bit repetitive and could be more concise. I don*t think it is necessary here to contrast manual vs learned features of the goal space. - Section 3.2 (P3): the last sentence of this paragraph reads as if there exist no approaches for VAEs in online settings. This should be toned down or backed up by a reference. - Section 3.2: (last sentence): it is not clear how the history is used exactly to train the network. Which strategy is used to sample from the history of observations? - Section 3.3: What is meant by *The CPPNs are used of the parameters {theta}*? The details provided after this sentence are not clear and need more details. - Section 4.2: Please provide more details on what a *very large* dataset means. - Section 4.2: *HGS algorithm* is not defined. - Section 5: It seems unnecessary to explain what t-SNE does as a method.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nIntrinsically Motivated Discovery of Diverse Patterns in Self-Organizing Systems\nIn many complex dynamical systems, artificial or natural, one can observe self-organization of patterns emerging from local rules. Cellular automata, like the Game of Life (GOL), have been widely used as abstract models enabling the study of various aspects of self-organization and morphogenesis, such as the emergence of spatially localized patterns. However, findings of self-organized patterns in such models have so far relied on manual tuning of parameters and initial states, and on the human eye to identify interesting patterns. In this paper, we formulate the problem of automated discovery of diverse self-organized patterns in such high-dimensional complex dynamical systems, as well as a framework for experimentation and evaluation. Using a continuous GOL as a testbed, we show that recent intrinsically-motivated machine learning algorithms (POP-IMGEPs), initially developed for learning of inverse models in robotics, can be transposed and used in this novel application area. These algorithms combine intrinsically-motivated goal exploration and unsupervised learning of goal space representations. Goal space representations describe the interesting features of patterns for which diverse variations should be discovered. In particular, we compare various approaches to define and learn goal space representations from the perspective of discovering diverse spatially localized patterns. Moreover, we introduce an extension of a state-of-the-art POP-IMGEP algorithm which incrementally learns a goal representation using a deep auto-encoder, and the use of CPPN primitives for generating initialization parameters. We show that it is more efficient than several baselines and equally efficient as a system pre-trained on a hand-made database of patterns identified by human experts.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe focus of the presented paper is on formulating the automated discovery of self-organized patterns in high-dimensional dynamic systems. The introduced framework uses cellular automata (game of life) as a testbed for experimentation and evaluation and existing machine learning algorithms (POP-IMGEPs). The goal of the paper is to show that these algorithms can be used to discover and represent features of patterns. Moreover, an extension of SOTA algorithms is introduced and several approaches to define goal space representations are compared. Overall, I have the impression this is an interesting paper that could be accepted to ICLR. The idea of applying IMGEPs to explore parameters of a dynamic system is novel and interesting, which could also simulate further research in this field. Furthermore, the paper well-written, technically sound, and the results are interesting. The overall contribution of the paper is in applying IMGEP algorithms to exploring parameters of dynamic systems and in comparing different algorithms along with an extensive set of experiments. As a point of criticism, a lot of (interesting) material was pushed to the Appendix. Resolving the references makes reading the paper harder. Moreover, given that this paper has more than 35 pages appendix material, it seems this work would better be suited for a journal as for a conference. There is a reason for papers to have a page limit and this work circumvents this limit by presenting a lot of additional material. Therefore, I am not willing to strongly support this work. Specific Comments: - Section 3.1: It is not clear how the initial system state is established. In Section 3.1. the text states that *parameters are randomly sampled and explored* before the process starts, but it is not clear why a random sampling is used and what this means for the subsequent sampling. Later in the text (3.3) it becomes more clear, but here this appears too unclear. - Section 3.1: *distribution over a hypercube in mathcal{T} chosen to be large enough to bias exploration towards the frontiers of known goals to incentivize diversity.* This sentence is not clear and needs more details. How is the distribution chosen exactly? - Section 3.2 appears a bit repetitive and could be more concise. I don*t think it is necessary here to contrast manual vs learned features of the goal space. - Section 3.2 (P3): the last sentence of this paragraph reads as if there exists no approaches for VEAs in online settings. This should be toned down or backed up by a reference. - Section 3.2: (last sentence): it is not clear how the history is used exactly to train the network. Which strategy is used to sample from the history of observations? - Section 3.3: What is meant by *The CPPNs are used of the parameters {theta}*? The details provided after this sentence are not clear and need more details. - Section 4.2: Please provide more details what *very large* dataset means. - Section 4.2: *HGS algorithm* is not defined. - Section 5: It seems unnecessary to explain what t-SNE does as a method.", "role": "assistant" } ]
[ "The focus of the presented paper is on formulating the automated discovery of self-organized patterns in high-dimensional dynamic systems.", "The introduced framework uses cellular automata (game of life) as a testbed for experimentation and evaluation and existing machine learning algorithms (POP-IMGEPs).", "The goal of the paper is to show that these algorithms can be used to discover and represent features of patterns.", "Moreover, an extension of SOTA algorithms is introduced and several approaches to define goal space representations are compared.", "Overall, I have the impression this is an interesting paper that could be accepted to ICLR.", "The idea of applying IMGEPs to explore parameters of a dynamic system is novel and interesting, which could also simulate further research in this field.", "Furthermore, the paper well-written, technically sound, and the results are interesting.", "The overall contribution of the paper is in applying IMGEP algorithms to exploring parameters of dynamic systems and in comparing different algorithms along with an extensive set of experiments.", "As a point of criticism, a lot of (interesting) material was pushed to the Appendix.", "Resolving the references makes reading the paper harder.", "Moreover, given that this paper has more than 35 pages appendix material, it seems this work would better be suited for a journal as for a conference.", "There is a reason for papers to have a page limit and this work circumvents this limit by presenting a lot of additional material.", "Therefore, I am not willing to strongly support this work.", "Specific Comments: - Section 3.1: It is not clear how the initial system state is established.", "In Section 3.1. the text states that *parameters are randomly sampled and explored* before the process starts, but it is not clear why a random sampling is used and what this means for the subsequent sampling.", "Later in the text (3.3) it becomes more clear, but here this appears too unclear.", "- Section 3.1: *distribution over a hypercube in mathcal{T} chosen to be large enough to bias exploration towards the frontiers of known goals to incentivize diversity.", "* This sentence is not clear and needs more details.", "How is the distribution chosen exactly?", "- Section 3.2 appears a bit repetitive and could be more concise.", "I don*t think it is necessary here to contrast manual vs learned features of the goal space.", "- Section 3.2 (P3): the last sentence of this paragraph reads as if there exists no approaches for VEAs in online settings.", "This should be toned down or backed up by a reference.", "- Section 3.2: (last sentence): it is not clear how the history is used exactly to train the network.", "Which strategy is used to sample from the history of observations?", "- Section 3.3: What is meant by *The CPPNs are used of the parameters {theta}*?", "The details provided after this sentence are not clear and need more details.", "- Section 4.2: Please provide more details what *very large* dataset means.", "- Section 4.2: *HGS algorithm* is not defined.", "- Section 5: It seems unnecessary to explain what t-SNE does as a method." ]
{ "criticism": 11, "example": 7, "importance_and_relevance": 8, "materials_and_methods": 15, "praise": 5, "presentation_and_reporting": 13, "results_and_discussion": 1, "suggestion_and_solution": 7, "total": 30 }
0.366667
0.233333
0.266667
0.5
0.166667
0.433333
0.033333
0.233333
30
1
1
1
1
1
1
1
1
2.233333
-2.327789
4.561122
SEED RL: Scalable and Efficient Deep-RL with Accelerated Central Inference
We present a modern scalable reinforcement learning agent called SEED (Scalable, Efficient Deep-RL). By effectively utilizing modern accelerators, we show that it is not only possible to train on millions of frames per second but also to lower the cost of experiments compared to current methods. We achieve this with a simple architecture that features centralized inference and an optimized communication layer. SEED adopts two state-of-the-art distributed algorithms, IMPALA/V-trace (policy gradients) and R2D2 (Q-learning), and is evaluated on Atari-57, DeepMind Lab and Google Research Football. We improve the state of the art on Football and are able to reach state of the art on Atari-57 twice as fast in wall-time. For the scenarios we consider, a 40% to 80% cost reduction for running experiments is achieved. The implementation along with experiments is open-sourced so results can be reproduced and novel ideas tried out.
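The core architectural point in the abstract above is centralized inference: instead of every actor holding a copy of the model, actors only step their environments and send observations to a central learner, which batches them and runs the policy once per step on the accelerator. The toy single-process sketch below illustrates that batching pattern; the linear policy, the environment stand-in, and all sizes are assumptions made for illustration, and the real system would place an RPC layer between actors and the learner.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, N_ACTIONS, N_ACTORS = 16, 4, 32
policy_weights = 0.01 * rng.normal(size=(OBS_DIM, N_ACTIONS))

def central_inference(obs_batch):
    """Observations from all actors arrive at the learner, which runs one batched
    forward pass on the accelerator and sends each actor its action back."""
    logits = obs_batch @ policy_weights
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    return np.array([rng.choice(N_ACTIONS, p=p) for p in probs])

def actor_step(obs, action):
    """Stand-in environment transition on the (model-free) actor side."""
    reward = float(action == obs.argmax() % N_ACTIONS)
    return rng.normal(size=OBS_DIM), reward

observations = rng.normal(size=(N_ACTORS, OBS_DIM))
for _ in range(100):
    actions = central_inference(observations)          # one batched policy call per step
    transitions = [actor_step(o, a) for o, a in zip(observations, actions)]
    observations = np.stack([t[0] for t in transitions])
    rewards = [t[1] for t in transitions]              # would be shipped to the learner
```

Because the model never leaves the learner, actors can be cheap CPU machines and model updates do not have to be broadcast, which is where the claimed cost reduction comes from.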
The paper proposes a new reinforcement learning agent architecture which is significantly faster and far less costly than previous distributed architectures. To this end, the paper proposes a new architecture that utilizes modern accelerators more efficiently. The paper reads very well and the experimental results indeed demonstrate improvement. Nevertheless, even though I have worked in deep learning for years and also have some experience with reinforcement learning, I am not in a position to provide an expert judgment on the novelty of the work. I do not know if ICLR is the right place for the paper (I would probably suggest a system architectures conference for better assessment of the work).
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSEED RL: Scalable and Efficient Deep-RL with Accelerated Central Inference\nWe present a modern scalable reinforcement learning agent called SEED (Scalable, Efficient Deep-RL). By effectively utilizing modern accelerators, we show that it is not only possible to train on millions of frames per second but also to lower the cost. of experiments compared to current methods. We achieve this with a simple architecture that features centralized inference and an optimized communication layer. SEED adopts two state-of-the-art distributed algorithms, IMPALA/V-trace (policy gradients) and R2D2 (Q-learning), and is evaluated on Atari-57, DeepMind Lab and Google Research Football. We improve the state of the art on Football and are able to reach state of the art on Atari-57 twice as fast in wall-time. For the scenarios we consider, a 40% to 80% cost reduction for running experiments is achieved. The implementation along with experiments is open-sourced so results can be reproduced and novel ideas tried out.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes a new reinforcement learning agent architecture which is significantly faster and way less costly than previously distributed architectures. To this end, the paper proposes a new architecture that utilizes modern accelerators more efficiently. The paper reads very well and the experimental results indeed demonstrate improvement. Nevertheless, even though working in deep learning for years and have also some experience with Reinforcement learning I am not in the position to provide an expert judgment on the novelty of the work. I do not know if ICLR is the right place of the paper (I would probably suggest a system architectures conference for better assessment of the work).", "role": "assistant" } ]
[ "The paper proposes a new reinforcement learning agent architecture which is significantly faster and way less costly than previously distributed architectures.", "To this end, the paper proposes a new architecture that utilizes modern accelerators more efficiently.", "The paper reads very well and the experimental results indeed demonstrate improvement.", "Nevertheless, even though working in deep learning for years and have also some experience with Reinforcement learning I am not in the position to provide an expert judgment on the novelty of the work.", "I do not know if ICLR is the right place of the paper (I would probably suggest a system architectures conference for better assessment of the work)." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 4, "praise": 1, "presentation_and_reporting": 0, "results_and_discussion": 1, "suggestion_and_solution": 1, "total": 5 }
0.4
0
0.6
0.8
0.2
0
0.2
0.2
5
1
0
1
1
1
0
1
1
2.4
1.389924
1.010076
Meta-Learning with Warped Gradient Descent
Learning an efficient update rule from data that promotes rapid learning of new tasks from the same distribution remains an open problem in meta-learning. Typically, previous works have approached this issue either by attempting to train a neural network that directly produces updates or by attempting to learn better initialisations or scaling factors for a gradient-based update rule. Both these approaches pose challenges. On one hand, directly producing an update forgoes a useful inductive bias and can easily lead to non-converging behaviour. On the other hand, approaches that try to control a gradient-based update rule typically resort to computing gradients through the learning process to obtain their meta-gradients, leading to methods that can not scale beyond few-shot task adaptation. In this work we propose Warped Gradient Descent (WarpGrad), a method that intersects these approaches to mitigate their limitations. WarpGrad meta-learns an efficiently parameterised preconditioning matrix that facilitates gradient descent across the task distribution. Preconditioning arises by interleaving non-linear layers, referred to as warp-layers, between the layers of a task-learner. Warp-layers are meta-learned without backpropagating through the task training process in a manner similar to methods that learn to directly produce updates. WarpGrad is computationally efficient, easy to implement, and can scale to arbitrarily large meta-learning problems. We provide a geometrical interpretation of the approach and evaluate its effectiveness in a variety of settings, including few-shot, standard supervised, continual and reinforcement learning.
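A rough sketch of the training pattern the abstract describes: warp layers are interleaved between task layers and held fixed during task adaptation, so they shape (precondition) the task-parameter gradients; their own parameters are then meta-updated from task losses collected along the adaptation trajectory, without backpropagating through the parameter updates themselves. PyTorch is an assumed framework here, and the regression task, layer sizes, learning rates, and the exact meta-objective are placeholders rather than the paper's choices.

```python
import torch
from torch import nn

torch.manual_seed(0)

task_layers = nn.ModuleList([nn.Linear(8, 32), nn.Linear(32, 1)])
warp_layers = nn.ModuleList([nn.Sequential(nn.Linear(32, 32), nn.Tanh())])

def forward(x):
    h = task_layers[0](x)
    h = warp_layers[0](h)              # warp layer interleaved between task layers
    return task_layers[1](h)

meta_opt = torch.optim.Adam(warp_layers.parameters(), lr=1e-3)

for meta_step in range(50):
    for layer in task_layers:
        layer.reset_parameters()       # fresh task-learner for each sampled task
    w_true = torch.randn(8, 1)         # a random regression problem stands in for a task
    task_opt = torch.optim.SGD(task_layers.parameters(), lr=1e-2)
    warp_grads = [torch.zeros_like(p) for p in warp_layers.parameters()]

    for _ in range(5):                 # inner loop: adapt only the task parameters
        x = torch.randn(64, 8)
        task_opt.zero_grad()
        ((forward(x) - x @ w_true) ** 2).mean().backward()
        task_opt.step()                # warp layers stay fixed, shaping this update

        # Meta-objective at this point of the trajectory: gradients w.r.t. the warp
        # parameters only, with no backpropagation through the update above.
        x_val = torch.randn(64, 8)
        meta_loss = ((forward(x_val) - x_val @ w_true) ** 2).mean()
        grads = torch.autograd.grad(meta_loss, list(warp_layers.parameters()))
        warp_grads = [acc + g for acc, g in zip(warp_grads, grads)]

    for p, g in zip(warp_layers.parameters(), warp_grads):
        p.grad = g / 5                 # average the trajectory's meta-gradients
    meta_opt.step()
```

Because the meta-gradient is taken at points along the trajectory rather than through it, memory does not grow with the number of inner steps, which is the scalability argument made in the abstract.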
The authors propose warped gradient descent (WarpGrad), an optimisation framework for facilitating gradient-based meta-learning. WarpGrad interleaves, within the learner, meta-learned warp-layers that implicitly precondition the gradients of the task-specific parameters during backpropagation. In contrast to the linear projection layers employed in T-Nets, warp-layers are unrestricted in form and induce a full Jacobian preconditioning matrix. The warp layers are meta-learned in a trajectory-agnostic fashion, thus obviating the need to backpropagate through the gradient steps to compute the updates of their parameters. The framework is readily applicable to standard gradient-based meta-learners, and is shown to yield a significant boost in performance on both few-shot and multi-shot learning tasks, as well as to have promising applications to continual learning. The paper is well-structured and well-motivated: the problem statement is clearly laid out from the outset, with appropriate context, and explanations supported well diagrammatically. The idea, and perhaps more so the applications thereof, is seemingly novel and its explanation is given straightforwardly while avoiding getting bogged down in technical details. Clear comparisons and distinctions with previous work are drawn - for instance with the update rules for several gradient-based methods - MAML and its derivatives - being laid out in standard form (though it might also be nice to echo this with the WarpGrad update rule). The experiments are logically ordered with the initial set covering the standard few-shot learning benchmarks with appropriate baselines (though the results for few-shot tieredImageNet are lacking in this respect), with most essential details given in the main text and full details, including those related to the datasets in question and hyperparameter selection, documented in Appendix H. Meta-learning does seem uniquely well-positioned for tackling the task of continual learning and it*s heartening to see this being explored here with a degree of success - it would be interesting to see how its performance compares with standard continual learning methods (such as EWC) on the same task. Particularly impressive is the depth of the Appendices regarding the experiments, which both elaborate on the details given in the main text and provide additional ablation studies. Minor errors: - Page 7: *a neural network that dynamically **adapt** the parameters...* - should be *adapts* - Page 22: *where is the **identify** matrix* - should be *identity* - Page 27: *The task target function is **partition** into 5 sets of **sub-task*** - should be *partitioned* and *sub-tasks*, respectively
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMeta-Learning with Warped Gradient Descent\nLearning an efficient update rule from data that promotes rapid learning of new tasks from the same distribution remains an open problem in meta-learning. Typically, previous works have approached this issue either by attempting to train a neural network that directly produces updates or by attempting to learn better initialisations or scaling factors for a gradient-based update rule. Both these approaches pose challenges. On one hand, directly producing an update forgoes a useful inductive bias and can easily lead to non-converging behaviour. On the other hand, approaches that try to control a gradient-based update rule typically resort to computing gradients through the learning process to obtain their meta-gradients, leading to methods that can not scale beyond few-shot task adaptation. In this work we propose Warped Gradient Descent (WarpGrad), a method that intersects these approaches to mitigate their limitations. WarpGrad meta-learns an efficiently parameterised preconditioning matrix that facilitates gradient descent across the task distribution. Preconditioning arises by interleaving non-linear layers, referred to as warp-layers, between the layers of a task-learner. Warp-layers are meta-learned without backpropagating through the task training process in a manner similar to methods that learn to directly produce updates. WarpGrad is computationally efficient, easy to implement, and can scale to arbitrarily large meta-learning problems. We provide a geometrical interpretation of the approach and evaluate its effectiveness in a variety of settings, including few-shot, standard supervised, continual and reinforcement learning.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors propose warped gradient descent (WarpGrad) an optimisation framework for facilitating gradient-based meta-learning. 
WarpGrad interleaves within the learner meta-learned warp-layers that implicitly precondition the gradients of the task-specific parameters during backpropagation. In contrast to the linear projection layers employed in T-Nets, warp-layers are unrestricted in form and induce a full Jacobian preconditioning matrix. The warp layers are meta-learned in a trajectory-agnostic fashion, thus obviating the need to backpropagate through the gradient steps to compute the updates of their parameters. The framework is readily applicable to standard gradient-based meta-learners, and is shown to yield a significant boost in performance on both few-shot and multi-shot learning tasks, as well as to have promising applications to continual learning. The paper is well-structured and well-motivated: the problem statement is clearly laid out from the outset, with appropriate context, and explanations supported well diagramatically. The idea, and perhaps more so the applications thereof, is seemingly novel and its explanation is given straightforwardly while avoiding getting bogged down in technical details. Clear comparisons and distinctions with previous work are drawn - for instance with the update rules for several gradient-based methods - MAML and its derivatives - being laid out in standard form (though it might also be nice to echo this with the WarpGrad update rule). The experiments are logically ordered with the initial set covering the standard few-shot learning benchmarks with appropriate baselines (though the results for few-shot tieredImageNet are lacking in this respect), with most essential details given in the main text and full details, including those related to the datasets in question and hyperparameter selection, documented in Appendix H. Meta-learning does seem uniquely well-positioned for tackling the task of continual learning and it*s heartening to see this being explored here with a degree of success - it would be interested to see how its performance compares with standard continual learning methods (such as EWC) on the same task. Particularly impressive is the depth into which the Appendices regarding the experiments, both elaborating on the details given in the main text as well as additional ablation studies. Minor errors: - Page 7: *a neural network that dynamically **adapt** the parameters...* - should be *adapts* - Page 22: *where is the **identify** matrix* - should be *identity* - Page 27: *The task target function is **partition** into 5 sets of **sub-task*** - should be *partitioned* and *sub-tasks*, respectively", "role": "assistant" } ]
[ "The authors propose warped gradient descent (WarpGrad) an optimisation framework for facilitating gradient-based meta-learning.", "WarpGrad interleaves within the learner meta-learned warp-layers that implicitly precondition the gradients of the task-specific parameters during backpropagation.", "In contrast to the linear projection layers employed in T-Nets, warp-layers are unrestricted in form and induce a full Jacobian preconditioning matrix.", "The warp layers are meta-learned in a trajectory-agnostic fashion, thus obviating the need to backpropagate through the gradient steps to compute the updates of their parameters.", "The framework is readily applicable to standard gradient-based meta-learners, and is shown to yield a significant boost in performance on both few-shot and multi-shot learning tasks, as well as to have promising applications to continual learning.", "The paper is well-structured and well-motivated: the problem statement is clearly laid out from the outset, with appropriate context, and explanations supported well diagramatically.", "The idea, and perhaps more so the applications thereof, is seemingly novel and its explanation is given straightforwardly while avoiding getting bogged down in technical details.", "Clear comparisons and distinctions with previous work are drawn - for instance with the update rules for several gradient-based methods - MAML and its derivatives - being laid out in standard form (though it might also be nice to echo this with the WarpGrad update rule).", "The experiments are logically ordered with the initial set covering the standard few-shot learning benchmarks with appropriate baselines (though the results for few-shot tieredImageNet are lacking in this respect), with most essential details given in the main text and full details, including those related to the datasets in question and hyperparameter selection, documented in Appendix H. Meta-learning does seem uniquely well-positioned for tackling the task of continual learning and it*s heartening to see this being explored here with a degree of success - it would be interested to see how its performance compares with standard continual learning methods (such as EWC) on the same task.", "Particularly impressive is the depth into which the Appendices regarding the experiments, both elaborating on the details given in the main text as well as additional ablation studies.", "Minor errors: - Page 7: *a neural network that dynamically **adapt** the parameters...* - should be *adapts* - Page 22: *where is the **identify** matrix* - should be *identity* - Page 27: *The task target function is **partition** into 5 sets of **sub-task*** - should be *partitioned* and *sub-tasks*, respectively" ]
{ "criticism": 1, "example": 1, "importance_and_relevance": 2, "materials_and_methods": 9, "praise": 5, "presentation_and_reporting": 5, "results_and_discussion": 1, "suggestion_and_solution": 3, "total": 11 }
0.090909
0.090909
0.181818
0.818182
0.454545
0.454545
0.090909
0.272727
11
1
1
1
1
1
1
1
1
2.454545
2.391416
0.06313