Dataset schema (column name: type, observed range or number of distinct values):

title: string, lengths 2 to 153
abstract: string, lengths 0 to 2.15k
review: string, lengths 97 to 8.43k
source: string, 4 distinct values
review_src: string, 1 distinct value
thinking_trace: string, 1 distinct value
conversations: list, length 2
review_sentences: sequence, lengths 1 to 61
criteria_count: dict (per-category sentence counts; see the records below)
criticism: float64, 0 to 1
example: float64, 0 to 1
importance_and_relevance: float64, 0 to 1
materials_and_methods: float64, 0 to 1
praise: float64, 0 to 1
presentation_and_reporting: float64, 0 to 1
results_and_discussion: float64, 0 to 1
suggestion_and_solution: float64, 0 to 1
total: int64, 1 to 61
has_criticism: int64, 0 to 1
has_example: int64, 0 to 1
has_importance_and_relevance: int64, 0 to 1
has_materials_and_methods: int64, 0 to 1
has_praise: int64, 0 to 1
has_presentation_and_reporting: int64, 0 to 1
has_results_and_discussion: int64, 0 to 1
has_suggestion_and_solution: int64, 0 to 1
reward_value: float64, 2.13 to 6
reward_value_length_adjusted: float64, -34.13 to 3.73
length_penalty: float64, 0 to 36.4
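Read as a table, the schema above describes one row per (paper, review) pair, with per-category sentence annotation counts, their fractions, binary presence flags, and reward columns. A minimal sketch of loading and inspecting such a dataset with the Hugging Face `datasets` library; the Hub path `some-org/reviewer-reward-data` is a placeholder, not the dataset's actual identifier:

```python
from datasets import load_dataset

# Placeholder Hub path; substitute the real dataset identifier.
ds = load_dataset("some-org/reviewer-reward-data", split="train")

print(ds.features)   # should list the columns described above

row = ds[0]
print(row["title"])
print(row["criteria_count"])   # raw per-category sentence counts
print(row["total"])            # number of review sentences
print(row["reward_value"], row["length_penalty"], row["reward_value_length_adjusted"])
```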
Deep Semi-Supervised Anomaly Detection
Deep approaches to anomaly detection have recently shown promising results over shallow methods on large and complex datasets. Typically anomaly detection is treated as an unsupervised learning problem. In practice however, one may have---in addition to a large set of unlabeled samples---access to a small pool of labeled samples, e.g. a subset verified by some domain expert as being normal or anomalous. Semi-supervised approaches to anomaly detection aim to utilize such labeled samples, but most proposed methods are limited to merely including labeled normal samples. Only a few methods take advantage of labeled anomalies, with existing deep approaches being domain-specific. In this work we present Deep SAD, an end-to-end deep methodology for general semi-supervised anomaly detection. Using an information-theoretic perspective on anomaly detection, we derive a loss motivated by the idea that the entropy of the latent distribution for normal data should be lower than the entropy of the anomalous distribution. We demonstrate in extensive experiments on MNIST, Fashion-MNIST, and CIFAR-10, along with other anomaly detection benchmark datasets, that our method is on par or outperforms shallow, hybrid, and deep competitors, yielding appreciable performance improvements even when provided with only little labeled data.
Summary of the work - The work proposes a new method to find anomalous (out-of-distribution) data when some labeled anomalies are given. - The authors apply an information-theory-derived loss based on the observation that normal (in-distribution) data usually have lower entropy than abnormal data. - The paper conducts extensive experiments on MNIST, Fashion-MNIST, and CIFAR-10, varying the number of labeled anomalies. I think the paper is well written and the experiments seem to support the authors' argument. Unfortunately, this field does not overlap with my research field, and it is hard for me to judge this paper.
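The abstract and review describe the training objective only at a high level: the latent representation of normal data should have lower entropy than that of anomalies. As a rough illustration of that intuition, and not necessarily the paper's actual loss, one simple way to encode it is to pull unlabeled (mostly normal) embeddings toward a fixed center while pushing labeled anomalies away from it; the function and variable names below are illustrative:

```python
import torch

def toy_semi_supervised_ad_loss(z_unlabeled, z_labeled, y_labeled, center, eta=1.0, eps=1e-6):
    """Illustrative objective: keep (presumed) normal points in a compact region
    around `center` and disperse labeled anomalies. y_labeled is +1 for labeled
    normal samples and -1 for labeled anomalies."""
    d_unlabeled = ((z_unlabeled - center) ** 2).sum(dim=1)
    d_labeled = ((z_labeled - center) ** 2).sum(dim=1)
    # Labeled normals are pulled toward the center; labeled anomalies are pushed
    # away by penalizing the inverse of their distance.
    labeled_term = torch.where(y_labeled == 1, d_labeled, 1.0 / (d_labeled + eps))
    return d_unlabeled.mean() + eta * labeled_term.mean()
```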
source: iclr2020
review_src: OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDeep Semi-Supervised Anomaly Detection\nDeep approaches to anomaly detection have recently shown promising results over shallow methods on large and complex datasets. Typically anomaly detection is treated as an unsupervised learning problem. In practice however, one may have---in addition to a large set of unlabeled samples---access to a small pool of labeled samples, e.g. a subset verified by some domain expert as being normal or anomalous. Semi-supervised approaches to anomaly detection aim to utilize such labeled samples, but most proposed methods are limited to merely including labeled normal samples. Only a few methods take advantage of labeled anomalies, with existing deep approaches being domain-specific. In this work we present Deep SAD, an end-to-end deep methodology for general semi-supervised anomaly detection. Using an information-theoretic perspective on anomaly detection, we derive a loss motivated by the idea that the entropy of the latent distribution for normal data should be lower than the entropy of the anomalous distribution. We demonstrate in extensive experiments on MNIST, Fashion-MNIST, and CIFAR-10, along with other anomaly detection benchmark datasets, that our method is on par or outperforms shallow, hybrid, and deep competitors, yielding appreciable performance improvements even when provided with only little labeled data.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nSummary of the work - The work proposes a new method two find anomaly (out of distribution) data when some labeled anomalies are given. - The authors apply information theory-derived loss based on that the normal (in distribution) data usually have lower entropy compared to that of the abnormal data. - The paper conducts extensive experiments on MNIST, Fashion-MNIST, and CIFAR 10, with varying the number of labeled anomlies. I think the paper is well written and the experiment seems to support the authors argument. 
Unfortunately, this field is not overlapped to my research field, and it is hard for me to judge this paper.", "role": "assistant" } ]
[ "Summary of the work - The work proposes a new method two find anomaly (out of distribution) data when some labeled anomalies are given.", "- The authors apply information theory-derived loss based on that the normal (in distribution) data usually have lower entropy compared to that of the abnormal data.", "- The paper conducts extensive experiments on MNIST, Fashion-MNIST, and CIFAR 10, with varying the number of labeled anomlies.", "I think the paper is well written and the experiment seems to support the authors argument.", "Unfortunately, this field is not overlapped to my research field, and it is hard for me to judge this paper." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 4, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 1, "suggestion_and_solution": 0, "total": 5 }
criticism: 0.2, example: 0, importance_and_relevance: 0.4, materials_and_methods: 0.8, praise: 0.4, presentation_and_reporting: 0.2, results_and_discussion: 0.2, suggestion_and_solution: 0
total: 5
has_criticism: 1, has_example: 0, has_importance_and_relevance: 1, has_materials_and_methods: 1, has_praise: 1, has_presentation_and_reporting: 1, has_results_and_discussion: 1, has_suggestion_and_solution: 0
reward_value: 2.2, reward_value_length_adjusted: 1.189924, length_penalty: 1.010076
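For the records shown on this page, the derived columns appear internally consistent: each per-category fraction equals that category's count from criteria_count divided by total, and reward_value_length_adjusted equals reward_value minus length_penalty. A small check against the values of this first record:

```python
counts = {"criticism": 1, "example": 0, "importance_and_relevance": 2,
          "materials_and_methods": 4, "praise": 2, "presentation_and_reporting": 1,
          "results_and_discussion": 1, "suggestion_and_solution": 0}
total = 5

fractions = {k: v / total for k, v in counts.items()}
print(fractions["materials_and_methods"])       # 0.8, as in the row above

reward_value, length_penalty = 2.2, 1.010076
print(round(reward_value - length_penalty, 6))  # 1.189924 == reward_value_length_adjusted
```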
Adaptive Structural Fingerprints for Graph Attention Networks
Many real-world data sets are represented as graphs, such as citation links, social media, and biological interaction. The volatile graph structure makes it non-trivial to employ convolutional neural networks (CNNs) for graph data processing. Recently, the graph attention network (GAT) has proven a promising attempt by combining graph neural networks with the attention mechanism, so as to achieve message passing in graphs with arbitrary structures. However, the attention in GAT is computed mainly based on the similarity between node content, while the structure of the graph remains largely unexploited (except in masking the attention out of one-hop neighbors). In this paper, we propose an *ADaptive Structural Fingerprint* (ADSF) model to fully exploit both topological details of the graph and content features of the nodes. The key idea is to contextualize each node with a weighted, learnable receptive field encoding rich and diverse local graph structures. By doing this, structural interactions between the nodes can be inferred accurately, thus improving the subsequent attention layer as well as the convergence of learning. Furthermore, our model provides a useful platform for different subspaces of node features and various scales of graph structures to "cross-talk" with each other through the learning of multi-head attention, being particularly useful in handling complex real-world data. Encouraging performance is observed on a number of benchmark data sets in node classification.
This paper extends the idea of self-attention in graph NNs, which is typically based on feature similarity between nodes, to also include structural similarity. This is done by computing, for each node, a value for each other node within its receptive field, calculated based on some distance metric from the node (either a Gaussian decay profile, a learned weighting of the number of hops, or the fixed point of a random walk with restart). When evaluating the attention between two nodes, a function that compares the structural values of the two nodes (based on Jaccard similarity) gives a score to the pair, which is then used for calculating the attention weight between them. I found the idea to be elegant and well explained, and overall the paper is well written. Using the structural similarity makes a lot of sense, and the proposed method is both easy to implement and flexible: the structural similarity profile can be learned, which seems important for getting this idea to work in practice. I wonder what happens if one uses only the structural similarity for the attention (without the feature similarity). Are there datasets where this would be sufficient? Even a toy task constructed such that the structure is more informative than the features could be a nice way to further demonstrate the idea of the paper. The experiments show a clear yet small advantage of the proposed method over the conventional attention method (GAT). Overall, this seems to be a solid contribution (even if the empirical results are a bit incremental) and I recommend acceptance.
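The review describes the structural score verbally: each node carries a fingerprint (a weighting over nearby nodes), two fingerprints are compared with a Jaccard-style similarity, and that score feeds into the attention weight. A minimal sketch of that comparison step, assuming fingerprints are non-negative weight vectors over a common node index; the function names and the additive way the score enters the attention logit are assumptions, not the paper's exact formulation:

```python
import numpy as np

def weighted_jaccard(fp_i, fp_j, eps=1e-12):
    """Jaccard-style similarity between two non-negative structural fingerprints."""
    return np.minimum(fp_i, fp_j).sum() / (np.maximum(fp_i, fp_j).sum() + eps)

def combined_attention_logit(feature_logit, fp_i, fp_j, beta=1.0):
    # Feature-based attention score (as in GAT) plus a structural interaction term.
    return feature_logit + beta * weighted_jaccard(fp_i, fp_j)

# Toy usage: two nodes whose fingerprints overlap on a 5-node neighborhood.
fp_a = np.array([1.0, 0.5, 0.0, 0.2, 0.0])
fp_b = np.array([0.8, 0.0, 0.3, 0.2, 0.0])
print(combined_attention_logit(feature_logit=0.7, fp_i=fp_a, fp_j=fp_b))  # 0.7 + 1.0 * 0.5
```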
source: iclr2020
review_src: OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAdaptive Structural Fingerprints for Graph Attention Networks\nMany real-world data sets are represented as graphs, such as citation links, social media, and biological interaction. The volatile graph structure makes it non-trivial to employ convolutional neural networks (CNN*s) for graph data processing. Recently, graph attention network (GAT) has proven a promising attempt by combining graph neural networks with attention mechanism, so as to achieve massage passing in graphs with arbitrary structures. However, the attention in GAT is computed mainly based on the similarity between the node content, while the structures of the graph remains largely unemployed (except in masking the attention out of one-hop neighbors). In this paper, we propose an `````````````````````````````*ADaptive Structural Fingerprint* (ADSF) model to fully exploit both topological details of the graph and content features of the nodes. The key idea is to contextualize each node with a weighted, learnable receptive field encoding rich and diverse local graph structures. By doing this, structural interactions between the nodes can be inferred accurately, thus improving subsequent attention layer as well as the convergence of learning. Furthermore, our model provides a useful platform for different subspaces of node features and various scales of graph structures to ``cross-talk** with each other through the learning of multi-head attention, being particularly useful in handling complex real-world data. Encouraging performance is observed on a number of benchmark data sets in node classification.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper extends the idea of self-attention in graph NNs, which is typically based on feature similarity between nodes, to include also structural similarity. 
This is done by computing for each node a value for each other node within its receptive field, calculated based on some distance metric from the node (either a Gaussian decay profile, or a learned weighting of number of hops distance, of based on fixed point of a random walk with restart). When evaluating the attention between two nodes, a function that compares the structural values between the two nodes (based on Jacard similarity) gives a score to the two nodes, which is further used for calculating the attention weight between them. I found the idea to be elegant and well-explained, and overall the paper is well-written. Using the structural similarity makes a lot of sense, and the proposed method is both easy to implement, and flexible — the structural similarity profile can be learned, which seems important for getting this idea to work in practice. I wonder what happens if one uses only the structural similarity for the attention (without the feature similarity). Are there datasets where this would be sufficient? Even a toy task which is constructed such that the structure is more informative than the features could be a nice way to further demonstrate the idea of the paper. The experiments show a clear yet small advantage to the proposed method over the conventional attention method (GAT). Overall, this seems to be a solid contribution (even if the empirical results are a bit incremental) and I recommend acceptance.", "role": "assistant" } ]
[ "This paper extends the idea of self-attention in graph NNs, which is typically based on feature similarity between nodes, to include also structural similarity.", "This is done by computing for each node a value for each other node within its receptive field, calculated based on some distance metric from the node (either a Gaussian decay profile, or a learned weighting of number of hops distance, of based on fixed point of a random walk with restart).", "When evaluating the attention between two nodes, a function that compares the structural values between the two nodes (based on Jacard similarity) gives a score to the two nodes, which is further used for calculating the attention weight between them.", "I found the idea to be elegant and well-explained, and overall the paper is well-written.", "Using the structural similarity makes a lot of sense, and the proposed method is both easy to implement, and flexible — the structural similarity profile can be learned, which seems important for getting this idea to work in practice.", "I wonder what happens if one uses only the structural similarity for the attention (without the feature similarity).", "Are there datasets where this would be sufficient?", "Even a toy task which is constructed such that the structure is more informative than the features could be a nice way to further demonstrate the idea of the paper.", "The experiments show a clear yet small advantage to the proposed method over the conventional attention method (GAT).", "Overall, this seems to be a solid contribution (even if the empirical results are a bit incremental) and I recommend acceptance." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 5, "praise": 5, "presentation_and_reporting": 2, "results_and_discussion": 4, "suggestion_and_solution": 3, "total": 10 }
criticism: 0, example: 0, importance_and_relevance: 0.3, materials_and_methods: 0.5, praise: 0.5, presentation_and_reporting: 0.2, results_and_discussion: 0.4, suggestion_and_solution: 0.3
total: 10
has_criticism: 0, has_example: 0, has_importance_and_relevance: 1, has_materials_and_methods: 1, has_praise: 1, has_presentation_and_reporting: 1, has_results_and_discussion: 1, has_suggestion_and_solution: 1
reward_value: 2.2, reward_value_length_adjusted: 2.057958, length_penalty: 0.142042
AMRL: Aggregated Memory For Reinforcement Learning
In many partially observable scenarios, Reinforcement Learning (RL) agents must rely on long-term memory in order to learn an optimal policy. We demonstrate that using techniques from NLP and supervised learning fails at RL tasks due to stochasticity from the environment and from exploration. Utilizing our insights on the limitations of traditional memory methods in RL, we propose AMRL, a class of models that can learn better policies with greater sample efficiency and are resilient to noisy inputs. Specifically, our models use a standard memory module to summarize short-term context, and then aggregate all prior states from the standard model without respect to order. We show that this provides advantages both in terms of gradient decay and signal-to-noise ratio over time. Evaluating in Minecraft and maze environments that test long-term memory, we find that our model improves average return by 19% over a baseline that has the same number of parameters and by 9% over a stronger baseline that has far more parameters.
# UPDATE after rebuttal I have changed my score to 8 to reflect the clarifications in the new draft. Summary: This paper presents a family of architectural variants for the recurrent part of an RL controller. The primary claim is that simple components which compute elementwise sum/average/max over the activations seen over time are highly robust to noisy observations (as encountered with many RL environments), as detailed with various empirical and theoretical analyses. By themselves these aggregators are incapable of storing order-dependent information, but by combining an LSTM with an aggregator, pushing half the LSTM activations through the aggregator, and concatenating the other half of the activations with the aggregator output, the resulting output contains both order-dependent and order-independent content. The motivation is very clear (many of the most challenging modern video games used as RL environments clearly have noisy observations, and many timesteps for which no new useful information is observed) and the related work is comprehensive. An increase in training speed and stability, apparently without any major caveats, would be of great interest to any practitioner of Deep Reinforcement Learning. The experiments provided are good, and vary nicely between actual RL runs and theoretical analysis, all of which convinces me that this could well become a standard Deep RL component. I do have a range of questions & requests for clarification (see below) but I believe the experiments as presented, plus some additions before camera ready, will make for a good paper of wide interest. Decision: Accept. I would give this a 7 if I were able to. The idea is very simple (indeed I find it slightly hard to believe no one has tried this before, but I don't know of any references myself) and the results, particularly Figure 4, are compelling. It's nice to see a very approachable, more mathematical analysis as well; it would be good to see more papers proposing new neural components with this kind of rigour. I look forward to trying this approach myself, post publication. Discussion: AMRL-Avg coming out the best in Figure 8 makes a lot of sense to me, as I can see how the average provides a stable signal of unordered information. One thing that really doesn't make sense to me is why Max and Sum would also be good - obviously their SNR / Jacobians are quite similar, but fundamentally there is a risk of huge values building up in the activations (especially with Sum), which at least have the potential to cause numerical instability and provide weird gradient magnitudes (which then mess up moment-tracking optimizers like Adam, etc). There does not seem to be any mention of numerical stability, or whether any considerations need to be taken for this. Maybe we could hope that 'well behaved' internal neuron activations are zero mean, so the average aggregator will never get too big - but is this always the case, at all points in training, in every episode? I appreciate the straight-through estimator might ameliorate this, but it is not made entirely clear to me in the text that this is the reason for using it. Addressing this point would increase the strength of the argument. Given that DNC was determined to be the strongest baseline in a few of the metrics, and that AMRL combines these aggregators with an LSTM, an experiment that I'm surprised is missing would be an AMRL variant containing a DNC instead of an LSTM. Is there a reason why this wasn't attempted?
It would have been good to see a wider set of RL experiments - the Atari suite is well studied, easily available, and there are many open-source implementations into which AMRL could easily be slotted (e.g. IMPALA, openai baselines, etc). The action space for Minecraft is not spelled out clearly - at first I assumed this was the recent Project Malmo release, which I assume to have continuous actions (or at least the ability to move in directions that are not compass points), but while Malmo is mentioned in the paper, the appendix implies that the action space is (north, east, west) etc. I'm aware that there is precedent in the literature (Oh et al 2016) for 'gridworld Minecraft', but I think it would improve the paper to at least acknowledge this in the main text, as I feel even most researchers in 2019 would read the text and assume the game to be analogous to VizDoom / DeepMind Lab, when it really isn't. Note that most likely the proposed method would give an even bigger boost with 'real' Minecraft, as there are even more non-informative observations between the interesting ones, and furthermore I think the environment choice made in this paper is fine, as it still demonstrates the point. A single additional sentence should suffice. Minor points: NTM / DNC were not 'designed with RL in mind' per se; the original NTM paper had no RL and the DNC paper contained both supervised and RL results. Potentially the statement at the bottom of page 1 was supposed to refer mainly to stacked LSTMs - either way I feel it would be better to slightly soften the statement to '...but also for stacked LSTMs and DNCs (cite), which have been widely applied in RL.' In the RL problem definition, the observation function and the state transition function are stated as (presumably?) probability distributions with the range [0,1], but for continuous state / observation spaces it is entirely possible for the probability density at a point to exceed 1. Also in the RL section - the notation should probably also contain the sequence of actions taken throughout the trajectory - this is made clear in the text, but not in the set notation. The square-bracket 'slicing' notation used for slicing and concatenating is neat, but even though I spend all day writing Python it still took me a while to realise what this meant, as I haven't seen this used in typeset maths before. Introducing this notation (as well as the pipe for concatenation) at first usage would help avoid tripping up readers unnecessarily. Note that this pdf caused several office printers I tried it on to choke on later pages, and I get slow scrolling in Chrome on figures such as figure 11 - possibly the graphics are too high resolution (?) Typos: A5, feed-forward section: 'later' should be 'layer' I believe.
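The review's summary pins down the mechanism fairly precisely: half of the LSTM output is passed through an order-invariant aggregator (sum, average, or max over all timesteps so far) and concatenated with the other half. A minimal sketch of one step of that combination using a running average; the even split and the choice of average follow the description above, while the names and shapes are illustrative:

```python
import torch

def amrl_style_combine(lstm_out_t, running_sum, t):
    """One timestep of an AMRL-style head (illustrative only).

    lstm_out_t: LSTM output at time t, shape (batch, hidden)
    running_sum: sum of the aggregated halves over previous steps, shape (batch, hidden // 2)
    t: zero-based timestep index
    """
    h_keep, h_agg = lstm_out_t.chunk(2, dim=-1)    # order-dependent half, aggregated half
    running_sum = running_sum + h_agg
    h_avg = running_sum / (t + 1)                  # order-invariant running average
    combined = torch.cat([h_keep, h_avg], dim=-1)  # fed to the policy/value head
    return combined, running_sum
```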
source: iclr2020
review_src: OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAMRL: Aggregated Memory For Reinforcement Learning\nIn many partially observable scenarios, Reinforcement Learning (RL) agents must rely on long-term memory in order to learn an optimal policy. We demonstrate that using techniques from NLP and supervised learning fails at RL tasks due to stochasticity from the environment and from exploration. Utilizing our insights on the limitations of traditional memory methods in RL, we propose AMRL, a class of models that can learn better policies with greater sample efficiency and are resilient to noisy inputs. Specifically, our models use a standard memory module to summarize short-term context, and then aggregate all prior states from the standard model without respect to order. We show that this provides advantages both in terms of gradient decay and signal-to-noise ratio over time. Evaluating in Minecraft and maze environments that test long-term memory, we find that our model improves average return by 19% over a baseline that has the same number of parameters and by 9% over a stronger baseline that has far more parameters.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n# UPDATE after rebuttal I have changed my score to 8 to reflect the clarifications in the new draft. Summary: This paper presents a family of architectural variants for the recurrent part of an RL controller. The primary claim is that simple components which compute elementwise sum/average/max over the activations seen over time are highly robust to noisy observations (as encountered with many RL environments), as detailed with various empirical and theoretical analyses. By themselves these aggregators are incapable of storing order dependent information, but by combining an LSTM with an aggregator, and pushing half the LSTM activations through the aggregator, and concatenating the the other half of the activations with the aggregator output, the resulting output contains order dependent and order independent content. 
The motivation is very clear (many of the most challenging modern video games used as RL environments clearly have noisy observations, and many timesteps for which no new useful information is observed) and the related work is comprehensive. An increase in training speed and stability, apparently without any major caveats, would be of great interest to any practitioner of Deep Reinforcement Learning. The experiments provided are good, and vary nicely between actual RL runs and theoretical analysis, all of which convinces me that this could well become a standard Deep RL component. I do have a range of questions & requests for clarification (see below) but I believe the experiments as presented, plus some additions before camera ready, will make for a good paper of wide interest. Desicion: Accept. I would give this a 7 if I was able to. The idea is very simple (indeed I find it slightly hard to believe no one has tried this before, but I don*t know of any references myself) and the results, particularly Figure 4, are compelling. It*s nice to see a very approachable more mathematical analysis as well, it would be good to see more papers proposing new neural components with this kind of rigour. I look forward to trying this approach myself, post publication. Discussion: AMRL-Avg coming out the best in Figure 8 makes a lot of sense to me, as I can see how average provides a stable signal of unordered information. One thing that really doesn*t make sense to me is why Max and Sum would also be good - obviously their SNR / Jacobians are quite similar, but fundamentally there is a risk of huge values being built up in activations (especially with Sum), which at least have the potential to cause numerical instability, provide weird gradient magnitudes (which then mess up moment tracking optimizers like Adam, etc). Thre does not seem to be any mention of numerical stability, or whether any considerations ned to be taken for this. Maybe we could hope that *well behaved* internal neron activations are zero mean, so the average aggregator will never get too big - but is this always the case, at all points in training, in every episode? I appreciate the straight through estimator might ameliorate this, but it is not made entirely clear to me in the text that this is the reason for using it. Addressing this point would increase the strength of the argument. Given that DNC was determined to be the strongest baseline in a few of the metrics, and that AMRL combines these aggregators with LSTM, an experiment that I*m surprised is missing would be to make AMRL containing a DNC instead of an LSTM. Is there a reason why this wasn*t attempted? It would have been good to see a wider set of RL experiments - the Atari suite is well studied, easily available, and there are many open source implementations to which AMRL could easily be slotted into (eg IMPALA, openai baselines, etc). The action space for Minecraft is not spelled out clearly - at first I assumed this was the recent project Malmo release, which I assume to have continuous actions (or at least, the ability to move in directions that are not compass points), but while Malmo is mentioned in the paper, the appendix implies that the action space is (north, east, west) etc. 
I*m aware that there is precedent in the literature (Oh et al 2016) for *gridworld minecraft*, but I think it would improve the paper to at least acknowledge this in the main text, as I feel even most researchers in 2019 would read the text and assume the game to be analogous to VizDoom / DeepMind Lab, when it really isn*t. Note that most likely the proposed method would give an even bigger boost with *real* minecraft, as there are even more non-informative observations between the interesting ones, and furthermore I think the environment choice made in this paper is fine, as it still demonstrates the point. A single additional sentence should suffice. Minor points: NTM / DNC were not *designed with RL in mind* per se, the original NTM paper had no RL and the DNC paper contained both supervised and RL results. Potentially the statement at the bottom of page 1 was supposed to refer mainly to stacked LSTMs - either way I feel it would be better to slightly soften the statement to *...but also for stacked LSTMs and DNCs (cite), which have been widely applied in RL.* In the RL problem definition, the observation function and the state transition function are stated as (presumably?) probability distributions with the range [0,1], but for continuous state / observation spaces it is entirely possible for the probability density at a point to exceed 1. Also in the RL section - the notation of should probably also contain the sequence of actions taken throughout the trajectory - this is made clear in the text, but not in the set notation. The square bracket *slicing* notation used for slicing and concatenating is neat, but even though I spend all day writing python it still took me a while to realise what this meant, as I haven*t seen this used in typeset maths before. Introducing this notation (as well as the pipe for concatenation) at first usage would help avoid tripping up readers unnecessarily. Note that this pdf caused several office printers I tried it on to choke on later pages, and I get slow scrolling in Chrome on figures such as figure 11 - possibly the graphics are too high resolution (?) typos: A5, feed forward section: *later* should be *layer* I believe.", "role": "assistant" } ]
[ "# UPDATE after rebuttal I have changed my score to 8 to reflect the clarifications in the new draft.", "Summary: This paper presents a family of architectural variants for the recurrent part of an RL controller.", "The primary claim is that simple components which compute elementwise sum/average/max over the activations seen over time are highly robust to noisy observations (as encountered with many RL environments), as detailed with various empirical and theoretical analyses.", "By themselves these aggregators are incapable of storing order dependent information, but by combining an LSTM with an aggregator, and pushing half the LSTM activations through the aggregator, and concatenating the the other half of the activations with the aggregator output, the resulting output contains order dependent and order independent content.", "The motivation is very clear (many of the most challenging modern video games used as RL environments clearly have noisy observations, and many timesteps for which no new useful information is observed) and the related work is comprehensive.", "An increase in training speed and stability, apparently without any major caveats, would be of great interest to any practitioner of Deep Reinforcement Learning.", "The experiments provided are good, and vary nicely between actual RL runs and theoretical analysis, all of which convinces me that this could well become a standard Deep RL component.", "I do have a range of questions & requests for clarification (see below) but I believe the experiments as presented, plus some additions before camera ready, will make for a good paper of wide interest.", "Desicion: Accept.", "I would give this a 7 if I was able to.", "The idea is very simple (indeed I find it slightly hard to believe no one has tried this before, but I don*t know of any references myself) and the results, particularly Figure 4, are compelling.", "It*s nice to see a very approachable more mathematical analysis as well, it would be good to see more papers proposing new neural components with this kind of rigour.", "I look forward to trying this approach myself, post publication.", "Discussion: AMRL-Avg coming out the best in Figure 8 makes a lot of sense to me, as I can see how average provides a stable signal of unordered information.", "One thing that really doesn*t make sense to me is why Max and Sum would also be good - obviously their SNR / Jacobians are quite similar, but fundamentally there is a risk of huge values being built up in activations (especially with Sum), which at least have the potential to cause numerical instability, provide weird gradient magnitudes (which then mess up moment tracking optimizers like Adam, etc).", "Thre does not seem to be any mention of numerical stability, or whether any considerations ned to be taken for this.", "Maybe we could hope that *well behaved* internal neron activations are zero mean, so the average aggregator will never get too big - but is this always the case, at all points in training, in every episode?", "I appreciate the straight through estimator might ameliorate this, but it is not made entirely clear to me in the text that this is the reason for using it.", "Addressing this point would increase the strength of the argument.", "Given that DNC was determined to be the strongest baseline in a few of the metrics, and that AMRL combines these aggregators with LSTM, an experiment that I*m surprised is missing would be to make AMRL containing a DNC instead of an LSTM.", "Is there a reason why this wasn*t 
attempted?", "It would have been good to see a wider set of RL experiments - the Atari suite is well studied, easily available, and there are many open source implementations to which AMRL could easily be slotted into (eg IMPALA, openai baselines, etc).", "The action space for Minecraft is not spelled out clearly - at first I assumed this was the recent project Malmo release, which I assume to have continuous actions (or at least, the ability to move in directions that are not compass points), but while Malmo is mentioned in the paper, the appendix implies that the action space is (north, east, west) etc.", "I*m aware that there is precedent in the literature (Oh et al 2016) for *gridworld minecraft*, but I think it would improve the paper to at least acknowledge this in the main text, as I feel even most researchers in 2019 would read the text and assume the game to be analogous to VizDoom / DeepMind Lab, when it really isn*t. Note that most likely the proposed method would give an even bigger boost with *real* minecraft, as there are even more non-informative observations between the interesting ones, and furthermore I think the environment choice made in this paper is fine, as it still demonstrates the point.", "A single additional sentence should suffice.", "Minor points: NTM / DNC were not *designed with RL in mind* per se, the original NTM paper had no RL and the DNC paper contained both supervised and RL results.", "Potentially the statement at the bottom of page 1 was supposed to refer mainly to stacked LSTMs - either way I feel it would be better to slightly soften the statement to *...but also for stacked LSTMs and DNCs (cite), which have been widely applied in RL.", "* In the RL problem definition, the observation function and the state transition function are stated as (presumably?)", "probability distributions with the range [0,1], but for continuous state / observation spaces it is entirely possible for the probability density at a point to exceed 1.", "Also in the RL section - the notation of should probably also contain the sequence of actions taken throughout the trajectory - this is made clear in the text, but not in the set notation.", "The square bracket *slicing* notation used for slicing and concatenating is neat, but even though I spend all day writing python it still took me a while to realise what this meant, as I haven*t seen this used in typeset maths before.", "Introducing this notation (as well as the pipe for concatenation) at first usage would help avoid tripping up readers unnecessarily.", "Note that this pdf caused several office printers I tried it on to choke on later pages, and I get slow scrolling in Chrome on figures such as figure 11 - possibly the graphics are too high resolution (?)", "typos: A5, feed forward section: *later* should be *layer* I believe." ]
{ "criticism": 8, "example": 3, "importance_and_relevance": 4, "materials_and_methods": 15, "praise": 9, "presentation_and_reporting": 12, "results_and_discussion": 9, "suggestion_and_solution": 13, "total": 34 }
criticism: 0.235294, example: 0.088235, importance_and_relevance: 0.117647, materials_and_methods: 0.441176, praise: 0.264706, presentation_and_reporting: 0.352941, results_and_discussion: 0.264706, suggestion_and_solution: 0.382353
total: 34
has_criticism: 1, has_example: 1, has_importance_and_relevance: 1, has_materials_and_methods: 1, has_praise: 1, has_presentation_and_reporting: 1, has_results_and_discussion: 1, has_suggestion_and_solution: 1
reward_value: 2.147059, reward_value_length_adjusted: -4.812993, length_penalty: 6.960052
Adversarially robust transfer learning
Transfer learning, in which a network is trained on one task and re-purposed on another, is often used to produce neural network classifiers when data is scarce or full-scale training is too costly. When the goal is to produce a model that is not only accurate but also adversarially robust, data scarcity and computational limitations become even more cumbersome. We consider robust transfer learning, in which we transfer not only performance but also robustness from a source model to a target domain. We start by observing that robust networks contain robust feature extractors. By training classifiers on top of these feature extractors, we produce new models that inherit the robustness of their parent networks. We then consider the case of *fine tuning* a network by re-training end-to-end in the target domain. When using lifelong learning strategies, this process preserves the robustness of the source network while achieving high accuracy. By using such strategies, it is possible to produce accurate and robust models with little data, and without the cost of adversarial training. Additionally, we can improve the generalization of adversarially trained models, while maintaining their robustness.
Summary ------- This paper addresses the problem of performing robust transfer learning. A first contribution of the paper is to compare robust and classic training with respect to the usual validation accuracy and robustness to adversarial attacks on the CIFAR task. Then, the same comparison is made on a transfer learning task. The transfer learning setting is then completed by studying transfer from ImageNet-based models, with particular attention to the low-data regime and to training deeper networks on top of the feature extractor. An analysis of robust features is provided, and finally the authors study the interest of Learning without Forgetting strategies for providing robust transfer. The tendency is to obtain the best performance from robust-trained source models having a good validation accuracy. Overall ------ The paper presents a study of robust transfer learning that can be interesting for practitioners to know the type of results that can be obtained by robust transfer learning. However, I feel that the results obtained are rather expected and the paper does not provide an interesting methodological contribution that could help to develop robust transfer training. Comments --------- The results obtained in Sections 3, 4 and 6 are rather expected and similar. I think that the paper could benefit from reducing these 3 sections into only one section where the results obtained are summarized in one big table and two or three figures, for example - the complete set of results can then be reported in the supplementary section. Then, if the contribution of the paper is to propose to focus on robust transfer learning including a Learning without Forgetting strategy, the authors should focus more on this part and better analyze the behavior of learning. In particular, the combination between distillation and robust training is certainly interesting, and trying to propose a methodological framework for doing robust training in this context would certainly result in a more significant contribution. How to constrain the feature extraction layers, how to make use of them with distillation, and additionally what additional constraints/additions can be made to learning problem (3) to improve robust transfer are some important questions. So far, the contribution appears to me rather limited for ICLR. If we restrict to the experimental comparisons made, they are restricted to particular trainings and datasets with specific PGD attacks. The contribution would have been stronger if different types of adversarial attacks with different parameters had been studied and analyzed.
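The abstract and review both point to fine-tuning with a lifelong-learning / Learning-without-Forgetting term so that the target-domain model does not drift away from the robust source features. A rough sketch of such a combined objective, assuming a frozen copy of the robust source feature extractor is available; the specific penalty (feature-space MSE) and the weight lam are assumptions for illustration, not the formulation of the paper's learning problem (3):

```python
import torch
import torch.nn.functional as F

def robust_finetune_loss(features_new, features_source_frozen, logits, targets, lam=0.1):
    """Target-domain task loss plus a retention term that keeps the fine-tuned
    features close to the (adversarially robust) source features."""
    task_loss = F.cross_entropy(logits, targets)
    retention_loss = F.mse_loss(features_new, features_source_frozen.detach())
    return task_loss + lam * retention_loss
```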
source: iclr2020
review_src: OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAdversarially robust transfer learning\nTransfer learning, in which a network is trained on one task and re-purposed on another, is often used to produce neural network classifiers when data is scarce or full-scale training is too costly. When the goal is to produce a model that is not only accurate but also adversarially robust, data scarcity and computational limitations become even more cumbersome. We consider robust transfer learning, in which we transfer not only performance but also robustness from a source model to a target domain. We start by observing that robust networks contain robust feature extractors. By training classifiers on top of these feature extractors, we produce new models that inherit the robustness of their parent networks. We then consider the case of *fine tuning* a network by re-training end-to-end in the target domain. When using lifelong learning strategies, this process preserves the robustness of the source network while achieving high accuracy. By using such strategies, it is possible to produce accurate and robust models with little data, and without the cost of adversarial training. Additionally, we can improve the generalization of adversarially trained models, while maintaining their robustness.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nSummary ------- This paper addresses the problem of performing robust transfer learning. A first contribution of the paper is to robust and classic training with respect to usual validation accuracy and robustness to adversarial attacks on the CIFAR task. Then, the same comparison is made on a transfer learning task. The transfer learning setting is then completed by studying transfer from ImageNet-based models with a particular attention to low-data regime and training deeper networks on top of the feature extractor. An analysis of robust features is provided and finally the authors studies the interest of Learning without Forgetting strategies to provide robust transfer. 
The tendency s to obtain the Best performance from robust-trained source models having a good validation accuracy. Overall ------ The paper presents a study of robust transfer learning that can be interesting for practitioners to know the type of results that can be obtained by robust transfer learning. However, I feel that the results obtained are rather expected and the paper does not provide some interesting methodological contribution that could help to develop robust transfer training. Comments --------- The results obtained in Section 3, 4 and 6 are rather expected and similar. I think that the paper could benefit by reducing these 3 sections in only one section where the results obtained can be summarized in one big table and two or three figures for example - the complete set of results can then be reported in the supplementary section. Then, if the contribution of the paper is to propose to focus on robust transfer learning including a Learning without Forgetting strategy, the authors should then focus more on this part and analyze better the behavior of learning. In particular, the combination between distillation and robust training is certainly interesting, and trying to propose a methodological framework for doing robust training in this context would certainly result in a more significant contribution. How to constrain the feature extraction layers, how to make use of them with distillation and additionally what are the additional contraints/additions that can be made to learning problem (3) to improve robust transfer are some important questions. So far, the contribution appears to me rather limited for ICLR. If we restrict to the part related to experimental comparisons made, they are restricted to particular trainings and datasets with specific PGD attacks. The contribution would have been stronger is different types of adversarial attacks with different parameters have been studied and analyzed.", "role": "assistant" } ]
[ "Summary ------- This paper addresses the problem of performing robust transfer learning.", "A first contribution of the paper is to robust and classic training with respect to usual validation accuracy and robustness to adversarial attacks on the CIFAR task.", "Then, the same comparison is made on a transfer learning task.", "The transfer learning setting is then completed by studying transfer from ImageNet-based models with a particular attention to low-data regime and training deeper networks on top of the feature extractor.", "An analysis of robust features is provided and finally the authors studies the interest of Learning without Forgetting strategies to provide robust transfer.", "The tendency s to obtain the Best performance from robust-trained source models having a good validation accuracy.", "Overall ------ The paper presents a study of robust transfer learning that can be interesting for practitioners to know the type of results that can be obtained by robust transfer learning.", "However, I feel that the results obtained are rather expected and the paper does not provide some interesting methodological contribution that could help to develop robust transfer training.", "Comments --------- The results obtained in Section 3, 4 and 6 are rather expected and similar.", "I think that the paper could benefit by reducing these 3 sections in only one section where the results obtained can be summarized in one big table and two or three figures for example - the complete set of results can then be reported in the supplementary section.", "Then, if the contribution of the paper is to propose to focus on robust transfer learning including a Learning without Forgetting strategy, the authors should then focus more on this part and analyze better the behavior of learning.", "In particular, the combination between distillation and robust training is certainly interesting, and trying to propose a methodological framework for doing robust training in this context would certainly result in a more significant contribution.", "How to constrain the feature extraction layers, how to make use of them with distillation and additionally what are the additional contraints/additions that can be made to learning problem (3) to improve robust transfer are some important questions.", "So far, the contribution appears to me rather limited for ICLR.", "If we restrict to the part related to experimental comparisons made, they are restricted to particular trainings and datasets with specific PGD attacks.", "The contribution would have been stronger is different types of adversarial attacks with different parameters have been studied and analyzed." ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 8, "materials_and_methods": 10, "praise": 5, "presentation_and_reporting": 1, "results_and_discussion": 5, "suggestion_and_solution": 4, "total": 16 }
0.1875
0
0.5
0.625
0.3125
0.0625
0.3125
0.25
16
1
0
1
1
1
1
1
1
2.25
2.107958
0.142042
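Note on the numeric columns that close each record: they appear to be mechanically derived from the criteria_count annotation. Each of the eight per-category floats matches category_count / total, the has_* flags look like indicators of a non-zero count, reward_value matches the sum of the eight normalized values, and reward_value_length_adjusted matches reward_value minus length_penalty (e.g. 2.25 - 0.142042 = 2.107958 in the record above). The following is a minimal sketch of that apparent derivation, with hypothetical function and field names; the formula for length_penalty itself is not recoverable from these rows and is treated as a given input.

```python
# Sketch (assumed, not the dataset's actual builder code) of how the derived
# columns in each record appear to relate to the criteria_count annotation.
CATEGORIES = [
    "criticism", "example", "importance_and_relevance", "materials_and_methods",
    "praise", "presentation_and_reporting", "results_and_discussion",
    "suggestion_and_solution",
]

def derive_columns(criteria_count: dict, length_penalty: float) -> dict:
    total = criteria_count["total"]
    row = {}
    for cat in CATEGORIES:
        row[cat] = criteria_count[cat] / total          # e.g. 3 / 16 = 0.1875
        row[f"has_{cat}"] = int(criteria_count[cat] > 0)
    row["total"] = total
    row["reward_value"] = sum(row[cat] for cat in CATEGORIES)
    row["length_penalty"] = length_penalty              # formula not recoverable here
    row["reward_value_length_adjusted"] = row["reward_value"] - length_penalty
    return row

# Reproduces the record above: reward_value 2.25, adjusted 2.107958.
example_row = derive_columns(
    {"criticism": 3, "example": 0, "importance_and_relevance": 8,
     "materials_and_methods": 10, "praise": 5, "presentation_and_reporting": 1,
     "results_and_discussion": 5, "suggestion_and_solution": 4, "total": 16},
    length_penalty=0.142042,
)
```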
Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples
Few-shot classification refers to learning a classifier for new classes given only a few examples. While a plethora of models have emerged to tackle it, we find the procedure and datasets that are used to assess their progress lacking. To address this limitation, we propose Meta-Dataset: a new benchmark for training and evaluating models that is large-scale, consists of diverse datasets, and presents more realistic tasks. We experiment with popular baselines and meta-learners on Meta-Dataset, along with a competitive method that we propose. We analyze performance as a function of various characteristics of test tasks and examine the models’ ability to leverage diverse training sources for improving their generalization. We also propose a new set of baselines for quantifying the benefit of meta-learning in Meta-Dataset. Our extensive experimentation has uncovered important research challenges and we hope to inspire work in these directions.
This paper proposes a really interesting direction for few-shot and meta-learning, the concept of a *meta-dataset*, which can be used for more realistic evaluation of algorithms. The main contributions are: 1) A more realistic environment for training and testing few-shot learners. 2) Experimental evaluation of popular models. 3) Analyses of whether different models benefit from more data. 4) A new meta-learner. I think this work is an interesting empirical paper which should be supported by solid experimental results. My concern about this paper in its current form is that the layout/structure of the paper needs to be improved, for example: considering moving some of the key results from the appendix section into the main text, and removing repeated results from the main text by shortening the early sections.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMeta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples\nFew-shot classification refers to learning a classifier for new classes given only a few examples. While a plethora of models have emerged to tackle it, we find the procedure and datasets that are used to assess their progress lacking. To address this limitation, we propose Meta-Dataset: a new benchmark for training and evaluating models that is large-scale, consists of diverse datasets, and presents more realistic tasks. We experiment with popular baselines and meta-learners on Meta-Dataset, along with a competitive method that we propose. We analyze performance as a function of various characteristics of test tasks and examine the models’ ability to leverage diverse training sources for improving their generalization. We also propose a new set of baselines for quantifying the benefit of meta-learning in Meta-Dataset. Our extensive experimentation has uncovered important research challenges and we hope to inspire work in these directions.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposed a really interesting direction for few-shot and meta-learning, the concept of a *meta-dataset*, which can be used for more realistic evaluation of algorithms. The main contributions are: 1) A more realistic environment for training and testing few-shot learners. 2) Experimental evaluation of popular models 3) Analyses of whether different models benefit from more data, 4) A new meta-learner I think this work is an interesting empirical paper which should be supported by solid experimental results. My concern about this paper in its current form is that the layout/structure of the paper needs to be improved, for example: Considering putting some of the key results in the appendix section in the main text Removal of repeating results from the main text by shortening the early sections", "role": "assistant" } ]
[ "This paper proposed a really interesting direction for few-shot and meta-learning, the concept of a *meta-dataset*, which can be used for more realistic evaluation of algorithms.", "The main contributions are: 1) A more realistic environment for training and testing few-shot learners.", "2) Experimental evaluation of popular models 3) Analyses of whether different models benefit from more data, 4) A new meta-learner I think this work is an interesting empirical paper which should be supported by solid experimental results.", "My concern about this paper in its current form is that the layout/structure of the paper needs to be improved, for example: Considering putting some of the key results in the appendix section in the main text Removal of repeating results from the main text by shortening the early sections" ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 3, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 2, "suggestion_and_solution": 3, "total": 4 }
0
0
0.75
0.75
0.5
0.25
0.5
0.75
4
0
0
1
1
1
1
1
1
3.5
2.221623
1.278377
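Note on the conversations column: in each record it appears to be assembled from the other text fields, with the user turn concatenating a fixed reviewing instruction, the title, and the abstract, and the assistant turn concatenating the shared thinking_trace with the review. The sketch below shows that apparent construction; the exact instruction string and whitespace are copied from the records shown here and may not match the dataset's actual builder.

```python
# Sketch (assumed) of how the `conversations` column appears to be built
# from the title, abstract, review, and thinking_trace fields of a record.
INSTRUCTION = (
    "You are a member of the scientific community tasked with peer review. "
    "Review the following paper title and abstract.\n### Title + Abstract\n"
)

def build_conversation(title: str, abstract: str, review: str, thinking_trace: str) -> list[dict]:
    user_msg = f"{INSTRUCTION}{title}\n{abstract}"
    # The records above separate the trace and the review with blank lines.
    assistant_msg = f"{thinking_trace}\n\n\n\n{review}"
    return [
        {"content": user_msg, "role": "user"},
        {"content": assistant_msg, "role": "assistant"},
    ]
```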
CM3: Cooperative Multi-goal Multi-stage Multi-agent Reinforcement Learning
A variety of cooperative multi-agent control problems require agents to achieve individual goals while contributing to collective success. This multi-goal multi-agent setting poses difficulties for recent algorithms, which primarily target settings with a single global reward, due to two new challenges: efficient exploration for learning both individual goal attainment and cooperation for others' success, and credit-assignment for interactions between actions and goals of different agents. To address both challenges, we restructure the problem into a novel two-stage curriculum, in which single-agent goal attainment is learned prior to learning multi-agent cooperation, and we derive a new multi-goal multi-agent policy gradient with a credit function for localized credit assignment. We use a function augmentation scheme to bridge value and policy functions across the curriculum. The complete architecture, called CM3, learns significantly faster than direct adaptations of existing algorithms on three challenging multi-goal multi-agent problems: cooperative navigation in difficult formations, negotiating multi-vehicle lane changes in the SUMO traffic simulator, and strategic cooperation in a Checkers environment.
This paper presents a method for adding credit assignment to multi-agent RL and proposes a way of adding a curriculum to the training process. The best heuristics and structures to incorporate in the modeling, learning, and exploration parts of multi-agent RL are still largely unknown and this paper explores some reasonable new ones. In most of the tasks it is evaluated on in Figure 5, this approach adds a slight improvement to the SOTA, and I think an exciting direction of future work is to continue pushing on multi-agent RL in even more complex environments where the SOTA breaks down in even worse ways.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nCM3: Cooperative Multi-goal Multi-stage Multi-agent Reinforcement Learning\nA variety of cooperative multi-agent control problems require agents to achieve individual goals while contributing to collective success. This multi-goal multi-agent setting poses difficulties for recent algorithms, which primarily target settings with a single global reward, due to two new challenges: efficient exploration for learning both individual goal attainment and cooperation for others* success, and credit-assignment for interactions between actions and goals of different agents. To address both challenges, we restructure the problem into a novel two-stage curriculum, in which single-agent goal attainment is learned prior to learning multi-agent cooperation, and we derive a new multi-goal multi-agent policy gradient with a credit function for localized credit assignment. We use a function augmentation scheme to bridge value and policy functions across the curriculum. The complete architecture, called CM3, learns significantly faster than direct adaptations of existing algorithms on three challenging multi-goal multi-agent problems: cooperative navigation in difficult formations, negotiating multi-vehicle lane changes in the SUMO traffic simulator, and strategic cooperation in a Checkers environment.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents a method for adding credit assignment to multi-agent RL and proposes a way of adding a curriculum to the training process. The best heuristics and structures to incorporate in the modeling, learning, and exploration parts of multi-agent RL are still largely unknown and this paper explores some reasonable new ones. In most of the tasks this is evaluated on in Figure 5 this approach adds a slight improvement to the SOTA and I think an exciting direction of future work is to continue pushing on multi-agent RL in even more complex envirnoments where the SOTA break down in even worse ways.", "role": "assistant" } ]
[ "This paper presents a method for adding credit assignment to multi-agent RL and proposes a way of adding a curriculum to the training process.", "The best heuristics and structures to incorporate in the modeling, learning, and exploration parts of multi-agent RL are still largely unknown and this paper explores some reasonable new ones.", "In most of the tasks this is evaluated on in Figure 5 this approach adds a slight improvement to the SOTA and I think an exciting direction of future work is to continue pushing on multi-agent RL in even more complex envirnoments where the SOTA break down in even worse ways." ]
{ "criticism": 1, "example": 1, "importance_and_relevance": 2, "materials_and_methods": 3, "praise": 1, "presentation_and_reporting": 0, "results_and_discussion": 0, "suggestion_and_solution": 0, "total": 3 }
0.333333
0.333333
0.666667
1
0.333333
0
0
0
3
1
1
1
1
1
0
0
0
2.666667
1.088424
1.578243
Learning Space Partitions for Nearest Neighbor Search
Space partitions of R^d underlie a vast and important class of fast nearest neighbor search (NNS) algorithms. Inspired by recent theoretical work on NNS for general metric spaces (Andoni et al. 2018b,c), we develop a new framework for building space partitions reducing the problem to balanced graph partitioning followed by supervised classification. We instantiate this general approach with the KaHIP graph partitioner (Sanders and Schulz 2013) and neural networks, respectively, to obtain a new partitioning procedure called Neural Locality-Sensitive Hashing (Neural LSH). On several standard benchmarks for NNS (Aumuller et al. 2017), our experiments show that the partitions obtained by Neural LSH consistently outperform partitions found by quantization-based and tree-based methods as well as classic, data-oblivious LSH.
The authors propose a learnable space partitioning nearest-neighbour algorithm where the model learns a hierarchical space partition of the data (using graph partitioning) and learns to predict which buckets a query will reside in (using a learnable model). The paper is an interesting combination of classical optimization and deep learning. The authors make a concerted effort to compare to a range of space-partitioning-based nearest neighbour algorithms, however crucially none of these algorithms are state-of-the-art in comparison to some of the best PQ NNS or hierarchical Navigable Small World Graphs. In terms of writing, the model description requires significant overhaul. I had to re-read it several times and sketch out what exactly you are doing. I would recommend an algorithm box to summarize both the hierarchical graph partitioning *and* the learnable query network. Section 4.3 (*additional experiments*) contains some of the most interesting parts of the paper; I would recommend promoting these experimental results to the main paper. I would recommend accepting this paper because I think it may lead to state-of-the-art NNS algorithms with theoretical guarantees, especially with a more powerful query network.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning Space Partitions for Nearest Neighbor Search\nSpace partitions of underlie a vast and important class of fast nearest neighbor search (NNS) algorithms. Inspired by recent theoretical work on NNS for general metric spaces (Andoni et al. 2018b,c), we develop a new framework for building space partitions reducing the problem to balanced graph partitioning followed by supervised classification. We instantiate this general approach with the KaHIP graph partitioner (Sanders and Schulz 2013) and neural networks, respectively, to obtain a new partitioning procedure called Neural Locality-Sensitive Hashing (Neural LSH). On several standard benchmarks for NNS (Aumuller et al. 2017), our experiments show that the partitions obtained by Neural LSH consistently outperform partitions found by quantization-based and tree-based methods as well as classic, data-oblivious LSH.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors propose a learnable space partitioning nearest-neighbour algorithm where the model learns a hierarchical space partition of the data (using graph partitioning) and learns to predict which buckets a query will reside in (using a learnable model). The paper is an interesting combination of classical optimization and deep learning. The authors make a concerted effort to compare to a range of space-partitioning-based nearest neighbour algorithms, however crucially none of these algorithms are state-of-the-art in comparison to some of the best PQ NNS or hierarchical Navigable Small World Graphs. In terms of writing, the model description requires significant overhaul. I had to re-read it several times and sketch out what exactly you are doing. I would recommend an algorithm box to summarize both the hierarchical graph partitioning *and* the learnable query network. 4.3 *additional experiments* is some of the most interesting parts of the paper, I would recommend promoting these experimental results to the main paper. 
I would recommend accepting this paper because I think it may lead to state-of-the-art NNS algorithms with theoretical guarantees, especially with a more powerful query network.", "role": "assistant" } ]
[ "The authors propose a learnable space partitioning nearest-neighbour algorithm where the model learns a hierarchical space partition of the data (using graph partitioning) and learns to predict which buckets a query will reside in (using a learnable model).", "The paper is an interesting combination of classical optimization and deep learning.", "The authors make a concerted effort to compare to a range of space-partitioning-based nearest neighbour algorithms, however crucially none of these algorithms are state-of-the-art in comparison to some of the best PQ NNS or hierarchical Navigable Small World Graphs.", "In terms of writing, the model description requires significant overhaul.", "I had to re-read it several times and sketch out what exactly you are doing.", "I would recommend an algorithm box to summarize both the hierarchical graph partitioning *and* the learnable query network.", "4.3 *additional experiments* is some of the most interesting parts of the paper, I would recommend promoting these experimental results to the main paper.", "I would recommend accepting this paper because I think it may lead to state-of-the-art NNS algorithms with theoretical guarantees, especially with a more powerful query network." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 5, "praise": 2, "presentation_and_reporting": 3, "results_and_discussion": 1, "suggestion_and_solution": 5, "total": 8 }
0
0
0.25
0.625
0.25
0.375
0.125
0.625
8
0
0
1
1
1
1
1
1
2.25
1.855439
0.394561
Locality and Compositionality in Zero-Shot Learning
In this work we study locality and compositionality in the context of learning representations for Zero Shot Learning (ZSL). In order to well-isolate the importance of these properties in learned representations, we impose the additional constraint that, differently from most recent work in ZSL, no pre-training on different datasets (e.g. ImageNet) is performed. The results of our experiment show how locality, in terms of small parts of the input, and compositionality, i.e. how well can the learned representations be expressed as a function of a smaller vocabulary, are both deeply related to generalization and motivate the focus on more local-aware models in future research directions for representation learning.
UPDATE: My recommendation has been borderline because the discussion of the paper about the nature of locality and compositionality seems to be less in-depth than I would have expected, but if the authors will revise the submission to shift the focus of the paper to more focused on analysis and evaluation and weaken the claims that their model exhibits locality and compositionality, I would lean towards an accept as the empirical evaluation is extensive. ---- Summary: This paper aims to investigate the role of locality and compositionality in zero-shot learning. The authors propose a novel evaluation setup that differs from the original zero-shot learning framework in that the model is not allowed to be pretrained on another dataset. The authors propose several methods for learning and visualizing local representations. The overall problem the paper tackles is quite ambitious which involve many parts: defining locality and compositionality, defining constraints for enforcing locality and compositionality, evaluating the existence of locality and compositionality, and evaluating model performance. Because of the extensive nature of the research problem, the coverage of each piece of this research problem in this paper could have been more thorough: for example, I would be really excited for a more in-depth analysis on methods for enforcing locality, but the authors consider only one method, which is the auxiliary loss, and this method does not seem to be applicable for datasets that do not have attribute labels. Given the formalism in section 2.1, I would furthermore be excited about a more in-depth analysis on methods for enforcing compositionality, but the authors only investigate compositionality from the perspective of a weighted average. As a result, the claims about whether enforcing locality and compositionality helps with generalization seem weak, as the authors only do not seem to consider different ways of achieving locality and compositionality -- it is not clear whether their method for enforcing locality (with auxiliary losses) or compositionality (by explicitly performing a weighted average) is representative of enforcing locality and compositionality in general. Furthermore, locality seems to be more of a statement about *ignoring* information (discussed below), which the authors do not explore. As a result, my recommendation is borderline. Perhaps the paper could benefit by reframing it with a more specific focus, with more emphasis on the evaluation, which the paper seems to do well. I would recommend that the authors either (1) perform a more extensive analysis of methods for enforcing locality and compositionality or (2) shift the focus of the paper to more focused on analysis and evaluation and weaken the claims that their model exhibits locality and compositionality, as there is not enough evidence to tell whether their method for enforcing locality and compositionality is the best way of doing so. Strengths: - The paper is well-motivated and tackles an important problem. - The empirical evaluation is extensive. - The framework of not relying on pretrained features is a novel contribution. Weaknesses: - There does not seem to be a comparison between a compositional model and a non-compositional model in section 5.3. Would the authors be able to provide such an analysis? Otherwise it is difficult to tell exactly how and to what extent compositionality helps. 
- I would have assumed that enforcing locality should actually be a problem of *ignoring* information, rather than *predicting* labeled information. However, the auxiliary loss introduction in section 4.1 only seems to enforce that local features (which by default are local from the CNN) are predictive of attributes/class, rather than enforcing that these features ignore information that is not local to the particular image patch they are modeling. Would the authors be able to comment on this point, as well as provide an empirical analysis of how *unpredictive* the features of their model are on image patches elsewhere?
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLocality and Compositionality in Zero-Shot Learning\nIn this work we study locality and compositionality in the context of learning representations for Zero Shot Learning (ZSL). In order to well-isolate the importance of these properties in learned representations, we impose the additional constraint that, differently from most recent work in ZSL, no pre-training on different datasets (e.g. ImageNet) is performed. The results of our experiment show how locality, in terms of small parts of the input, and compositionality, i.e. how well can the learned representations be expressed as a function of a smaller vocabulary, are both deeply related to generalization and motivate the focus on more local-aware models in future research directions for representation learning.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nUPDATE: My recommendation has been borderline because the discussion of the paper about the nature of locality and compositionality seems to be less in-depth than I would have expected, but if the authors will revise the submission to shift the focus of the paper to more focused on analysis and evaluation and weaken the claims that their model exhibits locality and compositionality, I would lean towards an accept as the empirical evaluation is extensive. ---- Summary: This paper aims to investigate the role of locality and compositionality in zero-shot learning. The authors propose a novel evaluation setup that differs from the original zero-shot learning framework in that the model is not allowed to be pretrained on another dataset. The authors propose several methods for learning and visualizing local representations. The overall problem the paper tackles is quite ambitious which involve many parts: defining locality and compositionality, defining constraints for enforcing locality and compositionality, evaluating the existence of locality and compositionality, and evaluating model performance. 
Because of the extensive nature of the research problem, the coverage of each piece of this research problem in this paper could have been more thorough: for example, I would be really excited for a more in-depth analysis on methods for enforcing locality, but the authors consider only one method, which is the auxiliary loss, and this method does not seem to be applicable for datasets that do not have attribute labels. Given the formalism in section 2.1, I would furthermore be excited about a more in-depth analysis on methods for enforcing compositionality, but the authors only investigate compositionality from the perspective of a weighted average. As a result, the claims about whether enforcing locality and compositionality helps with generalization seem weak, as the authors only do not seem to consider different ways of achieving locality and compositionality -- it is not clear whether their method for enforcing locality (with auxiliary losses) or compositionality (by explicitly performing a weighted average) is representative of enforcing locality and compositionality in general. Furthermore, locality seems to be more of a statement about *ignoring* information (discussed below), which the authors do not explore. As a result, my recommendation is borderline. Perhaps the paper could benefit by reframing it with a more specific focus, with more emphasis on the evaluation, which the paper seems to do well. I would recommend that the authors either (1) perform a more extensive analysis of methods for enforcing locality and compositionality or (2) shift the focus of the paper to more focused on analysis and evaluation and weaken the claims that their model exhibits locality and compositionality, as there is not enough evidence to tell whether their method for enforcing locality and compositionality is the best way of doing so. Strengths: - The paper is well-motivated and tackles an important problem. - The empirical evaluation is extensive. - The framework of not relying on pretrained features is a novel contribution. Weaknesses: - There does not seem to be a comparison between a compositional model and a non-compositional model in section 5.3. Would the authors be able to provide such an analysis? Otherwise it is difficult to tell exactly how and to what extent compositionality helps. - I would have assumed that enforcing locality should actually be a problem of *ignoring* information, rather than *predicting* labeled information. However, the auxiliary loss introduction in section 4.1 only seems to enforce that local features (which by default are local from the CNN) are predictive of attributes/class, rather than enforcing that these features ignore information that is not local to the particular image patch they are modeling. Would the authors be able to comment on this point, as well as provide an empirical analysis of how *unpredictive* the features of their model are on image patches elsewhere?", "role": "assistant" } ]
[ "UPDATE: My recommendation has been borderline because the discussion of the paper about the nature of locality and compositionality seems to be less in-depth than I would have expected, but if the authors will revise the submission to shift the focus of the paper to more focused on analysis and evaluation and weaken the claims that their model exhibits locality and compositionality, I would lean towards an accept as the empirical evaluation is extensive.", "---- Summary: This paper aims to investigate the role of locality and compositionality in zero-shot learning.", "The authors propose a novel evaluation setup that differs from the original zero-shot learning framework in that the model is not allowed to be pretrained on another dataset.", "The authors propose several methods for learning and visualizing local representations.", "The overall problem the paper tackles is quite ambitious which involve many parts: defining locality and compositionality, defining constraints for enforcing locality and compositionality, evaluating the existence of locality and compositionality, and evaluating model performance.", "Because of the extensive nature of the research problem, the coverage of each piece of this research problem in this paper could have been more thorough: for example, I would be really excited for a more in-depth analysis on methods for enforcing locality, but the authors consider only one method, which is the auxiliary loss, and this method does not seem to be applicable for datasets that do not have attribute labels.", "Given the formalism in section 2.1, I would furthermore be excited about a more in-depth analysis on methods for enforcing compositionality, but the authors only investigate compositionality from the perspective of a weighted average.", "As a result, the claims about whether enforcing locality and compositionality helps with generalization seem weak, as the authors only do not seem to consider different ways of achieving locality and compositionality -- it is not clear whether their method for enforcing locality (with auxiliary losses) or compositionality (by explicitly performing a weighted average) is representative of enforcing locality and compositionality in general.", "Furthermore, locality seems to be more of a statement about *ignoring* information (discussed below), which the authors do not explore.", "As a result, my recommendation is borderline.", "Perhaps the paper could benefit by reframing it with a more specific focus, with more emphasis on the evaluation, which the paper seems to do well.", "I would recommend that the authors either (1) perform a more extensive analysis of methods for enforcing locality and compositionality or (2) shift the focus of the paper to more focused on analysis and evaluation and weaken the claims that their model exhibits locality and compositionality, as there is not enough evidence to tell whether their method for enforcing locality and compositionality is the best way of doing so.", "Strengths: - The paper is well-motivated and tackles an important problem.", "- The empirical evaluation is extensive.", "- The framework of not relying on pretrained features is a novel contribution.", "Weaknesses: - There does not seem to be a comparison between a compositional model and a non-compositional model in section 5.3.", "Would the authors be able to provide such an analysis?", "Otherwise it is difficult to tell exactly how and to what extent compositionality helps.", "- I would have assumed that enforcing locality should 
actually be a problem of *ignoring* information, rather than *predicting* labeled information.", "However, the auxiliary loss introduction in section 4.1 only seems to enforce that local features (which by default are local from the CNN) are predictive of attributes/class, rather than enforcing that these features ignore information that is not local to the particular image patch they are modeling.", "Would the authors be able to comment on this point, as well as provide an empirical analysis of how *unpredictive* the features of their model are on image patches elsewhere?" ]
{ "criticism": 7, "example": 0, "importance_and_relevance": 5, "materials_and_methods": 15, "praise": 4, "presentation_and_reporting": 3, "results_and_discussion": 4, "suggestion_and_solution": 8, "total": 21 }
0.333333
0
0.238095
0.714286
0.190476
0.142857
0.190476
0.380952
21
1
0
1
1
1
1
1
1
2.190476
1.180401
1.010076
MetaPix: Few-Shot Video Retargeting
We address the task of unsupervised retargeting of human actions from one video to another. We consider the challenging setting where only a few frames of the target is available. The core of our approach is a conditional generative model that can transcode input skeletal poses (automatically extracted with an off-the-shelf pose estimator) to output target frames. However, it is challenging to build a universal transcoder because humans can appear wildly different due to clothing and background scene geometry. Instead, we learn to adapt – or personalize – a universal generator to the particular human and background in the target. To do so, we make use of meta-learning to discover effective strategies for on-the-fly personalization. One significant benefit of meta-learning is that the personalized transcoder naturally enforces temporal coherence across its generated frames; all frames contain consistent clothing and background geometry of the target. We experiment on in-the-wild internet videos and images and show our approach improves over widely-used baselines for the task.
This paper proposes a novel and interesting task that learns to retarget human actions with few-shot samples. The overall pipeline is built by applying a meta-learning strategy on a pre-trained retargeting module. It follows a conditional generator and discriminator structure that leverages few-shot frames to retarget the action of the source video. The approach is technically sound. The evaluations are compared with two baseline methods, Pix2PixHD and Posewarp. Their evaluations are satisfactory and convincing; the results demonstrate some improvements over baseline models regarding both the selected metrics and the visual results. Though the proposed problem is novel and somewhat interesting, there are also several weaknesses of this work: - The novelty of methodology is somewhat limited. It is more about merging several state-of-the-art modules in different tasks to tackle the few-shot retargeting problem. Though efforts may be needed to make the pipeline work, the overall contribution is not significant. - The improvement obtained with the proposed MetaPix module is not significant in the few-shot setting according to Table 1. Additionally, could the authors provide some visualized results for different values of K, which would be interesting for analysis.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMetaPix: Few-Shot Video Retargeting\nWe address the task of unsupervised retargeting of human actions from one video to another. We consider the challenging setting where only a few frames of the target is available. The core of our approach is a conditional generative model that can transcode input skeletal poses (automatically extracted with an off-the-shelf pose estimator) to output target frames. However, it is challenging to build a universal transcoder because humans can appear wildly different due to clothing and background scene geometry. Instead, we learn to adapt – or personalize – a universal generator to the particular human and background in the target. To do so, we make use of meta-learning to discover effective strategies for on-the-fly personalization. One significant benefit of meta-learning is that the personalized transcoder naturally enforces temporal coherence across its generated frames; all frames contain consistent clothing and background geometry of the target. We experiment on in-the-wild internet videos and images and show our approach improves over widely-used baselines for the task.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a novel and interesting task that learn to retarget human actions with few-shot samples. The overall pipeline is built by applying meta-learning strategy on pre-trained retargeting module. It follows a conditional generator and discriminator structure that leverages few-shot frames to retarget the action of source video. The approach is technically sound. The evaluations are compared with two baseline methods, Pix2PixHD and Posewarp. Their evaluations are satisfactory and convincing, the results demonstrate some improvements over baseline models regarding both the selected metrics and the visualizing performance. Though the proposed problem is novel and somewhat interesting, there are also several weaknesses of this work: - The novelty of methodology is somewhat limited. 
It is more about merging several state-of-the-art modules in different tasks to tackle the few-shot retargeting problem. Though efforts may be needed to make the pipeline work, the overall contribution is not significant. - The improvement obtained with proposed MetaPix module is not significant in few-shot setting according to Table 1. Additionally, could the authors provide some visualizing results for different K number, which would be interesting for analysis.", "role": "assistant" } ]
[ "This paper proposes a novel and interesting task that learn to retarget human actions with few-shot samples.", "The overall pipeline is built by applying meta-learning strategy on pre-trained retargeting module.", "It follows a conditional generator and discriminator structure that leverages few-shot frames to retarget the action of source video.", "The approach is technically sound.", "The evaluations are compared with two baseline methods, Pix2PixHD and Posewarp.", "Their evaluations are satisfactory and convincing, the results demonstrate some improvements over baseline models regarding both the selected metrics and the visualizing performance.", "Though the proposed problem is novel and somewhat interesting, there are also several weaknesses of this work: - The novelty of methodology is somewhat limited.", "It is more about merging several state-of-the-art modules in different tasks to tackle the few-shot retargeting problem.", "Though efforts may be needed to make the pipeline work, the overall contribution is not significant.", "- The improvement obtained with proposed MetaPix module is not significant in few-shot setting according to Table 1.", "Additionally, could the authors provide some visualizing results for different K number, which would be interesting for analysis." ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 10, "praise": 4, "presentation_and_reporting": 2, "results_and_discussion": 2, "suggestion_and_solution": 1, "total": 11 }
0.272727
0
0.272727
0.909091
0.363636
0.181818
0.181818
0.090909
11
1
0
1
1
1
1
1
1
2.272727
2.209598
0.06313
Improving Neural Language Generation with Spectrum Control
Recent Transformer-based models such as Transformer-XL and BERT have achieved huge success on various natural language processing tasks. However, contextualized embeddings at the output layer of these powerful models tend to degenerate and occupy an anisotropic cone in the vector space, which is called the representation degeneration problem. In this paper, we propose a novel spectrum control approach to address this degeneration problem. The core idea of our method is to directly guide the spectra training of the output embedding matrix with a slow-decaying singular value prior distribution through a reparameterization framework. We show that our proposed method encourages isotropy of the learned word representations while maintains the modeling power of these contextual neural models. We further provide a theoretical analysis and insight on the benefit of modeling singular value distribution. We demonstrate that our spectrum control method outperforms the state-of-the-art Transformer-XL modeling for language model, and various Transformer-based models for machine translation, on common benchmark datasets for these tasks.
The authors propose to apply Spectrum Control regularization to the embedding weight matrices in NLP problems such as language modeling and neural machine translation. Spectrum Control regularization was originally proposed and applied to GANs (Jiang et al. 2019). The authors motivate the approach by examining the singular values of the embedding weight matrices, although I am not convinced that this is such a big issue. In terms of experimental results, the authors show a very slight improvement over strong baseline models, which further suggests that regularizing the singular values of embedding matrices is not very important. Overall, the paper is well written; however, the contribution is very marginal.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nImproving Neural Language Generation with Spectrum Control\nRecent Transformer-based models such as Transformer-XL and BERT have achieved huge success on various natural language processing tasks. However, contextualized embeddings at the output layer of these powerful models tend to degenerate and occupy an anisotropic cone in the vector space, which is called the representation degeneration problem. In this paper, we propose a novel spectrum control approach to address this degeneration problem. The core idea of our method is to directly guide the spectra training of the output embedding matrix with a slow-decaying singular value prior distribution through a reparameterization framework. We show that our proposed method encourages isotropy of the learned word representations while maintains the modeling power of these contextual neural models. We further provide a theoretical analysis and insight on the benefit of modeling singular value distribution. We demonstrate that our spectrum control method outperforms the state-of-the-art Transformer-XL modeling for language model, and various Transformer-based models for machine translation, on common benchmark datasets for these tasks.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nAuthors propose to apply Spectrum control regularization to the embedding of weight matrices in NLP problems such as language modeling and neural machine translation. Spectrum Control Regularization was originally proposed and applied to GANs (Jiang et al 2019) The author motivate the approach by showing that the singular values of embedding weight matrices, although I am not convinced that it is such a big issue. In terms of experimental results authors show a very slight improvement over strong baseline models, that further shows an evidence that regularization singular values of embedding matrices is not very important. Overall the paper is written well, however the contribution is very marginal.", "role": "assistant" } ]
[ "Authors propose to apply Spectrum control regularization to the embedding of weight matrices in NLP problems such as language modeling and neural machine translation.", "Spectrum Control Regularization was originally proposed and applied to GANs (Jiang et al 2019) The author motivate the approach by showing that the singular values of embedding weight matrices, although I am not convinced that it is such a big issue.", "In terms of experimental results authors show a very slight improvement over strong baseline models, that further shows an evidence that regularization singular values of embedding matrices is not very important.", "Overall the paper is written well, however the contribution is very marginal." ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 3, "praise": 1, "presentation_and_reporting": 1, "results_and_discussion": 1, "suggestion_and_solution": 0, "total": 4 }
0.75
0
0.5
0.75
0.25
0.25
0.25
0
4
1
0
1
1
1
1
1
0
2.75
1.471623
1.278377
Neural Module Networks for Reasoning over Text
Answering compositional questions that require multiple steps of reasoning against text is challenging, especially when they involve discrete, symbolic operations. Neural module networks (NMNs) learn to parse such questions as executable programs composed of learnable modules, performing well on synthetic visual QA domains. However, we find that it is challenging to learn these models for non-synthetic questions on open-domain text, where a model needs to deal with the diversity of natural language and perform a broader range of reasoning. We extend NMNs by: (a) introducing modules that reason over a paragraph of text, performing symbolic reasoning (such as arithmetic, sorting, counting) over numbers and dates in a probabilistic and differentiable manner; and (b) proposing an unsupervised auxiliary loss to help extract arguments associated with the events in text. Additionally, we show that a limited amount of heuristically-obtained question program and intermediate module output supervision provides sufficient inductive bias for accurate learning. Our proposed model significantly outperforms state-of-the-art models on a subset of the DROP dataset that poses a variety of reasoning challenges that are covered by our modules.
This paper proposes a model and a training framework for question answering which requires compositional reasoning over the input text, by building executable neural modules and training based on additional auxiliary supervision signals. I really like this paper and the approach taken: tackling complex QA tasks is an important topic and current state-of-the-art methods rely heavily on lexical similarities as mentioned in the paper. I think learning differentiable programs is a good direction to address this space. However in IMHO, the paper as it stands is premature for publication, the primary reason being the lack of strong experimental evidence that where the strength of this approach is compared to other methods compared. To be specific, the results in Table 2 are very close between MTMSN and the BERT-based model proposed and it's not clear if the difference is because (1) the model is generally better; (2) this is a subset of the dataset that this model performs better; (3) this is because of the additional supervision signals provided (e.g. the results of Fig 2a without the aux-sup is almost the same as MTMSN) and if we provided similar auxiliary supervision for other models they would equally do well; (4) due to lack of reporting variance and error-bars across runs we see a small increase which may not be significant; ... Again, the paper is very interesting, but I don't think it's clear and thorough to experimentally prove that the overall approach is working better.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNeural Module Networks for Reasoning over Text\nAnswering compositional questions that require multiple steps of reasoning against text is challenging, especially when they involve discrete, symbolic operations. Neural module networks (NMNs) learn to parse such questions as executable programs composed of learnable modules, performing well on synthetic visual QA domains. However, we find that it is challenging to learn these models for non-synthetic questions on open-domain text, where a model needs to deal with the diversity of natural language and perform a broader range of reasoning. We extend NMNs by: (a) introducing modules that reason over a paragraph of text, performing symbolic reasoning (such as arithmetic, sorting, counting) over numbers and dates in a probabilistic and differentiable manner; and (b) proposing an unsupervised auxiliary loss to help extract arguments associated with the events in text. Additionally, we show that a limited amount of heuristically-obtained question program and intermediate module output supervision provides sufficient inductive bias for accurate learning. Our proposed model significantly outperforms state-of-the-art models on a subset of the DROP dataset that poses a variety of reasoning challenges that are covered by our modules.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a model and a training framework for question answering which requires compositional reasoning over the input text, by building executable neural modules and training based on additional auxiliary supervision signals. I really like this paper and the approach taken: tackling complex QA tasks is an important topic and current state-of-the-art methods rely heavily on lexical similarities as mentioned in the paper. I think learning differentiable programs is a good direction to address this space. 
However in IMHO, the paper as it stands is premature for publication, the primary reason being the lack of strong experimental evidence that where the strength of this approach is compared to other methods compared. To be specific, the results in Table 2 are very close between MTMSN and the BERT-based model proposed and it*s not clear if the difference is because (1) the model is generally better; (2) this is a subset of the dataset that this model performs better; (3) this is because of the additional supervision signals provided (e.g. the results of Fig 2a without the aux-sup is almost the same as MTMSN) and if we provided similar auxiliary supervision for other models they would equally do well; (4) due to lack of reporting variance and error-bars across runs we see a small increase which may not be significant; ... Again, the paper is very interesting, but I don*t think it*s clear and thorough to experimentally prove that the overall approach is working better.", "role": "assistant" } ]
[ "This paper proposes a model and a training framework for question answering which requires compositional reasoning over the input text, by building executable neural modules and training based on additional auxiliary supervision signals.", "I really like this paper and the approach taken: tackling complex QA tasks is an important topic and current state-of-the-art methods rely heavily on lexical similarities as mentioned in the paper.", "I think learning differentiable programs is a good direction to address this space.", "However in IMHO, the paper as it stands is premature for publication, the primary reason being the lack of strong experimental evidence that where the strength of this approach is compared to other methods compared.", "To be specific, the results in Table 2 are very close between MTMSN and the BERT-based model proposed and it*s not clear if the difference is because (1) the model is generally better; (2) this is a subset of the dataset that this model performs better; (3) this is because of the additional supervision signals provided (e.g.", "the results of Fig 2a without the aux-sup is almost the same as MTMSN) and if we provided similar auxiliary supervision for other models they would equally do well; (4) due to lack of reporting variance and error-bars across runs we see a small increase which may not be significant; ... Again, the paper is very interesting, but I don*t think it*s clear and thorough to experimentally prove that the overall approach is working better." ]
{ "criticism": 3, "example": 2, "importance_and_relevance": 3, "materials_and_methods": 5, "praise": 3, "presentation_and_reporting": 1, "results_and_discussion": 2, "suggestion_and_solution": 0, "total": 6 }
0.5
0.333333
0.5
0.833333
0.5
0.166667
0.333333
0
6
1
1
1
1
1
1
1
0
3.166667
2.393328
0.773339
Variational Hetero-Encoder Randomized GANs for Joint Image-Text Modeling
For bidirectional joint image-text modeling, we develop variational hetero-encoder (VHE) randomized generative adversarial network (GAN) that integrates a probabilistic text decoder, probabilistic image encoder, and GAN into a coherent end-to-end multi-modality learning framework. VHE randomized GAN (VHE-GAN) encodes an image to decode its associated text, and feeds the variational posterior as the source of randomness into the GAN image generator. We plug three off-the-shelf modules, including a deep topic model, a ladder-structured image encoder, and StackGAN++, into VHE-GAN, which already achieves competitive performance. This further motivates the development of VHE-raster-scan-GAN that generates photo-realistic images in not only a multi-scale low-to-high-resolution manner, but also a hierarchical-semantic coarse-to-fine fashion. By capturing and relating hierarchical semantic and visual concepts with end-to-end training, VHE-raster-scan-GAN achieves state-of-the-art performance in a wide variety of image-text multi-modality learning and generation tasks. PyTorch code is provided.
This paper proposes a combined architecture for image-text modeling. Though the proposed architecture is extremely detailed, the authors explain clearly the overarching concepts and methods used, within limited space. The experimental results are extremely strong, especially on sub-domains where conditional generative models have historically struggled such as images with angular, global features - often mechanical or human constructed objects. "Computers" and "cars" images in Figure 2 show this quite clearly. The model also functions for tagging and annotating images - performing well compared to models designed "only" for this task. The authors have done a commendable job adding detail, further analysis, and experiments in the appendix of the paper. Combined with the included code release, this paper should be of interest to many. My chief criticisms come from the density of the paper - while it is difficult to dilute such a complex model to 8 pages, and the included appendix clarifies many questions in the text body, it would be worth further passes through the main paper with a specific focus on clarity and brevity, to aid in the accessibility of this work. As usual, more experiments are always welcome, and given the strengths of GAN based generators for faces a text based facial image generator could have been a great addition. The existing experiments are more than sufficient for proof-of-concept though. Finally, though this version of the paper includes code directly in a google drive link it would be ideal for the final version to reference a github code link - again to aid access to interested individuals. Being able to read the code online, without downloading and opening locally can be nice, along with other benefits from open source release. However the authors should release the code however they see fit, this is more of a personal preference on the part of this reviewer. To improve my score, the primary changes would be more editing and re-writing, focused on clarity and brevity of the text in the core paper.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nVariational Hetero-Encoder Randomized GANs for Joint Image-Text Modeling\nFor bidirectional joint image-text modeling, we develop variational hetero-encoder (VHE) randomized generative adversarial network (GAN) that integrates a probabilistic text decoder, probabilistic image encoder, and GAN into a coherent end-to-end multi-modality learning framework. VHE randomized GAN (VHE-GAN) encodes an image to decode its associated text, and feeds the variational posterior as the source of randomness into the GAN image generator. We plug three off-the-shelf modules, including a deep topic model, a ladder-structured image encoder, and StackGAN++, into VHE-GAN, which already achieves competitive performance. This further motivates the development of VHE-raster-scan-GAN that generates photo-realistic images in not only a multi-scale low-to-high-resolution manner, but also a hierarchical-semantic coarse-to-fine fashion. By capturing and relating hierarchical semantic and visual concepts with end-to-end training, VHE-raster-scan-GAN achieves state-of-the-art performance in a wide variety of image-text multi-modality learning and generation tasks. PyTorch code is provided.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a combined architecture for image-text modeling. Though the proposed architecture is extremely detailed, the authors explain clearly the overarching concepts and methods used, within limited space. The experimental results are extremely strong, especially on sub-domains where conditional generative models have historically struggled such as images with angular, global features - often mechanical or human constructed objects. *Computers* and *cars* images in Figure 2 show this quite clearly. The model also functions for tagging and annotating images - performing well compared to models designed *only* for this task. The authors have done a commendable job adding detail, further analysis, and experiments in the appendix of the paper. 
Combined with the included code release, this paper should be of interest to many. My chief criticisms come for the density of the paper - while it is difficult to dilute such a complex model to 8 pages, and the included appendix clarifies many questions in the text body, it would be worth further passes through the main paper with a specific focus on clarity and brevity, to aid in the accessibility of this work. As usual, more experiments are always welcome, and given the strengths of GAN based generators for faces a text based facial image generator could have been a great addition. The existing experiments are more than sufficient for proof-of-concept though. Finally, though this version of the paper includes code directly in a google drive link it would be ideal for the final version to reference a github code link - again to aid access to interested individuals. Being able to read the code online, without downloading and opening locally can be nice, along with other benefits from open source release. However the authors should release the code however they see fit, this is more of a personal preference on the part of this reviewer. To improve my score, the primary changes would be more editing and re-writing, focused on clarity and brevity of the text in the core paper.", "role": "assistant" } ]
[ "This paper proposes a combined architecture for image-text modeling.", "Though the proposed architecture is extremely detailed, the authors explain clearly the overarching concepts and methods used, within limited space.", "The experimental results are extremely strong, especially on sub-domains where conditional generative models have historically struggled such as images with angular, global features - often mechanical or human constructed objects.", "*Computers* and *cars* images in Figure 2 show this quite clearly.", "The model also functions for tagging and annotating images - performing well compared to models designed *only* for this task.", "The authors have done a commendable job adding detail, further analysis, and experiments in the appendix of the paper.", "Combined with the included code release, this paper should be of interest to many.", "My chief criticisms come for the density of the paper - while it is difficult to dilute such a complex model to 8 pages, and the included appendix clarifies many questions in the text body, it would be worth further passes through the main paper with a specific focus on clarity and brevity, to aid in the accessibility of this work.", "As usual, more experiments are always welcome, and given the strengths of GAN based generators for faces a text based facial image generator could have been a great addition.", "The existing experiments are more than sufficient for proof-of-concept though.", "Finally, though this version of the paper includes code directly in a google drive link it would be ideal for the final version to reference a github code link - again to aid access to interested individuals.", "Being able to read the code online, without downloading and opening locally can be nice, along with other benefits from open source release.", "However the authors should release the code however they see fit, this is more of a personal preference on the part of this reviewer.", "To improve my score, the primary changes would be more editing and re-writing, focused on clarity and brevity of the text in the core paper." ]
{ "criticism": 0, "example": 1, "importance_and_relevance": 1, "materials_and_methods": 10, "praise": 6, "presentation_and_reporting": 7, "results_and_discussion": 1, "suggestion_and_solution": 7, "total": 14 }
0
0.071429
0.071429
0.714286
0.428571
0.5
0.071429
0.5
14
0
1
1
1
1
1
1
1
2.357143
2.34136
0.015782
BERTScore: Evaluating Text Generation with BERT
We propose BERTScore, an automatic evaluation metric for text generation. Analogously to common metrics, BERTScore computes a similarity score for each token in the candidate sentence with each token in the reference sentence. However, instead of exact matches, we compute token similarity using contextual embeddings. We evaluate using the outputs of 363 machine translation and image captioning systems. BERTScore correlates better with human judgments and provides stronger model selection performance than existing metrics. Finally, we use an adversarial paraphrase detection task and show that BERTScore is more robust to challenging examples compared to existing metrics.
This paper presents a simple application of BERT-based contextual embeddings for evaluation of text generation models such as machine translation and image caption generation. An extensive set of experiments have been carried out to show that the proposed BERTScore metric achieves better correlation with human judgments than the existing metrics. Overall, the paper is well-written and the motivations are clear. However, I am not sure about the technical novelty of the paper as the proposed approach is a natural application of BERT along with traditional cosine similarity measures and precision, recall, F1-based computations, and simple IDF-based importance weighting. Other comments: - It would be interesting to see how the proposed metric performs to evaluate paraphrase generation and text simplification models as the models need to follow specific constraints such as semantic equivalence, novelty, simplicity etc. with respect to source and reference sentences. - Another limitation of the proposed metric is memory and time complexity as it takes relatively more time to evaluate the sentences compared to BLEU, as authors acknowledged in Section 5.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nBERTScore: Evaluating Text Generation with BERT\nWe propose BERTScore, an automatic evaluation metric for text generation. Analogously to common metrics, BERTScore computes a similarity score for each token in the candidate sentence with each token in the reference sentence. However, instead of exact matches, we compute token similarity using contextual embeddings. We evaluate using the outputs of 363 machine translation and image captioning systems. BERTScore correlates better with human judgments and provides stronger model selection performance than existing metrics. Finally, we use an adversarial paraphrase detection task and show that BERTScore is more robust to challenging examples compared to existing metrics.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents a simple application of BERT-based contextual embeddings for evaluation of text generation models such as machine translation and image caption generation. An extensive set of experiments have been carried out to show that the proposed BERTScore metric achieves better correlation with human judgments than the existing metrics. Overall, the paper is well-written and the motivations are clear. However, I am not sure about the technical novelty of the paper as the proposed approach is a natural application of BERT along with traditional cosine similarity measures and precision, recall, F1-based computations, and simple IDF-based importance weighting. Other comments: - It would be interesting to see how the proposed metric performs to evaluate paraphrase generation and text simplification models as the models need to follow specific constraints such as semantic equivalence, novelty, simplicity etc. with respect to source and reference sentences. - Another limitation of the proposed metric is memory and time complexity as it takes relatively more time to evaluate the sentences compared to BLEU, as authors acknowledged in Section 5.", "role": "assistant" } ]
[ "This paper presents a simple application of BERT-based contextual embeddings for evaluation of text generation models such as machine translation and image caption generation.", "An extensive set of experiments have been carried out to show that the proposed BERTScore metric achieves better correlation with human judgments than the existing metrics.", "Overall, the paper is well-written and the motivations are clear.", "However, I am not sure about the technical novelty of the paper as the proposed approach is a natural application of BERT along with traditional cosine similarity measures and precision, recall, F1-based computations, and simple IDF-based importance weighting.", "Other comments: - It would be interesting to see how the proposed metric performs to evaluate paraphrase generation and text simplification models as the models need to follow specific constraints such as semantic equivalence, novelty, simplicity etc.", "with respect to source and reference sentences.", "- Another limitation of the proposed metric is memory and time complexity as it takes relatively more time to evaluate the sentences compared to BLEU, as authors acknowledged in Section 5." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 5, "praise": 2, "presentation_and_reporting": 3, "results_and_discussion": 1, "suggestion_and_solution": 1, "total": 7 }
0.285714
0
0.285714
0.714286
0.285714
0.428571
0.142857
0.142857
7
1
0
1
1
1
1
1
1
2.285714
1.717547
0.568167
Capsules with Inverted Dot-Product Attention Routing
We introduce a new routing algorithm for capsule networks, in which a child capsule is routed to a parent based only on agreement between the parent's state and the child's vote. Unlike previously proposed routing algorithms, the parent's ability to reconstruct the child is not explicitly taken into account to update the routing probabilities. This simplifies the routing procedure and improves performance on benchmark datasets such as CIFAR-10 and CIFAR-100. The new mechanism 1) designs routing via inverted dot-product attention; 2) imposes Layer Normalization as normalization; and 3) replaces sequential iterative routing with concurrent iterative routing. Besides outperforming existing capsule networks, our model performs at-par with a powerful CNN (ResNet-18), using less than 25% of the parameters. On a different task of recognizing digits from overlayed digit images, the proposed capsule model performs favorably against CNNs given the same number of layers and neurons per layer. We believe that our work raises the possibility of applying capsule networks to complex real-world tasks.
This paper presents a new simpler routing mechanism for capsule networks and achieves good performance on real world data sets making use of this new capsule structure along with a ResNet backbone. Strong performance on the cifar10 and cifar100 datasets are presented and the network outperforms earlier versions of capsule networks. This new structure also performs well on an augmented MNIST dataset of overlapping digits (similar to the one used by Sabour et al 2017). Overall the paper is well written and presents solid results. The paper also presents a thorough comparison of two earlier versions of capsules which is a worthwhile contribution in its own right. The paper could be improved by clearing up a few ambiguities: - is the learning rate schedule the same for all three models? in figure 4 it looks like the learning rate is decayed at two distinct points for your model, but only one distinct point for both the EM and Dynamic routing models. - "Notably, the prediction becomes random guess when the iteration number increases to 5." This sentence is a little confusing. Do you mean when the iteration number the performance is equivalent to not random assignments? - This new algorithm requires that the capsules in L+1 have initialized poses with which to compare agreement between the poses in L. This initial value seems like it may greatly affect the performance of the model. In the paper it is set to 0 and not expanded upon. It would be interesting to see if randomizing this value, or learning a bias for it would affect performance. - Unlike the two previous versions of capsules, the inverted dot product capsules show in figure 4 sudden huge decreases in test accuracy while training. These moments seem to be overcome quite quickly and the model ends up outperforming the other two. But it would be worth mentioning this behavior and perhaps attempting to explain it.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nCapsules with Inverted Dot-Product Attention Routing\nWe introduce a new routing algorithm for capsule networks, in which a child capsule is routed to a parent based only on agreement between the parent*s state and the child*s vote. Unlike previously proposed routing algorithms, the parent*s ability to reconstruct the child is not explicitly taken into account to update the routing probabilities. This simplifies the routing procedure and improves performance on benchmark datasets such as CIFAR-10 and CIFAR-100. The new mechanism 1) designs routing via inverted dot-product attention; 2) imposes Layer Normalization as normalization; and 3) replaces sequential iterative routing with concurrent iterative routing. Besides outperforming existing capsule networks, our model performs at-par with a powerful CNN (ResNet-18), using less than 25% of the parameters. On a different task of recognizing digits from overlayed digit images, the proposed capsule model performs favorably against CNNs given the same number of layers and neurons per layer. We believe that our work raises the possibility of applying capsule networks to complex real-world tasks.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents a new simpler routing mechanism for capsule networks and achieves good performance on real world data sets making use of this new capsule structure along with a restnet backbone. Strong performance on the cifar10 and cifar100 datasets are presented and the network outperforms earlier versions of capsule networks. This new structure also performs well on an augmented MNIST dataset of overlapping digits (similar to the one used by Sabour et al 2017). Overall the paper is well written and presents solid results. The paper also presents a thorough comparison of two earlier versions of capsules which is a worthwhile contribution in its own right. The paper could be improved by clearing up a few ambiguities: - is the learning rate schedule the same for all three models? 
in figure 4 it looks like the learning rate is decayed at two distinct points for your model, but only one distinct point for both the EM and Dynamic routing models. -*Notably, the prediction becomes random guess when the iteration number increases to 5.* this sentence is a little confusing. Do you mean when the iteration number the performance is equivalent to not random assignments? - This new algorithm requires that the capsules in L+1 have initialized poses with which to compare agreement between the poses in L. This is initial value seems like it may greatly effect the performance of the model. In the paper it is set to 0 and not expanded upon. It would be interesting to see if randomizing this value, or learning a bias for it would effect performance. -unlike the two previous versions of capsules, the inverted dot product capsules show in figure 4 sudden huge decreases in test accuracy while training. These moments seem to be overcome quite quickly and the model ends up outperforming the other two. But it would be worth mentioning this behavior and perhaps attempting to explain it.", "role": "assistant" } ]
[ "This paper presents a new simpler routing mechanism for capsule networks and achieves good performance on real world data sets making use of this new capsule structure along with a restnet backbone.", "Strong performance on the cifar10 and cifar100 datasets are presented and the network outperforms earlier versions of capsule networks.", "This new structure also performs well on an augmented MNIST dataset of overlapping digits (similar to the one used by Sabour et al 2017).", "Overall the paper is well written and presents solid results.", "The paper also presents a thorough comparison of two earlier versions of capsules which is a worthwhile contribution in its own right.", "The paper could be improved by clearing up a few ambiguities: - is the learning rate schedule the same for all three models?", "in figure 4 it looks like the learning rate is decayed at two distinct points for your model, but only one distinct point for both the EM and Dynamic routing models.", "-*Notably, the prediction becomes random guess when the iteration number increases to 5.", "* this sentence is a little confusing.", "Do you mean when the iteration number the performance is equivalent to not random assignments?", "- This new algorithm requires that the capsules in L+1 have initialized poses with which to compare agreement between the poses in L. This is initial value seems like it may greatly effect the performance of the model.", "In the paper it is set to 0 and not expanded upon.", "It would be interesting to see if randomizing this value, or learning a bias for it would effect performance.", "-unlike the two previous versions of capsules, the inverted dot product capsules show in figure 4 sudden huge decreases in test accuracy while training.", "These moments seem to be overcome quite quickly and the model ends up outperforming the other two.", "But it would be worth mentioning this behavior and perhaps attempting to explain it." ]
{ "criticism": 2, "example": 1, "importance_and_relevance": 2, "materials_and_methods": 9, "praise": 5, "presentation_and_reporting": 5, "results_and_discussion": 7, "suggestion_and_solution": 3, "total": 16 }
0.125
0.0625
0.125
0.5625
0.3125
0.3125
0.4375
0.1875
16
1
1
1
1
1
1
1
1
2.125
1.982958
0.142042
The Local Elasticity of Neural Networks
This paper presents a phenomenon in neural networks that we refer to as local elasticity. Roughly speaking, a classifier is said to be locally elastic if its prediction at a feature vector x* is not significantly perturbed, after the classifier is updated via stochastic gradient descent at a (labeled) feature vector x that is dissimilar to x* in a certain sense. This phenomenon is shown to persist for neural networks with nonlinear activation functions through extensive simulations on synthetic datasets, whereas this is not the case for linear classifiers. In addition, we offer a geometric interpretation of local elasticity using the neural tangent kernel (Jacot et al., 2018). Building on top of local elasticity, we obtain pairwise similarity measures between feature vectors, which can be used for clustering in conjunction with K-means. The effectiveness of the clustering algorithm on the MNIST and CIFAR-10 datasets in turn confirms the hypothesis of local elasticity of neural networks on real-life data. Finally, we discuss implications of local elasticity to shed light on several intriguing aspects of deep neural networks.
This paper studies an interesting phenomenon in neural network models that the classifier's prediction at one input will not be significantly perturbed after the classifier is updated via SGD at another input that is dissimilar from the former one. This phenomenon is termed as the local elasticity, which provides another perspective seeking to interpret the neural networks. They present that this local elasticity characteristic does not hold for linear models. To further investigate this property, the paper introduces the relative similarity and kernelized similarity based on which a k-means like clustering algorithm is developed to further find fine-grained clusters within a coarse-grained category, like dogs and cats from the mammal category. Interpreting neural networks is a hot research topic, and a paper advancing knowledge in this area is certainly welcome. The paper is well presented (with a small typo in the definition of S_ker(x,x)). In the experiments, it will be interesting to further investigate how the local elastic property changes with large batch size in that large batch size may encourage more diversity of the examples in a batch. Furthermore, it will be even more interesting to explore how these similarities can improve the performance of a simple k-nearest neighbor classifier.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nThe Local Elasticity of Neural Networks\nThis paper presents a phenomenon in neural networks that we refer to as local elasticity. Roughly speaking, a classifier is said to be locally elastic if its prediction at a feature vector x* is not significantly perturbed, after the classifier is updated via stochastic gradient descent at a (labeled) feature vector x that is dissimilar to x* in a certain sense. This phenomenon is shown to persist for neural networks with nonlinear activation functions through extensive simulations on synthetic datasets, whereas this is not the case for linear classifiers. In addition, we offer a geometric interpretation of local elasticity using the neural tangent kernel (Jacot et al., 2018). Building on top of local elasticity, we obtain pairwise similarity measures between feature vectors, which can be used for clustering in conjunction with K-means. The effectiveness of the clustering algorithm on the MNIST and CIFAR-10 datasets in turn confirms the hypothesis of local elasticity of neural networks on real-life data. Finally, we discuss implications of local elasticity to shed light on several intriguing aspects of deep neural networks.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper studies an interesting phenomenon in neural network models that the classifier*s prediction at a one input will not be significantly perturbed after the classifier is updated via sgd at another input that is dissimilar from the former one. This phenomenon is termed as the local elasticity, which provides another perspective seeking to interpret the neural networks. They present that this local elasticity characteristic does not hold for linear models. To further investigate this property, the paper introduces the relative similarity and kernelized similarity based on which a k-means like clustering algorithm is developed to further find fine-grained clusters within a coarse-grained category, like dogs and cats from the mammal category. 
Interpreting neural networks is a hot research topic, and a paper advancing knowledge in this area is certainly welcome. The paper is well presented (with a small typo in the definition of S_ker(x,x)). In the experiments, it will be interesting to further investigate how the local elastic property changes with large batch size in that large batch size may encourage more diversity of the examples in a batch. Furthermore, it will be even more interesting to explore how these similarities can improve the performance of a simple k-nearest neighbor classifier.", "role": "assistant" } ]
[ "This paper studies an interesting phenomenon in neural network models that the classifier*s prediction at a one input will not be significantly perturbed after the classifier is updated via sgd at another input that is dissimilar from the former one.", "This phenomenon is termed as the local elasticity, which provides another perspective seeking to interpret the neural networks.", "They present that this local elasticity characteristic does not hold for linear models.", "To further investigate this property, the paper introduces the relative similarity and kernelized similarity based on which a k-means like clustering algorithm is developed to further find fine-grained clusters within a coarse-grained category, like dogs and cats from the mammal category.", "Interpreting neural networks is a hot research topic, and a paper advancing knowledge in this area is certainly welcome.", "The paper is well presented (with a small typo in the definition of S_ker(x,x)).", "In the experiments, it will be interesting to further investigate how the local elastic property changes with large batch size in that large batch size may encourage more diversity of the examples in a batch.", "Furthermore, it will be even more interesting to explore how these similarities can improve the performance of a simple k-nearest neighbor classifier." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 5, "materials_and_methods": 5, "praise": 3, "presentation_and_reporting": 2, "results_and_discussion": 0, "suggestion_and_solution": 2, "total": 8 }
0.125
0
0.625
0.625
0.375
0.25
0
0.25
8
1
0
1
1
1
1
0
1
2.25
1.855439
0.394561
The Local Elasticity of Neural Networks
This paper presents a phenomenon in neural networks that we refer to as local elasticity. Roughly speaking, a classifier is said to be locally elastic if its prediction at a feature vector x* is not significantly perturbed, after the classifier is updated via stochastic gradient descent at a (labeled) feature vector x that is dissimilar to x* in a certain sense. This phenomenon is shown to persist for neural networks with nonlinear activation functions through extensive simulations on synthetic datasets, whereas this is not the case for linear classifiers. In addition, we offer a geometric interpretation of local elasticity using the neural tangent kernel (Jacot et al., 2018). Building on top of local elasticity, we obtain pairwise similarity measures between feature vectors, which can be used for clustering in conjunction with K-means. The effectiveness of the clustering algorithm on the MNIST and CIFAR-10 datasets in turn confirms the hypothesis of local elasticity of neural networks on real-life data. Finally, we discuss implications of local elasticity to shed light on several intriguing aspects of deep neural networks.
The paper contributes to the understanding of neural networks and provides a new clustering technique: 1) The paper introduces the interesting notion of local elasticity which considers the relative variation in output values for two different inputs before and after an SGD-style update; 2) The derived similarity metric between input samples (obtained as SGD unfolds) can be used for clustering and is amenable to a kernelized formulation; 3) Empirical measurements on visual classification tasks show that the new similarity metric can offer a better clustering performance than PCA + K-means. I found the paper very interesting but the writing appeared somewhat unclear at times. I believe that some rewriting is needed for the authors to argue that the newly introduced elasticity metric provides a significantly new understanding of neural networks. In particular, I did not find the argumentation around explaining generalization to be very convincing or clear. The kernelized formulation of the elasticity metric seems compelling and I found that turning the insights developed by the theoretical section of the paper into an actionable algorithm for clustering was a nice contribution. Unfortunately, the empirical results did not really convince me that the resulting clustering algorithm really improves on the SOTA in clustering as only relatively weak baselines were considered. I believe that considering more solid baselines for the clustering experiments would help.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nThe Local Elasticity of Neural Networks\nThis paper presents a phenomenon in neural networks that we refer to as local elasticity. Roughly speaking, a classifier is said to be locally elastic if its prediction at a feature vector x* is not significantly perturbed, after the classifier is updated via stochastic gradient descent at a (labeled) feature vector x that is dissimilar to x* in a certain sense. This phenomenon is shown to persist for neural networks with nonlinear activation functions through extensive simulations on synthetic datasets, whereas this is not the case for linear classifiers. In addition, we offer a geometric interpretation of local elasticity using the neural tangent kernel (Jacot et al., 2018). Building on top of local elasticity, we obtain pairwise similarity measures between feature vectors, which can be used for clustering in conjunction with K-means. The effectiveness of the clustering algorithm on the MNIST and CIFAR-10 datasets in turn confirms the hypothesis of local elasticity of neural networks on real-life data. Finally, we discuss implications of local elasticity to shed light on several intriguing aspects of deep neural networks.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper contributes to the understanding of neural networks and provides a new clustering technique: 1) The paper introduces the interesting notion of local elasticity which considers the relative variation in output values for two different inputs before and after an SGD-style update; 2) The derived similarity metric between input samples (obtained by as SGD unfolds) can be used for clustering and is amenable to a kernelized formulation; 3) Empirical measurements on visual classification tasks show that the new similarity metric can offer a better clustering performance than PCA + K-means. I found the paper very interesting but the writing appeared somewhat unclear at times. 
I believe that some rewriting is needed for the authors to argue that the newly introduced elasticity metric provides a significantly new understanding of neural networks. In particular, I did not find the argumentation around explaining generalization to be very convincing or clear. The kernelized formulation of the elasticity metric seems compelling and I found that turning the insights developed by the theoretical section of the paper into an actionable algorithm for clustering was a nice contribution. Unfortunately, the empirical results did not really convince me that the resulting clustering algorithm really improves on the SOTA in clustering as only relatively weak baselines were considered. I believe that considering more solid baselines for the clustering experiments would help.", "role": "assistant" } ]
[ "The paper contributes to the understanding of neural networks and provides a new clustering technique: 1) The paper introduces the interesting notion of local elasticity which considers the relative variation in output values for two different inputs before and after an SGD-style update; 2) The derived similarity metric between input samples (obtained by as SGD unfolds) can be used for clustering and is amenable to a kernelized formulation; 3) Empirical measurements on visual classification tasks show that the new similarity metric can offer a better clustering performance than PCA + K-means.", "I found the paper very interesting but the writing appeared somewhat unclear at times.", "I believe that some rewriting is needed for the authors to argue that the newly introduced elasticity metric provides a significantly new understanding of neural networks.", "In particular, I did not find the argumentation around explaining generalization to be very convincing or clear.", "The kernelized formulation of the elasticity metric seems compelling and I found that turning the insights developed by the theoretical section of the paper into an actionable algorithm for clustering was a nice contribution.", "Unfortunately, the empirical results did not really convince me that the resulting clustering algorithm really improves on the SOTA in clustering as only relatively weak baselines were considered.", "I believe that considering more solid baselines for the clustering experiments would help." ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 4, "materials_and_methods": 5, "praise": 3, "presentation_and_reporting": 3, "results_and_discussion": 3, "suggestion_and_solution": 2, "total": 7 }
0.428571
0
0.571429
0.714286
0.428571
0.428571
0.428571
0.285714
7
1
0
1
1
1
1
1
1
3.285714
2.717547
0.568167
Coherent Gradients: An Approach to Understanding Generalization in Gradient Descent-based Optimization
An open question in the Deep Learning community is why neural networks trained with Gradient Descent generalize well on real datasets even though they are capable of fitting random data. We propose an approach to answering this question based on a hypothesis about the dynamics of gradient descent that we call Coherent Gradients: Gradients from similar examples are similar and so the overall gradient is stronger in certain directions where these reinforce each other. Thus changes to the network parameters during training are biased towards those that (locally) simultaneously benefit many examples when such similarity exists. We support this hypothesis with heuristic arguments and perturbative experiments and outline how this can explain several common empirical observations about Deep Learning. Furthermore, our analysis is not just descriptive, but prescriptive. It suggests a natural modification to gradient descent that can greatly reduce overfitting.
This paper posits that similar input examples will have similar gradients, leading to a gradient *coherence* phenomenon. A simple argument then suggests that the loss should decrease much more rapidly when gradients cohere than when they do not. This hypothesis and analysis are supported with clever experiments that confirm some of the predictions of this theory. Furthermore, since, as the authors emphasize, their hypothesis is prescriptive, they are able to suggest a novel regularization technique and show that it is effective in a simple setting. I find the coherent gradient hypothesis to be simple and reasonable. Furthermore, the paper is written very clearly, and as far as I know the main idea is original (although since it is a rather simple phenomenon, it's possible something similar could have appeared elsewhere in the literature). Perhaps more importantly, the associated experiments are very cleverly designed and are very supportive of the hypothesis. For instance, Figure 1 provides compelling evidence for the coherent gradient hypothesis and in particular motivates the way the phenomenon of early stopping arises as a natural consequence. Overall, the paper is of very high quality, and I recommend its acceptance. One criticism perhaps is whether these results are sufficiently significant. On the one hand, most of the experiments were done on small network and dataset combinations -- and the proposed regularization scheme as is will not scale to practical problems of interest. On the other hand, I really feel like I learned something interesting about gradient descent from reading this paper and absorbing the experimental results -- which is often not something I can say given the large array of reported experimental results in this field. It's clear that the authors themselves are aware that it's of interest to extend their results to more realistic settings, and regardless I think that this paper stands alone as is and should be accepted to ICLR.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nCoherent Gradients: An Approach to Understanding Generalization in Gradient Descent-based Optimization\nAn open question in the Deep Learning community is why neural networks trained with Gradient Descent generalize well on real datasets even though they are capable of fitting random data. We propose an approach to answering this question based on a hypothesis about the dynamics of gradient descent that we call Coherent Gradients: Gradients from similar examples are similar and so the overall gradient is stronger in certain directions where these reinforce each other. Thus changes to the network parameters during training are biased towards those that (locally) simultaneously benefit many examples when such similarity exists. We support this hypothesis with heuristic arguments and perturbative experiments and outline how this can explain several common empirical observations about Deep Learning. Furthermore, our analysis is not just descriptive, but prescriptive. It suggests a natural modification to gradient descent that can greatly reduce overfitting.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper posits that similar input examples will have similar gradients, leading to a gradient *coherence* phenomenon. A simple argument then suggests that the loss should decrease much more rapidly when gradients cohere than when they do not. This hypothesis and analysis is supported with clever experiments that confirm some of the predictions of this theory. Furthermore, since, as the authors emphasize, their hypothesis is prescriptive, they are able to suggest a novel regularization technique and show that it is effective in a simple setting. I find the coherent gradient hypothesis to be simple and reasonable. Furthermore, the paper is written very clearly, and as far as I know the main idea is original (although since it is a rather simple phenomenon, it*s possible something similar could have appeared elsewhere in the literature). 
Perhaps more importantly, the associated experiments are very cleverly designed and are very supportive of the hypothesis. For instance, Figure 1 provides compelling evidence for the coherent gradient hypothesis and in particular motivates the way phenomenon of early stopping arises a natural consequence. Overall, the paper is of very high quality, and I recommend its acceptance. One criticism perhaps is whether these results are sufficiently significant. On the one hand, most of the experiments were done on small network and dataset combinations -- and the proposed regularization scheme as is will not scale to practical problems of interest. On the other hand, I really feel like I learned something interesting about gradient descent from reading this paper and absorbing the experimental results -- which is often not something I can say given the large array of reported experimental results in this field. It*s clear that the authors themselves are aware that it*s of interest to extend their results to more realistic settings, and regardless I think that this paper stands alone as is and should be accepted to ICLR.", "role": "assistant" } ]
[ "This paper posits that similar input examples will have similar gradients, leading to a gradient *coherence* phenomenon.", "A simple argument then suggests that the loss should decrease much more rapidly when gradients cohere than when they do not.", "This hypothesis and analysis is supported with clever experiments that confirm some of the predictions of this theory.", "Furthermore, since, as the authors emphasize, their hypothesis is prescriptive, they are able to suggest a novel regularization technique and show that it is effective in a simple setting.", "I find the coherent gradient hypothesis to be simple and reasonable.", "Furthermore, the paper is written very clearly, and as far as I know the main idea is original (although since it is a rather simple phenomenon, it*s possible something similar could have appeared elsewhere in the literature).", "Perhaps more importantly, the associated experiments are very cleverly designed and are very supportive of the hypothesis.", "For instance, Figure 1 provides compelling evidence for the coherent gradient hypothesis and in particular motivates the way phenomenon of early stopping arises a natural consequence.", "Overall, the paper is of very high quality, and I recommend its acceptance.", "One criticism perhaps is whether these results are sufficiently significant.", "On the one hand, most of the experiments were done on small network and dataset combinations -- and the proposed regularization scheme as is will not scale to practical problems of interest.", "On the other hand, I really feel like I learned something interesting about gradient descent from reading this paper and absorbing the experimental results -- which is often not something I can say given the large array of reported experimental results in this field.", "It*s clear that the authors themselves are aware that it*s of interest to extend their results to more realistic settings, and regardless I think that this paper stands alone as is and should be accepted to ICLR." ]
{ "criticism": 2, "example": 1, "importance_and_relevance": 7, "materials_and_methods": 6, "praise": 6, "presentation_and_reporting": 1, "results_and_discussion": 8, "suggestion_and_solution": 1, "total": 13 }
0.153846
0.076923
0.538462
0.461538
0.461538
0.076923
0.615385
0.076923
13
1
1
1
1
1
1
1
1
2.461538
2.461538
0
Probabilistic Connection Importance Inference and Lossless Compression of Deep Neural Networks
Deep neural networks (DNNs) can be huge in size, requiring a considerable amount of energy and computational resources to operate, which limits their applications in numerous scenarios. It is thus of interest to compress DNNs while maintaining their performance levels. We here propose a probabilistic importance inference approach for pruning DNNs. Specifically, we test the significance of the relevance of a connection in a DNN to the DNN’s outputs using a nonparametric scoring test and keep only those significant ones. Experimental results show that the proposed approach achieves better lossless compression rates than existing techniques.
The paper proposes a new way of evaluating the importance of neural connections which can be used for better model compression. The approach uses a non-parametric statistical test to detect the three-way interaction among the two nodes and the final output. Some small-scale experiments show that the approach achieves a better compression rate given the same test error. The approach seems interesting in the sense that, unlike existing techniques, it explicitly measures the three-way interaction among the two nodes and the output. Also, it removes the average effects and only considers (non-linear) correlation after removing the mean. The explicit link to the final output and the removal of average effects allow the method to remove more weights without decreasing the loss by much. However, the drawback of the approach is the significantly increased computation due to 1) the quadratic complexity of the kernel methods; 2) the inability to cache computation across different pairs of nodes (for example, a gradient-based approach can compute importance for all node connections in one forward-backward pass). This limits the applicability of the approach to more interesting cases of larger models (for example, models that work on ImageNet) where model compression is of more urgent need. As a result, the significance and impact of the approach are also limited. The experimental results are interesting in that they show the proposed approach can achieve better compression rates given the same test error tolerance on a few small datasets. However, as mentioned above, these experiments are less convincing than experiments with more complicated models on larger datasets, such as ImageNet, where model compression has greater impact. In addition, the test errors on smaller datasets are easier to achieve; sometimes tuning the optimization settings such as learning rates can result in significant improvement. Therefore, such results are rather preliminary. The paper is generally clear and well written. The experiment section should include more important information, such as what kernel bandwidth is used for the Gaussian kernel, which greatly affects performance. Also, the main text introducing the proposed statistical test is a bit verbose and dense with unnecessary notation. I think it can be made more succinct by presenting the high-level idea (removing the mean, three-way correlations) first and then the final results. Some intermediate results can be put into the Appendix.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nProbabilistic Connection Importance Inference and Lossless Compression of Deep Neural Networks\nDeep neural networks (DNNs) can be huge in size, requiring a considerable a mount of energy and computational resources to operate, which limits their applications in numerous scenarios. It is thus of interest to compress DNNs while maintaining their performance levels. We here propose a probabilistic importance inference approach for pruning DNNs. Specifically, we test the significance of the relevance of a connection in a DNN to the DNN’s outputs using a nonparemtric scoring testand keep only those significant ones. Experimental results show that the proposed approach achieves better lossless compression rates than existing techniques", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes a new way of evaluating the importance of neural connections which can be used for better model compression. The approach uses a non-parametric statistical test to detect the three way interaction among the two nodes and final output. Some small scale experiments show that the approach achieves better compression rate given the same test error. The approach seems interesting in the sense that, unlike existing techniques, it explicitly measures the three way interaction among the two nodes and the output. Also, it removes the average effects and only considers (non-linear) correlation after removing the mean. The explicit link to the final output and removing average effects allows the method to remove more weights without decreasing the loss by much. However, the drawback of the approach is the significantly increased computation due to 1) quadratic complexity for the kernel methods; 2) unable to cache computation among different pairs of nodes (for example, gradient-based approach can compute importance for all node connections in one forward-backward pass). 
This limits the applicability of the approach to more interesting cases of larger models (for example, models that work on ImageNet) where model compression is of more urgent need. As a result, the significance and impact of the approach is also limited. The experiment results are interesting in that it shows the proposed approach can achieve better compression rates given the same test error tolerance on a few small datasets. However, as mentioned above, these experiments are less convincing than more complicated models on larger datasets, such as ImageNet where model compression has greater impact. In addition, the test errors on smaller datasets are easier to achieve, sometimes tuning the optimization settings such as learning rates can result in significant improvement. Therefore, such results are more like preliminary. The paper is general clear and well written. The experiment section should include more important information such as what kernel bandwidth is used for Gaussian kernel, which greatly affects performance. Also, the main text introducing the proposed statistical test is a bit verbose, and dense in unnecessary notations. I think it can be made more succinct by presenting a high level idea (removing mean, three way correlations) first and then the final results. Some intermediate results can be put into Appendix.", "role": "assistant" } ]
[ "The paper proposes a new way of evaluating the importance of neural connections which can be used for better model compression.", "The approach uses a non-parametric statistical test to detect the three way interaction among the two nodes and final output.", "Some small scale experiments show that the approach achieves better compression rate given the same test error.", "The approach seems interesting in the sense that, unlike existing techniques, it explicitly measures the three way interaction among the two nodes and the output.", "Also, it removes the average effects and only considers (non-linear) correlation after removing the mean.", "The explicit link to the final output and removing average effects allows the method to remove more weights without decreasing the loss by much.", "However, the drawback of the approach is the significantly increased computation due to 1) quadratic complexity for the kernel methods; 2) unable to cache computation among different pairs of nodes (for example, gradient-based approach can compute importance for all node connections in one forward-backward pass).", "This limits the applicability of the approach to more interesting cases of larger models (for example, models that work on ImageNet) where model compression is of more urgent need.", "As a result, the significance and impact of the approach is also limited.", "The experiment results are interesting in that it shows the proposed approach can achieve better compression rates given the same test error tolerance on a few small datasets.", "However, as mentioned above, these experiments are less convincing than more complicated models on larger datasets, such as ImageNet where model compression has greater impact.", "In addition, the test errors on smaller datasets are easier to achieve, sometimes tuning the optimization settings such as learning rates can result in significant improvement.", "Therefore, such results are more like preliminary.", "The paper is general clear and well written.", "The experiment section should include more important information such as what kernel bandwidth is used for Gaussian kernel, which greatly affects performance.", "Also, the main text introducing the proposed statistical test is a bit verbose, and dense in unnecessary notations.", "I think it can be made more succinct by presenting a high level idea (removing mean, three way correlations) first and then the final results.", "Some intermediate results can be put into Appendix." ]
{ "criticism": 4, "example": 0, "importance_and_relevance": 4, "materials_and_methods": 14, "praise": 4, "presentation_and_reporting": 5, "results_and_discussion": 13, "suggestion_and_solution": 4, "total": 18 }
0.222222
0
0.222222
0.777778
0.222222
0.277778
0.722222
0.222222
18
1
0
1
1
1
1
1
1
2.666667
2.272106
0.394561
MEMO: A Deep Network for Flexible Combination of Episodic Memories
Recent research developing neural network architectures with external memory has often used the benchmark bAbI question and answering dataset which provides a challenging number of tasks requiring reasoning. Here we employed a classic associative inference task from the human neuroscience literature in order to more carefully probe the reasoning capacity of existing memory-augmented architectures. This task is thought to capture the essence of reasoning -- the appreciation of distant relationships among elements distributed across multiple facts or memories. Surprisingly, we found that current architectures struggle to reason over long distance associations. Similar results were obtained on a more complex task involving finding the shortest path between nodes in a path. We therefore developed a novel architecture, MEMO, endowed with the capacity to reason over longer distances. This was accomplished with the addition of two novel components. First, it introduces a separation between memories/facts stored in external memory and the items that comprise these facts in external memory. Second, it makes use of an adaptive retrieval mechanism, allowing a variable number of ‘memory hops’ before the answer is produced. MEMO is capable of solving our novel reasoning tasks, as well as all 20 tasks in bAbI.
Summary: This paper presents a new task (paired associate inference), drawn from cognitive psychology, which requires linking many pieces of information together to make inferences with long range dependencies. Experimental results show that standard memory architectures fail on these tasks. To redress this, the paper proposes a new memory architecture with several new features that allow for much better performance on the paired associate task. Finally, the paper undertakes systematic experiments on more traditional domains like shortest path problems, showing that the new architecture achieves modest improvements. Major comments: Overall this is an interesting and useful work which uses a task from cognitive psychology to illuminate reasoning limitations in prior memory-augmented neural networks. It then proposes several architectural and algorithmic fixes to improve performance. The resulting solution appears to work based on the experimental evaluation, and although this does not lead to substantial improvements in the state of the art on the bAbI dataset, it seems likely that these improvements could be useful for even harder task settings. The new algorithmic improvements are justified mainly by their performance in experiments, which is completely acceptable if the experiments have been done to a high standard. Nevertheless, it would be wonderful if more insight could be gained about why exactly this configuration of architectural changes was selected and what they contribute. The ablation experiments go some way to addressing this question and perhaps the paper would benefit from a greater discussion of these results in the main text. The experiments appear to have been done to a high standard, and it is promising that the method’s advantages are more pronounced as the task gets harder (e.g. by demanding longer chains of inference). The paper could also be improved by more clearly describing how hyperparameters were selected for each experiment (including the different ablation studies). Why were hyperparameters sometimes chosen based on training loss and other times chosen based on validation loss? In general it would be useful to explain the selection procedure very carefully for each result, since it appears to have differed between evaluations. The paper is reasonably easy to follow, but could be streamlined to enable more discussion of material that is currently in the appendix. Typos: ENM->EMN on pg2
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMEMO: A Deep Network for Flexible Combination of Episodic Memories\nRecent research developing neural network architectures with external memory have often used the benchmark bAbI question and answering dataset which provides a challenging number of tasks requiring reasoning. Here we employed a classic associative inference task from the human neuroscience literature in order to more carefully probe the reasoning capacity of existing memory-augmented architectures. This task is thought to capture the essence of reasoning -- the appreciation of distant relationships among elements distributed across multiple facts or memories. Surprisingly, we found that current architectures struggle to reason over long distance associations. Similar results were obtained on a more complex task involving finding the shortest path between nodes in a path. We therefore developed a novel architecture, MEMO, endowed with the capacity to reason over longer distances. This was accomplished with the addition of two novel components. First, it introduces a separation between memories/facts stored in external memory and the items that comprise these facts in external memory. Second, it makes use of an adaptive retrieval mechanism, allowing a variable number of ‘memory hops’ before the answer is produced. MEMO is capable of solving our novel reasoning tasks, as well as all 20 tasks in bAbI.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nSummary: This paper presents a new task (paired associate inference), drawn from cognitive psychology, which requires linking many pieces of information together to make inferences with long range dependencies. Experimental results show that standard memory architectures fail on these tasks. To redress this, the paper proposes a new memory architecture with several new features that allow for much better performance on the paired associate task. 
Finally, the paper undertakes systematic experiments on more traditional domains like shortest path problems, showing that the new architecture achieves modest improvements. Major comments: Overall this is an interesting and useful work which uses a task from cognitive psychology to illuminate reasoning limitations in prior memory augmented neural networks. It then proposes several architectural and algorithmic fixes to improve performance. The resulting solution appears to work based on the experimental evaluation, and although this does not lead to substantial improvements in the state of the art on the bAbI dataset, it seems likely that these improvements could be useful for even harder task settings. The new algorithmic improvements are justified mainly by their performance in experiments, which is completely acceptable if the experiments have been done to a high standard. Nevertheless, it would be wonderful if more insight could be gained about why exactly this configuration of architectural changes was selected and what they contribute. The ablation experiments go some way to addressing this question and perhaps the paper would benefit from a greater discussion of these results in the main text. The experiments appear to have been done to a high standard, and it is promising that the method’s advantages are more pronounced as the task gets harder (eg by demanding longer chains of inference). The paper could also be improved by more clearly describing how hyper parameters were selected for each experiment (including the different ablation studies). Why were hyperparamters sometimes chosen based on training loss and other times chosen based on validation loss? In general it would be useful to explain the selection procedure very carefully for each result, since it appears to have differed between evaluations. The paper is reasonably easy to follow, but could be streamlined to enable more discussion of material that is currently in the appendix. Typos: ENM->EMN on pg2", "role": "assistant" } ]
[ "Summary: This paper presents a new task (paired associate inference), drawn from cognitive psychology, which requires linking many pieces of information together to make inferences with long range dependencies.", "Experimental results show that standard memory architectures fail on these tasks.", "To redress this, the paper proposes a new memory architecture with several new features that allow for much better performance on the paired associate task.", "Finally, the paper undertakes systematic experiments on more traditional domains like shortest path problems, showing that the new architecture achieves modest improvements.", "Major comments: Overall this is an interesting and useful work which uses a task from cognitive psychology to illuminate reasoning limitations in prior memory augmented neural networks.", "It then proposes several architectural and algorithmic fixes to improve performance.", "The resulting solution appears to work based on the experimental evaluation, and although this does not lead to substantial improvements in the state of the art on the bAbI dataset, it seems likely that these improvements could be useful for even harder task settings.", "The new algorithmic improvements are justified mainly by their performance in experiments, which is completely acceptable if the experiments have been done to a high standard.", "Nevertheless, it would be wonderful if more insight could be gained about why exactly this configuration of architectural changes was selected and what they contribute.", "The ablation experiments go some way to addressing this question and perhaps the paper would benefit from a greater discussion of these results in the main text.", "The experiments appear to have been done to a high standard, and it is promising that the method’s advantages are more pronounced as the task gets harder (eg by demanding longer chains of inference).", "The paper could also be improved by more clearly describing how hyper parameters were selected for each experiment (including the different ablation studies).", "Why were hyperparamters sometimes chosen based on training loss and other times chosen based on validation loss?", "In general it would be useful to explain the selection procedure very carefully for each result, since it appears to have differed between evaluations.", "The paper is reasonably easy to follow, but could be streamlined to enable more discussion of material that is currently in the appendix.", "Typos: ENM->EMN on pg2" ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 5, "materials_and_methods": 12, "praise": 3, "presentation_and_reporting": 5, "results_and_discussion": 6, "suggestion_and_solution": 5, "total": 16 }
0
0
0.3125
0.75
0.1875
0.3125
0.375
0.3125
16
0
0
1
1
1
1
1
1
2.25
2.107958
0.142042
Decoding As Dynamic Programming For Recurrent Autoregressive Models
Decoding in autoregressive models (ARMs) consists of searching for a high scoring output sequence under the trained model. Standard decoding methods, based on unidirectional greedy algorithm or beam search, are suboptimal due to error propagation and myopic decisions which do not account for future steps in the generation process. In this paper we present a novel decoding approach based on the method of auxiliary coordinates (Carreira-Perpinan & Wang, 2014) to address the aforementioned shortcomings. Our method introduces discrete variables for output tokens, and auxiliary continuous variables representing the states of the underlying ARM. The auxiliary variables lead to a factor graph approximation of the ARM, whose maximum a posteriori (MAP) inference is found exactly using dynamic programming. The MAP inference is then used to recreate an improved factor graph approximation of the ARM via updated auxiliary variables. We then extend our approach to decode in an ensemble of ARMs, possibly with different generation orders, which is out of reach for the standard unidirectional decoding algorithms. Experiments on the text infilling task over SWAG and Daily Dialogue datasets show that our decoding method is superior to strong unidirectional decoding baselines.
This work introduces a new algorithm to improve decoding in discrete autoregressive models (ARMs). Because exact decoding of ARMs is computationally difficult, the contribution of this paper is to introduce a new approximate solution. The authors show experimentally that this new framework is very effective and outperforms multiple baselines such as greedy and beam search, as well as another text infilling algorithm called TIGC, on a text infilling benchmark. This work is not in my domain of expertise so I am not able to give a very accurate evaluation. However, based on the references cited in this paper, there are other approximate solutions for ARMs, and I believe the authors need to use those as baselines to show that the proposed approximate solution is useful.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDecoding As Dynamic Programming For Recurrent Autoregressive Models\nDecoding in autoregressive models (ARMs) consists of searching for a high scoring output sequence under the trained model. Standard decoding methods, based on unidirectional greedy algorithm or beam search, are suboptimal due to error propagation and myopic decisions which do not account for future steps in the generation process. In this paper we present a novel decoding approach based on the method of auxiliary coordinates (Carreira-Perpinan & Wang, 2014) to address the aforementioned shortcomings. Our method introduces discrete variables for output tokens, and auxiliary continuous variables representing the states of the underlying ARM. The auxiliary variables lead to a factor graph approximation of the ARM, whose maximum a posteriori (MAP) inference is found exactly using dynamic programming. The MAP inference is then used to recreate an improved factor graph approximation of the ARM via updated auxiliary variables. We then extend our approach to decode in an ensemble of ARMs, possibly with different generation orders, which is out of reach for the standard unidirectional decoding algorithms. Experiments on the text infilling task over SWAG and Daily Dialogue datasets show that our decoding method is superior to strong unidirectional decoding baselines.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis work introduces a new algorithm to improve decoding in discrete autoregressive models (ARMs). Because exact decoding of ARMs is computationally difficult, the contribution of this paper is to introduce a new approximate solution. The authors show experimentally that this new framework is very effective and outperforms multiple baselines such as greedy and bean search and another text filing algorithm called TIGC on a text-filling benchmark. This work is not in my domain of expertise so I am not able to have a very accurate evaluation. 
However, based on the references cited in this paper, there are other approximate solution for ARMs and I believe the authors need to use those as baselines to show that the proposed approximate solution is useful.", "role": "assistant" } ]
[ "This work introduces a new algorithm to improve decoding in discrete autoregressive models (ARMs).", "Because exact decoding of ARMs is computationally difficult, the contribution of this paper is to introduce a new approximate solution.", "The authors show experimentally that this new framework is very effective and outperforms multiple baselines such as greedy and bean search and another text filing algorithm called TIGC on a text-filling benchmark.", "This work is not in my domain of expertise so I am not able to have a very accurate evaluation.", "However, based on the references cited in this paper, there are other approximate solution for ARMs and I believe the authors need to use those as baselines to show that the proposed approximate solution is useful." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 5, "praise": 1, "presentation_and_reporting": 1, "results_and_discussion": 0, "suggestion_and_solution": 1, "total": 5 }
0.2
0
0.6
1
0.2
0.2
0
0.2
5
1
0
1
1
1
1
0
1
2.4
1.389924
1.010076
Shifted and Squeezed 8-bit Floating Point format for Low-Precision Training of Deep Neural Networks
Training with larger number of parameters while keeping fast iterations is an increasingly adopted strategy and trend for developing better performing Deep Neural Network (DNN) models. This necessitates increased memory footprint and computational requirements for training. Here we introduce a novel methodology for training deep neural networks using 8-bit floating point (FP8) numbers. Reduced bit precision allows for a larger effective memory and increased computational speed. We name this method Shifted and Squeezed FP8 (S2FP8). We show that, unlike previous 8-bit precision training methods, the proposed method works out of the box for representative models: ResNet50, Transformer and NCF. The method can maintain model accuracy without requiring fine-tuning loss scaling parameters or keeping certain layers in single precision. We introduce two learnable statistics of the DNN tensors - shifted and squeezed factors that are used to optimally adjust the range of the tensors in 8-bits, thus minimizing the loss in information due to quantization.
The paper suggests a method to train neural networks using 8-bit floating point precision values. The method requires a hardware modification to evaluate per-tensor statistics, which intuitively capture the dynamic range that a given tensor's values may fall into. Those per-tensor statistics are used to map the 8-bit representation onto a rational number. The statistics themselves are calculated using 32-bit floating point numbers. While the method is interesting, I do not think it is practical due to the required hardware modifications. I am by no means a hardware design expert, but I am not convinced that the gain of using 8- vs 16-bit floating point numbers outweighs the extra complexity of the hardware implementation. Mapping between the representation and actual numerical values is much more complex than when using a standard floating point representation (since any given numerical value is defined by its 8-bit representation plus the alpha and beta tensor statistics), so the integrated circuitry used to execute arithmetic operations would likely be much more complex than standard floating point units. It is plausible that the added complexity (in terms of transistor count) would negate any potential price or energy savings over simply using a 16-bit floating point representation. This should definitely be discussed in depth in this paper, whose main contribution is an algorithm to be implemented in hardware. As different layers will have different (alpha, beta) tensor statistics, from the computational perspective they would look like different data types, which would have to be cast into a common type before any arithmetic operations are performed on them. This would almost certainly require extra computational steps, potentially negating the computational benefits over simply using 16-bit floating point numbers. The algorithm evaluates per-tensor and not per-tensor-element statistics. This may work well in cases where all entries in a given tensor have the same dynamic range, but may break down in other cases: for example, diagonal entries in a matrix multiplication layer inside an LSTM may have very different statistics compared to non-diagonal entries.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nShifted and Squeezed 8-bit Floating Point format for Low-Precision Training of Deep Neural Networks\nTraining with larger number of parameters while keeping fast iterations is an increasingly adopted strategy and trend for developing better performing Deep Neural Network (DNN) models. This necessitates increased memory footprint and computational requirements for training. Here we introduce a novel methodology for training deep neural networks using 8-bit floating point (FP8) numbers. Reduced bit precision allows for a larger effective memory and increased computational speed. We name this method Shifted and Squeezed FP8 (S2FP8). We show that, unlike previous 8-bit precision training methods, the proposed method works out of the box for representative models: ResNet50, Transformer and NCF. The method can maintain model accuracy without requiring fine-tuning loss scaling parameters or keeping certain layers in single precision. We introduce two learnable statistics of the DNN tensors - shifted and squeezed factors that are used to optimally adjust the range of the tensors in 8-bits, thus minimizing the loss in information due to quantization.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper suggest the method to train neural networks using 8 bit floating point precision values. The method requires a hardware modification to evaluate per tensor statistics, which intuitively capture dynamic range that a given tensor values may fall into. Those per tensor statistics are used to map 8-bit representation into a rational number. Statistics themselves are calculated using 32 bit floating point numbers. While the method is interesting, I do not think it is practical due to the required hardware modifications. I am by no means not a hardware design expert, but I am not convinced that the gain of using 8 vs 16 bit floating point numbers outweights any extra complexity of hardware implementation. 
Mapping between representation and actual numerical values is much more complex than when using standard floating point representation (as any given numerical value is defined by a single its 8 bit representation + alpha + beta tensor statistics), integrated circuitry used to execute arithmetic operations would likely be much more complex than when using standard floating point operations. It is plausible that the added complexity (in terms of transistor count) would negate any potential price or energy savings over simply using 16 bit floating point representation. This should definitely be discussed in depth in this paper where the main contribution is an algorithm to be implemented in hardware. As different layers will have different (alpha,beta) tensor statistics, from the computational perspective they would look like different data types which would be cast into the same common type before doing any arithmetic operations on them. This would almost certainly require extra computational steps to be done potentially negating computational benefits over simply using 16 bit floating point numbers. The algorithm evaluates per-tensor and not per-tensor-element statistics. This may work well in cases where all entries in a given tensor are of the same dynamic range, but may break down in other cases: for example, diagonal entries in a matrix multiplication layer inside LSTM may have very different statistics comparing to non diagonal entries.", "role": "assistant" } ]
[ "The paper suggest the method to train neural networks using 8 bit floating point precision values.", "The method requires a hardware modification to evaluate per tensor statistics, which intuitively capture dynamic range that a given tensor values may fall into.", "Those per tensor statistics are used to map 8-bit representation into a rational number.", "Statistics themselves are calculated using 32 bit floating point numbers.", "While the method is interesting, I do not think it is practical due to the required hardware modifications.", "I am by no means not a hardware design expert, but I am not convinced that the gain of using 8 vs 16 bit floating point numbers outweights any extra complexity of hardware implementation.", "Mapping between representation and actual numerical values is much more complex than when using standard floating point representation (as any given numerical value is defined by a single its 8 bit representation + alpha + beta tensor statistics), integrated circuitry used to execute arithmetic operations would likely be much more complex than when using standard floating point operations.", "It is plausible that the added complexity (in terms of transistor count) would negate any potential price or energy savings over simply using 16 bit floating point representation.", "This should definitely be discussed in depth in this paper where the main contribution is an algorithm to be implemented in hardware.", "As different layers will have different (alpha,beta) tensor statistics, from the computational perspective they would look like different data types which would be cast into the same common type before doing any arithmetic operations on them.", "This would almost certainly require extra computational steps to be done potentially negating computational benefits over simply using 16 bit floating point numbers.", "The algorithm evaluates per-tensor and not per-tensor-element statistics.", "This may work well in cases where all entries in a given tensor are of the same dynamic range, but may break down in other cases: for example, diagonal entries in a matrix multiplication layer inside LSTM may have very different statistics comparing to non diagonal entries." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 13, "praise": 1, "presentation_and_reporting": 4, "results_and_discussion": 2, "suggestion_and_solution": 5, "total": 13 }
0.153846
0
0.153846
1
0.076923
0.307692
0.153846
0.384615
13
1
0
1
1
1
1
1
1
2.230769
2.230769
0
Biologically inspired sleep algorithm for increased generalization and adversarial robustness in deep neural networks
Current artificial neural networks (ANNs) can perform and excel at a variety of tasks ranging from image classification to spam detection through training on large datasets of labeled data. While the trained network usually performs well on similar testing data, certain inputs that differ even slightly from the training data may trigger unpredictable behavior. Due to this limitation, it is possible to generate inputs with very small designed perturbations that can result in misclassification. These adversarial attacks present a security risk to deployed ANNs and indicate a divergence between how ANNs and humans perform classification. Humans are robust at behaving in the presence of noise and are capable of correctly classifying objects that are occluded, blurred, or otherwise distorted. It has been hypothesized that sleep promotes generalization and improves robustness against noise in animals and humans. In this work, we utilize a biologically inspired sleep phase in ANNs and demonstrate the benefit of sleep on defending against adversarial attacks as well as increasing ANN classification robustness. We compare the sleep algorithm*s performance on various robustness tasks with two previously proposed adversarial defenses, defensive distillation and fine-tuning. We report an increase in robustness after sleep to adversarial attacks as well as to general image distortions for three datasets: MNIST, CUB200, and a toy dataset. Overall, these results demonstrate the potential for biologically inspired solutions to solve existing problems in ANNs and guide the development of more robust, human-like ANNs.
The paper proposes an ANN training method for improving adversarial robustness and generalization, inspired by biological sleep. I'm leaning towards accepting as it seems to be an original concept and has fairly extensive empirical results that are somewhat promising. The idea of a sleep phase as an alternative to explicit adversarial or generalization training is interesting. The results suggest that the approach works reasonably well in many cases. Suggestions for improvement / clarification: - The mapping from biological sleep to the actual algorithm + pseudocode used could benefit from more thorough explanation. It is not clear which choices are arbitrary vs well-principled. - Was the optimal sleep duration determined empirically for each experiment? - I agree with the authors' proposed future work of better understanding and standardizing this approach. - Consider combining this approach with the existing adversarial or generalizing approaches (instead of as an alternative). Do they complement each other?
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nBiologically inspired sleep algorithm for increased generalization and adversarial robustness in deep neural networks\nCurrent artificial neural networks (ANNs) can perform and excel at a variety of tasks ranging from image classification to spam detection through training on large datasets of labeled data. While the trained network usually performs well on similar testing data, certain inputs that differ even slightly from the training data may trigger unpredictable behavior. Due to this limitation, it is possible to generate inputs with very small designed perturbations that can result in misclassification. These adversarial attacks present a security risk to deployed ANNs and indicate a divergence between how ANNs and humans perform classification. Humans are robust at behaving in the presence of noise and are capable of correctly classifying objects that are occluded, blurred, or otherwise distorted. It has been hypothesized that sleep promotes generalization and improves robustness against noise in animals and humans. In this work, we utilize a biologically inspired sleep phase in ANNs and demonstrate the benefit of sleep on defending against adversarial attacks as well as increasing ANN classification robustness. We compare the sleep algorithm*s performance on various robustness tasks with two previously proposed adversarial defenses, defensive distillation and fine-tuning. We report an increase in robustness after sleep to adversarial attacks as well as to general image distortions for three datasets: MNIST, CUB200, and a toy dataset. Overall, these results demonstrate the potential for biologically inspired solutions to solve existing problems in ANNs and guide the development of more robust, human-like ANNs.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes an ANN training method for improving adversarial robustness and generalization, inspired by biological sleep. 
I*m leaning towards accepting as it seems to be an original concept and has fairly extensive empirical results that are somewhat promising. The idea of a sleep phase as an alternative to explicit adversarial or generalization training is interesting. The results suggest that the approach works reasonably well in many cases. Suggestions for improvement / clarification: - The mapping from biological sleep to the actual algorithm + pseudocode used could benefit from more thorough explanation. It is not clear which choices are arbitrary vs well-principled. - Was the optimal sleep duration determined empirically for each experiment? - I agree with the authors* proposed future work of better understanding and standardizing this approach. - Consider combining this approach with the existing adversarial or generalizing approaches (instead of as an alternative). Do they complement each other?", "role": "assistant" } ]
[ "The paper proposes an ANN training method for improving adversarial robustness and generalization, inspired by biological sleep.", "I*m leaning towards accepting as it seems to be an original concept and has fairly extensive empirical results that are somewhat promising.", "The idea of a sleep phase as an alternative to explicit adversarial or generalization training is interesting.", "The results suggest that the approach works reasonably well in many cases.", "Suggestions for improvement / clarification: - The mapping from biological sleep to the actual algorithm + pseudocode used could benefit from more thorough explanation.", "It is not clear which choices are arbitrary vs well-principled.", "- Was the optimal sleep duration determined empirically for each experiment?", "- I agree with the authors* proposed future work of better understanding and standardizing this approach.", "- Consider combining this approach with the existing adversarial or generalizing approaches (instead of as an alternative).", "Do they complement each other?" ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 4, "materials_and_methods": 7, "praise": 3, "presentation_and_reporting": 1, "results_and_discussion": 3, "suggestion_and_solution": 3, "total": 10 }
0.1
0
0.4
0.7
0.3
0.1
0.3
0.3
10
1
0
1
1
1
1
1
1
2.2
2.057958
0.142042
Denoising and Regularization via Exploiting the Structural Bias of Convolutional Generators
Convolutional Neural Networks (CNNs) have emerged as highly successful tools for image generation, recovery, and restoration. This success is often attributed to large amounts of training data. On the contrary, a number of recent experimental results suggest that a major contributing factor to this success is that convolutional networks impose strong prior assumptions about natural images. A surprising experiment that highlights this structural bias towards simple, natural images is that one can remove various kinds of noise and corruptions from a corrupted natural image by simply fitting (via gradient descent) a randomly initialized, over-parameterized convolutional generator to this single image. While this over-parameterized model can eventually fit the corrupted image perfectly, surprisingly after a few iterations of gradient descent one obtains the uncorrupted image, without using any training data. This intriguing phenomena has enabled state-of-the-art CNN-based denoising as well as regularization in linear inverse problems such as compressive sensing. In this paper we take a step towards demystifying this experimental phenomena by attributing this effect to particular architectural choices of convolutional networks, namely fixed convolutional operations. We then formally characterize the dynamics of fitting a two layer convolutional generator to a noisy signal and prove that early-stopped gradient descent denoises/regularizes. This results relies on showing that convolutional generators fit the structured part of an image significantly faster than the corrupted portion.
The paper provides a theoretical study of regularization capabilities of over-parameterized convolutional generators trained via gradient descent, in the context of denoising with an approach similar to the "deep image prior". The authors show that when using appropriate upsampling operators, gradient descent biases the reconstructed images towards low-frequency components, while the noise components, which typically consist of high-frequency patterns, take longer to fit, so that early stopping can provide a useful bias for denoising. The proofs are based on recent results for "lazy" training of over-parameterized networks, namely neural tangent kernels. I find the paper interesting and novel, as it is the first to my knowledge to study the implicit bias of optimization for such generator-based denoising approaches, and provides interesting theoretical and empirical denoising results. While the current results are for a simple architecture with one hidden layer, they may pave the way for a formal study of more complex architectures. Nevertheless, the paper presents some limitations which should be addressed further in the paper: * in terms of denoising capabilities, it seems that the results for the non-linear, over-parameterized network are essentially similar to the linear case (with an appropriately designed operator J in Section 3). Are there any benefits to the non-linear case that I am missing? Whether or not this is a limitation of the study, it should be discussed further in the paper. * Some comments on the empirical results reported in Figure 2: (i) how are hyper-parameters chosen? this seems to be crucial for the effective use of such denoising strategies, and it is unclear how robust these methods are, e.g., to the early stopping time; (ii) there are other methods that only learn on the given image (thus without the need for a training set) which outperform BM3D [e.g. 1,2], it is unclear how the proposed methods compare to those. other minor comments: - Figure 2 "maintained on a large test set of images" more details are welcome, including on hyperparameter selection - Section 1.1: * "demystifying" -> understanding? * different notations are used for the (same?) iterates, C^t and C_tau - Section 2.3 * "it follow that optimizing...": please clarify. I agree that the optimum is the same, but is the implicit bias from initialization the same? - end of Section 3: early stoped -> stopped - Section 4 * after Definition 2, "this in turn implies that the jacobian spectrum ...rapidly": clarify, is it a consequence of the analysis? * after Thm 1: maybe give some insight about the proof and the use of NTK? also, discuss similarities or differences with the linear case [1] Dabov et al. (2009) BM3D Image Denoising with Shape-Adaptive Principal Component Analysis [2] Mairal et al. (2009) Non-local Sparse Models for Image Restoration
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDenoising and Regularization via Exploiting the Structural Bias of Convolutional Generators\nConvolutional Neural Networks (CNNs) have emerged as highly successful tools for image generation, recovery, and restoration. This success is often attributed to large amounts of training data. On the contrary, a number of recent experimental results suggest that a major contributing factor to this success is that convolutional networks impose strong prior assumptions about natural images. A surprising experiment that highlights this structural bias towards simple, natural images is that one can remove various kinds of noise and corruptions from a corrupted natural image by simply fitting (via gradient descent) a randomly initialized, over-parameterized convolutional generator to this single image. While this over-parameterized model can eventually fit the corrupted image perfectly, surprisingly after a few iterations of gradient descent one obtains the uncorrupted image, without using any training data. This intriguing phenomena has enabled state-of-the-art CNN-based denoising as well as regularization in linear inverse problems such as compressive sensing. In this paper we take a step towards demystifying this experimental phenomena by attributing this effect to particular architectural choices of convolutional networks, namely fixed convolutional operations. We then formally characterize the dynamics of fitting a two layer convolutional generator to a noisy signal and prove that early-stopped gradient descent denoises/regularizes. This results relies on showing that convolutional generators fit the structured part of an image significantly faster than the corrupted portion.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper provides a theoretical study of regularization capabilities of over-parameterized convolutional generators trained via gradient descent, in the context of denoising with an approach similar to the *deep image prior*. 
The authors show that when using appropriate upsampling operators, gradient descent biases the reconstructed images towards low-frequency components, while the noise components, which typically consist of high-frequency patterns, take longer to fit, so that early stopping can provide a useful bias for denoising. The proofs are based on recent results for *lazy* training of over-parameterized networks, namely neural tangent kernels. I find the paper interesting and novel, as it is the first to my knowledge to study the implicit bias of optimization for such generator-based denoising approaches, and provides interesting theoretical and empirical denoising results. While the current results are for a simple architecture with one hidden layer, they may pave the way for a formal study of more complex architectures. Nevertheless, the paper presents some limitations which should be addressed further in the paper: * in terms of denoising capabilities, it seems that the results for the non-linear, over-parameterized network are essentially similar to the linear case (with an appropriately designed operator J in Section 3). Are there any benefits to the non-linear case that I am missing? Whether or not this is a limitation of the study, it should be discussed further in the paper. * Some comments on the empirical results reported in Figure 2: (i) how are hyper-parameters chosen? this seems to be crucial for the effective use of such denoising strategies, and it is unclear how robust these methods are, e.g., to the early stopping time; (ii) there are other methods that only learn on the given image (thus without the need for a training set) which outperform BM3D [e.g. 1,2], it is unclear how the proposed methods compare to those. other minor comments: - Figure 2 *maintained on a large test set of images* more details are welcome, including on hyperparameter selection - Section 1.1: * *demystifying* -> understanding? * different notations are used for the (same?) iterates, C^t and C_tau - Section 2.3 * *it follow that optimizing...*: please clarify. I agree that the optimum is the same, but is the implicit bias from initialization the same? - end of Section 3: early stoped -> stopped - Section 4 * after Definition 2, *this in turn implies that the jacobian spectrum ...rapidly*: clarify, is it a consequence of the analysis? * after Thm 1: maybe give some insight about the proof and the use of NTK? also, discuss similarities or differences with the linear case [1] Dabov et al. (2009) BM3D Image Denoising with Shape-Adaptive Principal Component Analysis [2] Mairal et al. (2009) Non-local Sparse Models for Image Restoration", "role": "assistant" } ]
[ "The paper provides a theoretical study of regularization capabilities of over-parameterized convolutional generators trained via gradient descent, in the context of denoising with an approach similar to the *deep image prior*.", "The authors show that when using appropriate upsampling operators, gradient descent biases the reconstructed images towards low-frequency components, while the noise components, which typically consist of high-frequency patterns, take longer to fit, so that early stopping can provide a useful bias for denoising.", "The proofs are based on recent results for *lazy* training of over-parameterized networks, namely neural tangent kernels.", "I find the paper interesting and novel, as it is the first to my knowledge to study the implicit bias of optimization for such generator-based denoising approaches, and provides interesting theoretical and empirical denoising results.", "While the current results are for a simple architecture with one hidden layer, they may pave the way for a formal study of more complex architectures.", "Nevertheless, the paper presents some limitations which should be addressed further in the paper: * in terms of denoising capabilities, it seems that the results for the non-linear, over-parameterized network are essentially similar to the linear case (with an appropriately designed operator J in Section 3).", "Are there any benefits to the non-linear case that I am missing?", "Whether or not this is a limitation of the study, it should be discussed further in the paper.", "* Some comments on the empirical results reported in Figure 2: (i) how are hyper-parameters chosen?", "this seems to be crucial for the effective use of such denoising strategies, and it is unclear how robust these methods are, e.g., to the early stopping time; (ii) there are other methods that only learn on the given image (thus without the need for a training set) which outperform BM3D [e.g.", "1,2], it is unclear how the proposed methods compare to those.", "other minor comments: - Figure 2 *maintained on a large test set of images* more details are welcome, including on hyperparameter selection - Section 1.1: * *demystifying* -> understanding?", "* different notations are used for the (same?)", "iterates, C^t and C_tau - Section 2.3 * *it follow that optimizing...*: please clarify.", "I agree that the optimum is the same, but is the implicit bias from initialization the same?", "- end of Section 3: early stoped -> stopped - Section 4 * after Definition 2, *this in turn implies that the jacobian spectrum ...rapidly*: clarify, is it a consequence of the analysis?", "* after Thm 1: maybe give some insight about the proof and the use of NTK?", "also, discuss similarities or differences with the linear case [1] Dabov et al.", "(2009) BM3D Image Denoising with Shape-Adaptive Principal Component Analysis [2] Mairal et al.", "(2009) Non-local Sparse Models for Image Restoration" ]
{ "criticism": 3, "example": 3, "importance_and_relevance": 3, "materials_and_methods": 14, "praise": 2, "presentation_and_reporting": 5, "results_and_discussion": 8, "suggestion_and_solution": 7, "total": 20 }
0.15
0.15
0.15
0.7
0.1
0.25
0.4
0.35
20
1
1
1
1
1
1
1
1
2.25
1.476661
0.773339
Decoupling Representation and Classifier for Long-Tailed Recognition
The long-tail distribution of the visual world poses great challenges for deep learning based classification models on how to handle the class imbalance problem. Existing solutions usually involve class-balancing strategies, e.g., by loss re-weighting, data re-sampling, or transfer learning from head- to tail-classes, but all of them adhere to the scheme of jointly learning representations and classifiers. In this work, we decouple the learning procedure into representation learning and classification, and systematically explore how different balancing strategies affect them for long-tailed recognition. The findings are surprising: (1) data imbalance might not be an issue in learning high-quality representations; (2) with representations learned with the simplest instance-balanced (natural) sampling, it is also possible to achieve strong long-tailed recognition ability at little to no cost by adjusting only the classifier. We conduct extensive experiments and set new state-of-the-art performance on common long-tailed benchmarks like ImageNet-LT, Places-LT and iNaturalist, showing that it is possible to outperform carefully designed losses, sampling strategies, even complex modules with memory, by using a straightforward approach that decouples representation and classification.
The paper tries to handle the class imbalance problem by decoupling the learning process into representation learning and classification, in contrast to the current methods that jointly learn both of them. They comprehensively study several sampling methods for representation learning and different strategies for classification. They find that instance-balanced sampling gives the best representation, and simply adjusting the classifier will equip the model with long-tailed recognition ability. They achieve state of the art on long-tailed data (ImageNet-LT, Places-LT and iNaturalist). In general, this is an interesting paper. The authors propose that instance-balanced sampling already learns the best and most generalizable representations, which is contrary to common expectation. They perform extensive experiments to illustrate their points. --Writing: This paper is well written in English and is well structured. There are two typos. One is in the second row of page 3, "… a more continuous decrease [in in] class labels …" and the other one is in the first paragraph of section 5.4, "… report state-of-art results [on on] three common long-tailed benchmarks …". --Introduction and review: The authors do a comprehensive literature review, listing the main directions for solving the long-tailed recognition problem. They emphasize that these methods all jointly learn the representations and classifiers, which "make it unclear how the long-tailed recognition ability is achieved-is it from learning a better representation or by handling the data imbalance better via shifting classifier decision boundaries". This motivates them to decouple representation learning and classification. --Experiment: Since this paper decouples representation learning and classification to "make it clear" whether the long-tailed recognition ability is achieved from a better representation or a more balanced classifier, I recommend that the authors show some visualization of the feature maps in addition to the performance numbers. I find it difficult to imagine what a "better representation" actually looks like. The authors conduct experiments with ResNeXt-{10,50,101,151}, and mainly use ResNeXt-50 for analysis. Will other networks get similar results to those of ResNeXt-50 shown in Figure 1? When showing the results, as in Figures 1 and 2 and Table 2, it would be better to mention the parameters chosen for τ-normalization and the other methods. Conclusion: I tend to accept this paper since it is interesting and renews our understanding of the long-tailed recognition ability of neural networks and sampling strategies. What's more, the experiments are comprehensive and rigorous.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDecoupling Representation and Classifier for Long-Tailed Recognition\nThe long-tail distribution of the visual world poses great challenges for deep learning based classification models on how to handle the class imbalance problem. Existing solutions usually involve class-balancing strategies, e.g., by loss re-weighting, data re-sampling, or transfer learning from head- to tail-classes, but all of them adhere to the scheme of jointly learning representations and classifiers. In this work, we decouple the learning procedure into representation learning and classification, and systematically explore how different balancing strategies affect them for long-tailed recognition. The findings are surprising: (1) data imbalance might not be an issue in learning high-quality representations; (2) with representations learned with the simplest instance-balanced (natural) sampling, it is also possible to achieve strong long-tailed recognition ability at little to no cost by adjusting only the classifier. We conduct extensive experiments and set new state-of-the-art performance on common long-tailed benchmarks like ImageNet-LT, Places-LT and iNaturalist, showing that it is possible to outperform carefully designed losses, sampling strategies, even complex modules with memory, by using a straightforward approach that decouples representation and classification.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper tries to handle the class imbalance problem by decoupling the learning process into representation learning and classification, in contrast to the current methods that jointly learn both of them. They comprehensively study several sampling methods for representation learning and different strategies for classification. They find that instance-balanced sampling gives the best representation, and simply adjusting the classifier will equip the model with long-tailed recognition ability. They achieve start of art on long-tailed data (ImageNet-LT, Places-LT and iNaturalist). 
In general, this is paper is an interesting paper. The author propose that instance-balanced sampling already learns the best and most generalizable representations, which is out of common expectation. They perform extensive experiment to illustrate their points. --Writing: This paper is well written in English and is well structured. And there are two typos. One is in the second row of page 3, *… a more continuous decrease [in in] class labels …* and the other one is in the first paragraph of section 5.4, *… report state-of-art results [on on] three common long-tailed benchmarks …*. --Introduction and review: The authors do a comprehensive literature review, listing the main directions for solving the long-tailed recognition problem. They emphasis that these methods all jointly learn the representations and classifiers, which *make it unclear how the long-tailed recognition ability is achieved-is it from learning a better representation or by handling the data imbalance better via shifting classifier decision boundaries*. This motivate them to decouple representation learning and classification. --Experiment: Since this paper decouples the representation learning and classification to *make it clear* whether the long-tailed recognition ability is achieved from better representation or more balanced classifier, I recommend that authors show us some visualization of the feature map besides number on performance. Because I am confused and difficult to image what *better representation* actually looks like. The authors conduct experiment with ResNeXt-{10,50,101,151}, and mainly use ResNeXt-50 for analysis. Will other networks get similar results as that of ResNeXt-50 shown in Figure 1? When showing the results, like Figure 1, 2 and Table 2, it would be better to mention the parameters chosen for \tau-normalization and other methods. Conclusion: I tend to accept this paper since it is interesting and renews our understanding of the long-tailed recognition ability of neural network and sampling strategies. What*s more, he experiment is comprehensive and rigorous.", "role": "assistant" } ]
[ "The paper tries to handle the class imbalance problem by decoupling the learning process into representation learning and classification, in contrast to the current methods that jointly learn both of them.", "They comprehensively study several sampling methods for representation learning and different strategies for classification.", "They find that instance-balanced sampling gives the best representation, and simply adjusting the classifier will equip the model with long-tailed recognition ability.", "They achieve start of art on long-tailed data (ImageNet-LT, Places-LT and iNaturalist).", "In general, this is paper is an interesting paper.", "The author propose that instance-balanced sampling already learns the best and most generalizable representations, which is out of common expectation.", "They perform extensive experiment to illustrate their points.", "--Writing: This paper is well written in English and is well structured.", "And there are two typos.", "One is in the second row of page 3, *… a more continuous decrease [in in] class labels …* and the other one is in the first paragraph of section 5.4, *… report state-of-art results [on on] three common long-tailed benchmarks …*.", "--Introduction and review: The authors do a comprehensive literature review, listing the main directions for solving the long-tailed recognition problem.", "They emphasis that these methods all jointly learn the representations and classifiers, which *make it unclear how the long-tailed recognition ability is achieved-is it from learning a better representation or by handling the data imbalance better via shifting classifier decision boundaries*.", "This motivate them to decouple representation learning and classification.", "--Experiment: Since this paper decouples the representation learning and classification to *make it clear* whether the long-tailed recognition ability is achieved from better representation or more balanced classifier, I recommend that authors show us some visualization of the feature map besides number on performance.", "Because I am confused and difficult to image what *better representation* actually looks like.", "The authors conduct experiment with ResNeXt-{10,50,101,151}, and mainly use ResNeXt-50 for analysis.", "Will other networks get similar results as that of ResNeXt-50 shown in Figure 1?", "When showing the results, like Figure 1, 2 and Table 2, it would be better to mention the parameters chosen for \tau-normalization and other methods.", "Conclusion: I tend to accept this paper since it is interesting and renews our understanding of the long-tailed recognition ability of neural network and sampling strategies.", "What*s more, he experiment is comprehensive and rigorous." ]
{ "criticism": 2, "example": 2, "importance_and_relevance": 3, "materials_and_methods": 13, "praise": 5, "presentation_and_reporting": 8, "results_and_discussion": 6, "suggestion_and_solution": 4, "total": 20 }
0.1
0.1
0.15
0.65
0.25
0.4
0.3
0.2
20
1
1
1
1
1
1
1
1
2.15
1.376661
0.773339
Decoupling Representation and Classifier for Long-Tailed Recognition
The long-tail distribution of the visual world poses great challenges for deep learning based classification models on how to handle the class imbalance problem. Existing solutions usually involve class-balancing strategies, e.g., by loss re-weighting, data re-sampling, or transfer learning from head- to tail-classes, but all of them adhere to the scheme of jointly learning representations and classifiers. In this work, we decouple the learning procedure into representation learning and classification, and systematically explore how different balancing strategies affect them for long-tailed recognition. The findings are surprising: (1) data imbalance might not be an issue in learning high-quality representations; (2) with representations learned with the simplest instance-balanced (natural) sampling, it is also possible to achieve strong long-tailed recognition ability at little to no cost by adjusting only the classifier. We conduct extensive experiments and set new state-of-the-art performance on common long-tailed benchmarks like ImageNet-LT, Places-LT and iNaturalist, showing that it is possible to outperform carefully designed losses, sampling strategies, even complex modules with memory, by using a straightforward approach that decouples representation and classification.
This paper proposes to tackle long-tailed classification by separately treating representation learning and the creation of a classifier for test time. They evaluate their method on several standard long-tailed classification datasets like ImageNet-LT or Places-LT. Pros: * Very well presented and clear * Thorough experiments with baselines and comparisons with competitors * Novel and efficient approach of redesigning the classifier as a post-processing step after the representation training Cons: * I did not find any single value of the "temperature" coefficient that you use for the different datasets! According to Fig 2, it should be around 0.7 for ImageNet-LT, but you should clearly specify the used values for all the datasets, for reproducibility. It is also important to know, as it has an impact on how useful this approach is in practice. If the coefficients are very different for all the datasets, then the method requires a validation set to find this hyperparameter. * As a middle point between NCM and cRT, you could also train a cosine classifier as done in the paper that you cite, "Dynamic few-shot visual learning without forgetting" by Gidaris et al. There is PyTorch code for it online. I am leaning towards acceptance as the method is clear, easy to implement, well studied through the experiments, and has good results on standard benchmarks. The paper also provides interesting insights about long-tailed recognition in general, like the effect of the different samplings with the proposed method.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDecoupling Representation and Classifier for Long-Tailed Recognition\nThe long-tail distribution of the visual world poses great challenges for deep learning based classification models on how to handle the class imbalance problem. Existing solutions usually involve class-balancing strategies, e.g., by loss re-weighting, data re-sampling, or transfer learning from head- to tail-classes, but all of them adhere to the scheme of jointly learning representations and classifiers. In this work, we decouple the learning procedure into representation learning and classification, and systematically explore how different balancing strategies affect them for long-tailed recognition. The findings are surprising: (1) data imbalance might not be an issue in learning high-quality representations; (2) with representations learned with the simplest instance-balanced (natural) sampling, it is also possible to achieve strong long-tailed recognition ability at little to no cost by adjusting only the classifier. We conduct extensive experiments and set new state-of-the-art performance on common long-tailed benchmarks like ImageNet-LT, Places-LT and iNaturalist, showing that it is possible to outperform carefully designed losses, sampling strategies, even complex modules with memory, by using a straightforward approach that decouples representation and classification.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes to tackle long-tailed classification by treating separately the representation learning and the creation of a classifier for test time. They evaluate their method on several standard long-tailed classification datasets like ImageNet-LT or Places-LT. Pros: * Very well presented and clear * Thorough experiments with baselines and comparisons with competitors * Novel and efficient approach of redesigning the classifier as a post-processing step after the representation training Cons: * I did not find any single value of the *temperature* coefficient that you use for the different datasets! 
According to Fig 2, it should be around 0.7 for ImageNet-LT but you should clearly specify the used values for all the datasets. For reproducibility. It is also important to know it as it has an impact on how useful is this approach in practice. Because if the coefficients are very different for all the datasets, then the method requires a validation set to find this hyperparameter. * As middle point between NCM and cRT, you could also train a cosine classifier as done in the paper that you cite *Dynamic few-shot visual learning without forgetting* by Gidaris et al. There is pytorch code for it online. I am leaning towards acceptance as the method is clear, easy to implement, well studied through the experiments and has good results on standard benchmarks. The paper also provides interesting insights about long-tailed recognition in general like the effect of the different samplings with the proposed method.", "role": "assistant" } ]
[ "This paper proposes to tackle long-tailed classification by treating separately the representation learning and the creation of a classifier for test time.", "They evaluate their method on several standard long-tailed classification datasets like ImageNet-LT or Places-LT. Pros: * Very well presented and clear * Thorough experiments with baselines and comparisons with competitors * Novel and efficient approach of redesigning the classifier as a post-processing step after the representation training Cons: * I did not find any single value of the *temperature* coefficient that you use for the different datasets!", "According to Fig 2, it should be around 0.7 for ImageNet-LT but you should clearly specify the used values for all the datasets.", "For reproducibility.", "It is also important to know it as it has an impact on how useful is this approach in practice.", "Because if the coefficients are very different for all the datasets, then the method requires a validation set to find this hyperparameter.", "* As middle point between NCM and cRT, you could also train a cosine classifier as done in the paper that you cite *Dynamic few-shot visual learning without forgetting* by Gidaris et al.", "There is pytorch code for it online.", "I am leaning towards acceptance as the method is clear, easy to implement, well studied through the experiments and has good results on standard benchmarks.", "The paper also provides interesting insights about long-tailed recognition in general like the effect of the different samplings with the proposed method." ]
{ "criticism": 1, "example": 1, "importance_and_relevance": 3, "materials_and_methods": 8, "praise": 3, "presentation_and_reporting": 3, "results_and_discussion": 3, "suggestion_and_solution": 3, "total": 10 }
0.1
0.1
0.3
0.8
0.3
0.3
0.3
0.3
10
1
1
1
1
1
1
1
1
2.5
2.357958
0.142042
Cross-lingual Alignment vs Joint Training: A Comparative Study and A Simple Unified Framework
Learning multilingual representations of text has proven a successful method for many cross-lingual transfer learning tasks. There are two main paradigms for learning such representations: (1) alignment, which maps different independently trained monolingual representations into a shared space, and (2) joint training, which directly learns unified multilingual representations using monolingual and cross-lingual objectives jointly. In this paper, we first conduct direct comparisons of representations learned using both of these methods across diverse cross-lingual tasks. Our empirical results reveal a set of pros and cons for both methods, and show that the relative performance of alignment versus joint training is task-dependent. Stemming from this analysis, we propose a simple and novel framework that combines these two previously mutually-exclusive approaches. Extensive experiments on various tasks demonstrate that our proposed framework alleviates limitations of both approaches, and outperforms existing methods on the MUSE bilingual lexicon induction (BLI) benchmark. We further show that our proposed framework can generalize to contextualized representations and achieves state-of-the-art results on the CoNLL cross-lingual NER benchmark.
This paper proposes a new and simple framework for learning cross-lingual embeddings that, based on well-supported insights, unifies two standard frameworks: joint learning and alignment-based approaches. While I like and acknowledge the fact that the paper examines these frameworks critically and has also some didactic value, I still have some concerns regarding the current experiments, and would like to see some additional analyses in the paper. Honestly, this type of work would work better as a short ACL/EMNLP paper, as the core methodological contribution is a very simple combination of the existing techniques. With joint methods, it is true that shared words can be used as implicit bilingual learning signal which gets propagated further in the model. Even in the case of alignment-based methods, it was shown that this signal can assist in learning shared cross-lingual spaces, but: 1) such spaces are of lower-quality than the spaces learned by relying on seed dictionaries (see e.g., the work of Vulic and Korhonen (ACL 2016)), 2) this is useful only for similar language pairs which use the same script. The paper fails to provide an insightful discussion on how the scores differ when we move towards more distant languages. For instance, English-Chinese is included as a more distant language pair, and the combined method seems to work fine, but the reader is left wondering why that happens. The same is true for English-Russian. The paper should definitely provide more information and insights here. One experiment that would be quite interesting imho is to take seed dictionaries to initialise the joint training method. It will not be unsupervised any more, but it would be very useful to report the differences in results between fully unsupervised joint initialisation (which, as mentioned should work only with more similar languages) and such weakly supervised init: e.g., the method could take all one-to-one translation pairs from the seed dictionary as *shared words*. It would also be interesting to combine such *shared words* and words that are really shared (identically spelled words) as initialisation and measure how it affects the results, also in light of differences in language distance. Adding one such experiment would make the paper more interesting. Some recent strong baselines are missing: e.g., I wonder how the model that does self-learning on top of seed dictionaries (similar to the work of Vulic et al., EMNLP 2019) compares to the proposed method. Also, can we use the same self-learning technique with the combined method here? Would that lead to improved results? Another work I would like to see comparisons to is the work of Zhang et al. (ACL 2019) and despite the fact that the authors explicitly stated that they decided not to compare to the work of Artetxe et al. (ACL 2019) as they see it as concurrent work, I would still like to see that comparison as the model of Artetxe et al. seems very relevant to the presented work. I do not see the extension to contextualized representations (described in Section 3.2) as a real contribution: this is a very straightforward method to apply an alignment-based method on multilingual BERT representations, and very similar techniques have been probed in previous work (e.g., by Aldarmaki & Diab), only the bilingual signal/dictionary came from parallel data instead. 
Finally, as mentioned before, one of the must-have experiments is including more (distant and diverse) language pairs and analysing the results across such language pairs, with an aim to further understand how the levels of sharing, over-sharing, and non-isomorphism affect performance. Also, while the authors state that reduced isomorphism is the key problem of alignment-based methods, I still fail to see how exactly the alignment refinement step does not suffer from that problem? More discussion is needed here. Other comments: - It would be useful to also point to the survey paper of Ruder et al. (JAIR 2019) where the difference between alignment-based and joint models is described in more detail and could inform an interested reader beyond the confines of the related work section in this paper. Similarly, differences between joint models and alignment-based models have also been analysed by Upadhyay et al. (ACL 2016); Vulic and Korhonen (ACL 2016). - I like how Figure 1 clearly visualizes the main intuitions behind the proposed framework, and how mitigating the oversharing problem leads to better alignments. This was very nice. - In light of the problems with silver standard MUSE datasets (see also the recent work of Kementchedjhieva, EMNLP 2019), I would suggest to run BLI experiments on more language pairs: e.g., there are BLI datasets of Glavas et al available for 28 language pairs.
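To make the "alignment" paradigm discussed in this record concrete, below is a minimal sketch of the standard orthogonal Procrustes step that maps one independently trained monolingual embedding space onto another using a seed dictionary. This is a generic illustration of the alignment family, not the paper's proposed unified framework; the function name and the use of NumPy are my own choices.

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal Procrustes alignment.

    X, Y: (n, d) arrays whose i-th rows are the source- and target-language
    vectors of the i-th seed-dictionary translation pair. Returns the
    orthogonal matrix W minimising ||X W - Y||_F, so source vectors can be
    mapped into the target space as x @ W.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt
```

Applying W to the full source vocabulary yields the shared space used for BLI-style retrieval; joint training, by contrast, learns a single shared space directly from monolingual and cross-lingual objectives.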
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nCross-lingual Alignment vs Joint Training: A Comparative Study and A Simple Unified Framework\nLearning multilingual representations of text has proven a successful method for many cross-lingual transfer learning tasks. There are two main paradigms for learning such representations: (1) alignment, which maps different independently trained monolingual representations into a shared space, and (2) joint training, which directly learns unified multilingual representations using monolingual and cross-lingual objectives jointly. In this paper, we first conduct direct comparisons of representations learned using both of these methods across diverse cross-lingual tasks. Our empirical results reveal a set of pros and cons for both methods, and show that the relative performance of alignment versus joint training is task-dependent. Stemming from this analysis, we propose a simple and novel framework that combines these two previously mutually-exclusive approaches. Extensive experiments on various tasks demonstrate that our proposed framework alleviates limitations of both approaches, and outperforms existing methods on the MUSE bilingual lexicon induction (BLI) benchmark. We further show that our proposed framework can generalize to contextualized representations and achieves state-of-the-art results on the CoNLL cross-lingual NER benchmark.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a new and simple framework for learning cross-lingual embeddings that, based on well-supported insights, unifies two standard frameworks: joint learning and alignment-based approaches. While I like and acknowledge the fact that the paper examines these frameworks critically and has also some didactic value, I still have some concerns regarding the current experiments, and would like to see some additional analyses in the paper. Honestly, this type of work would work better as a short ACL/EMNLP paper, as the core methodological contribution is a very simple combination of the existing techniques. 
With joint methods, it is true that shared words can be used as implicit bilingual learning signal which gets propagated further in the model. Even in the case of alignment-based methods, it was shown that this signal can assist in learning shared cross-lingual spaces, but: 1) such spaces are of lower-quality than the spaces learned by relying on seed dictionaries (see e.g., the work of Vulic and Korhonen (ACL 2016)), 2) this is useful only for similar language pairs which use the same script. The paper fails to provide an insightful discussion on how the scores differ when we move towards more distant languages. For instance, English-Chinese is included as a more distant language pair, and the combined method seems to work fine, but the reader is left wondering why that happens. The same is true for English-Russian. The paper should definitely provide more information and insights here. One experiment that would be quite interesting imho is to take seed dictionaries to initialise the joint training method. It will not be unsupervised any more, but it would be very useful to report the differences in results between fully unsupervised joint initialisation (which, as mentioned should work only with more similar languages) and such weakly supervised init: e.g., the method could take all one-to-one translation pairs from the seed dictionary as *shared words*. It would also be interesting to combine such *shared words* and words that are really shared (identically spelled words) as initialisation and measure how it affects the results, also in light of differences in language distance. Adding one such experiment would make the paper more interesting. Some recent strong baselines are missing: e.g., I wonder how the model that does self-learning on top of seed dictionaries (similar to the work of Vulic et al., EMNLP 2019) compares to the proposed method. Also, can we use the same self-learning technique with the combined method here? Would that lead to improved results? Another work I would like to see comparisons to is the work of Zhang et al. (ACL 2019) and despite the fact that the authors explicitly stated that they decided not to compare to the work of Artetxe et al. (ACL 2019) as they see it as concurrent work, I would still like to see that comparison as the model of Artetxe et al. seems very relevant to the presented work. I do not see the extension to contextualized representations (described in Section 3.2) as a real contribution: this is a very straightforward method to apply an alignment-based method on multilingual BERT representations, and very similar techniques have been probed in previous work (e.g., by Aldarmaki & Diab), only the bilingual signal/dictionary came from parallel data instead. Finally, as mentioned before, one of the must-have experiments is including more (distant and diverse) language pairs and analysing the results across such language pairs, with an aim to further understand how the levels of sharing, over-sharing, and non-isomorphism affect performance. Also, while the authors state that reduced isomorphism is the key problem of alignment-based methods, I still fail to see how exactly the alignment refinement step does not suffer from that problem? More discussion is needed here. Other comments: - It would be useful to also point to the survey paper of Ruder et al. 
(JAIR 2019) where the difference between alignment-based and joint models is described in more detail and could inform an interested reader beyond the confines of the related work section in this paper. Similarly, differences between joint models and alignment-based models have also been analysed by Upadhyay et al. (ACL 2016); Vulic and Korhonen (ACL 2016). - I like how Figure 1 clearly visualizes the main intuitions behind the proposed framework, and how mitigating the oversharing problem leads to better alignments. This was very nice. - In light of the problems with silver standard MUSE datasets (see also the recent work of Kementchedjhieva, EMNLP 2019), I would suggest to run BLI experiments on more language pairs: e.g., there are BLI datasets of Glavas et al available for 28 language pairs.", "role": "assistant" } ]
[ "This paper proposes a new and simple framework for learning cross-lingual embeddings that, based on well-supported insights, unifies two standard frameworks: joint learning and alignment-based approaches.", "While I like and acknowledge the fact that the paper examines these frameworks critically and has also some didactic value, I still have some concerns regarding the current experiments, and would like to see some additional analyses in the paper.", "Honestly, this type of work would work better as a short ACL/EMNLP paper, as the core methodological contribution is a very simple combination of the existing techniques.", "With joint methods, it is true that shared words can be used as implicit bilingual learning signal which gets propagated further in the model.", "Even in the case of alignment-based methods, it was shown that this signal can assist in learning shared cross-lingual spaces, but: 1) such spaces are of lower-quality than the spaces learned by relying on seed dictionaries (see e.g., the work of Vulic and Korhonen (ACL 2016)), 2) this is useful only for similar language pairs which use the same script.", "The paper fails to provide an insightful discussion on how the scores differ when we move towards more distant languages.", "For instance, English-Chinese is included as a more distant language pair, and the combined method seems to work fine, but the reader is left wondering why that happens.", "The same is true for English-Russian.", "The paper should definitely provide more information and insights here.", "One experiment that would be quite interesting imho is to take seed dictionaries to initialise the joint training method.", "It will not be unsupervised any more, but it would be very useful to report the differences in results between fully unsupervised joint initialisation (which, as mentioned should work only with more similar languages) and such weakly supervised init: e.g., the method could take all one-to-one translation pairs from the seed dictionary as *shared words*.", "It would also be interesting to combine such *shared words* and words that are really shared (identically spelled words) as initialisation and measure how it affects the results, also in light of differences in language distance.", "Adding one such experiment would make the paper more interesting.", "Some recent strong baselines are missing: e.g., I wonder how the model that does self-learning on top of seed dictionaries (similar to the work of Vulic et al., EMNLP 2019) compares to the proposed method.", "Also, can we use the same self-learning technique with the combined method here?", "Would that lead to improved results?", "Another work I would like to see comparisons to is the work of Zhang et al.", "(ACL 2019) and despite the fact that the authors explicitly stated that they decided not to compare to the work of Artetxe et al.", "(ACL 2019) as they see it as concurrent work, I would still like to see that comparison as the model of Artetxe et al.", "seems very relevant to the presented work.", "I do not see the extension to contextualized representations (described in Section 3.2) as a real contribution: this is a very straightforward method to apply an alignment-based method on multilingual BERT representations, and very similar techniques have been probed in previous work (e.g., by Aldarmaki & Diab), only the bilingual signal/dictionary came from parallel data instead.", "Finally, as mentioned before, one of the must-have experiments is including more (distant and diverse) language 
pairs and analysing the results across such language pairs, with an aim to further understand how the levels of sharing, over-sharing, and non-isomorphism affect performance.", "Also, while the authors state that reduced isomorphism is the key problem of alignment-based methods, I still fail to see how exactly the alignment refinement step does not suffer from that problem?", "More discussion is needed here.", "Other comments: - It would be useful to also point to the survey paper of Ruder et al.", "(JAIR 2019) where the difference between alignment-based and joint models is described in more detail and could inform an interested reader beyond the confines of the related work section in this paper.", "Similarly, differences between joint models and alignment-based models have also been analysed by Upadhyay et al.", "(ACL 2016); Vulic and Korhonen (ACL 2016).", "- I like how Figure 1 clearly visualizes the main intuitions behind the proposed framework, and how mitigating the oversharing problem leads to better alignments.", "This was very nice.", "- In light of the problems with silver standard MUSE datasets (see also the recent work of Kementchedjhieva, EMNLP 2019), I would suggest to run BLI experiments on more language pairs: e.g., there are BLI datasets of Glavas et al available for 28 language pairs." ]
{ "criticism": 5, "example": 1, "importance_and_relevance": 7, "materials_and_methods": 20, "praise": 4, "presentation_and_reporting": 9, "results_and_discussion": 7, "suggestion_and_solution": 13, "total": 31 }
0.16129
0.032258
0.225806
0.645161
0.129032
0.290323
0.225806
0.419355
31
1
1
1
1
1
1
1
1
2.129032
-2.984475
5.113507
Uncertainty-guided Continual Learning with Bayesian Neural Networks
Continual learning aims to learn new tasks without forgetting previously learned ones. This is especially challenging when one cannot access data from previous tasks and when the model has a fixed capacity. Current regularization-based continual learning algorithms need an external representation and extra computation to measure the parameters' \textit{importance}. In contrast, we propose Uncertainty-guided Continual Bayesian Neural Networks (UCB) where the learning rate adapts according to the uncertainty defined in the probability distribution of the weights in networks. Uncertainty is a natural way to identify \textit{what to remember} and \textit{what to change} as we continually learn, and thus mitigate catastrophic forgetting. We also show a variant of our model, which uses uncertainty for weight pruning and retains task performance after pruning by saving binary masks per task. We evaluate our UCB approach extensively on diverse object classification datasets with short and long sequences of tasks and report superior or on-par performance compared to existing approaches. Additionally, we show that our model does not necessarily need task information at test time, i.e.~it does not presume knowledge of which task a sample belongs to.
The authors propose a novel method for continual learning with neural networks based on a Bayesian approach. The idea consists in working with Bayesian neural networks, using the Bayes by back-prop approach in which a factorized Gaussian variational distribution is used to approximate the true posterior. To address the continual learning setting, the authors propose to multiply the learning rate of the mean parameters in the posterior approximation by the corresponding standard deviation parameter in the posterior approximation, while the learning rate for the variance parameters in the posterior approximation is not changed. The authors also consider a version of this method which freezes the mean and variance variational parameters when the signal-to-noise ratio is high. The proposed method is evaluated in exhaustive experiments, showing state-of-the-art results. Clarity: The paper is clearly written and easy to read. The method proposed is well described and it would be easy to reproduce. Quality: The proposed method is well justified and the experiments performed clearly illustrate the gains with respect to previous methods. Novelty: The proposed method is novel, to the best of my knowledge. The methodological contributions do not seem very sophisticated, but the experiments show that the proposed method, despite being very simple, works very well in practice. Significance: The experiments show that the proposed method achieves state-of-the-art results when compared with a very large number of baselines. This indicates that the proposed method will be relevant to the community. In my opinion, this work is highly significant.
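To illustrate the update rule the review describes, here is a rough sketch of one SGD step for a Bayes-by-backprop layer in which the mean's step size is scaled by the current posterior standard deviation while the variance parameter keeps the base learning rate. It assumes the usual softplus parameterisation sigma = log(1 + exp(rho)); the function name and the plain-SGD form are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def uncertainty_scaled_step(mu, rho, lr=1e-3):
    """One SGD step on variational parameters (mu, rho) of a Gaussian posterior.

    The effective learning rate of mu is lr * sigma: weights the model is
    uncertain about (large sigma) move freely, while confident weights barely
    move, which is the uncertainty-guided behaviour described above.
    Assumes mu.grad and rho.grad have been populated by a backward pass.
    """
    with torch.no_grad():
        sigma = F.softplus(rho)          # posterior std, always positive
        mu -= lr * sigma * mu.grad       # mean step modulated by uncertainty
        rho -= lr * rho.grad             # variance keeps the base rate
        mu.grad.zero_()
        rho.grad.zero_()
```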
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nUncertainty-guided Continual Learning with Bayesian Neural Networks\nContinual learning aims to learn new tasks without forgetting previously learned ones. This is especially challenging when one cannot access data from previous tasks and when the model has a fixed capacity. Current regularization-based continual learning algorithms need an external representation and extra computation to measure the parameters* \textit{importance}. In contrast, we propose Uncertainty-guided Continual Bayesian Neural Networks (UCB) where the learning rate adapts according to the uncertainty defined in the probability distribution of the weights in networks. Uncertainty is a natural way to identify \textit{what to remember} and \textit{what to change} as we continually learn, and thus mitigate catastrophic forgetting. We also show a variant of our model, which uses uncertainty for weight pruning and retains task performance after pruning by saving binary masks per tasks. We evaluate our UCB approach extensively on diverse object classification datasets with short and long sequences of tasks and report superior or on-par performance compared to existing approaches. Additionally, we show that our model does not necessarily need task information at test time, i.e.~it does not presume knowledge of which task a sample belongs to.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors propose a novel method for continual learning with neural networks based on a Bayesian approach. The idea consists in working with Bayesian neural networks, using the Bayes by back-prop approach in which a factorized Gaussian variational distribution is used to approximate the true posterior. To address the continual learning setting, the authors propose to multiply the learning rate of the mean parameters in the posterior approximation by the corresponding standard deviation parameter in the posterior approximation, while the learning rate for the variance parameters in the posterior approximation is not changed. 
The authors also consider a version of his method which freezes the mean and variance variational parameters when the signal to noise ratio is high. The proposed method is evaluated in exhaustive experiments, showing state-of-the-art results. Clarity: The paper is clearly written and easy to read. The method proposed is well described and it would be easy to reproduce. Quality: The proposed method is well justified and the experiments performed clearly illustrate the gains with respect to previous methods. Novelty: The proposed method is novel up to my knowledge. The methodological contributions do not seem very sophisticated, but the experiments show that the proposed method, despite being very simple, works very well in practice. Significance: The experiments show that the proposed method achieves state of the art results when compared with a very large number of baselines. This indicates that the proposed method will be relevant to the community. In my opinion, this work is highly significant.", "role": "assistant" } ]
[ "The authors propose a novel method for continual learning with neural networks based on a Bayesian approach.", "The idea consists in working with Bayesian neural networks, using the Bayes by back-prop approach in which a factorized Gaussian variational distribution is used to approximate the true posterior.", "To address the continual learning setting, the authors propose to multiply the learning rate of the mean parameters in the posterior approximation by the corresponding standard deviation parameter in the posterior approximation, while the learning rate for the variance parameters in the posterior approximation is not changed.", "The authors also consider a version of his method which freezes the mean and variance variational parameters when the signal to noise ratio is high.", "The proposed method is evaluated in exhaustive experiments, showing state-of-the-art results.", "Clarity: The paper is clearly written and easy to read.", "The method proposed is well described and it would be easy to reproduce.", "Quality: The proposed method is well justified and the experiments performed clearly illustrate the gains with respect to previous methods.", "Novelty: The proposed method is novel up to my knowledge.", "The methodological contributions do not seem very sophisticated, but the experiments show that the proposed method, despite being very simple, works very well in practice.", "Significance: The experiments show that the proposed method achieves state of the art results when compared with a very large number of baselines.", "This indicates that the proposed method will be relevant to the community.", "In my opinion, this work is highly significant." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 4, "materials_and_methods": 11, "praise": 6, "presentation_and_reporting": 2, "results_and_discussion": 5, "suggestion_and_solution": 1, "total": 13 }
0.076923
0
0.307692
0.846154
0.461538
0.153846
0.384615
0.076923
13
1
0
1
1
1
1
1
1
2.307692
2.307692
0
Picking Winning Tickets Before Training by Preserving Gradient Flow
Overparameterization has been shown to benefit both the optimization and generalization of neural networks, but large networks are resource hungry at both training and test time. Network pruning can reduce test-time resource requirements, but is typically applied to trained networks and therefore cannot avoid the expensive training process. We aim to prune networks at initialization, thereby saving resources at training time as well. Specifically, we argue that efficient training requires preserving the gradient flow through the network. This leads to a simple but effective pruning criterion we term Gradient Signal Preservation (GraSP). We empirically investigate the effectiveness of the proposed method with extensive experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet, using VGGNet and ResNet architectures. Our method can prune 80% of the weights of a VGG-16 network on ImageNet at initialization, with only a 1.6% drop in top-1 accuracy. Moreover, our method achieves significantly better performance than the baseline at extreme sparsity levels.
This paper introduces a method to prune networks at initialization in a way that (mostly) preserves the gradient flow through the resulting pruned network. This is a direct improvement over previous methods (e.g. SNIP) which have no guarantees that pruned connections will break the gradient flow and thereby harm learning. I quite like this paper, the motivation and results are convincing and it is well presented. The writing is excellent for most of the paper. From section 5 onwards the writing does need quite a bit of editing, as its quality is significantly reduced from what came before. Some detailed comments: - Figure 1 is very nice and really clarifies the idea! - In paragraph below Equation (8): what does *can be computed by backward twice* mean? - Please specify where the equalities in equation (9) are coming from. - Table 3 & 4: Why are the pruning ratios different for each model? - Table 3: Why are values missing for the baseline for 80% and 90%? - Section 5.2: *We observed that, the main bottleneck or pruned... when deriving the pruning criteria*: it*s not clear where this conclusion is coming from. - Table 5 has no batch size results, even though you*re referencing them in the text. And some minor comments to help with the writing: - Intro: *As shown in Dey et al. (2019) that with pre-specified sparsity, they can achieve* would read better as *As shown by Dey et al. (2019), with pre-specified sparsity one can achieve* - Equation (3): Clarify that this is a function of - Sentence below Equation (6): *of the pruned network, and thus our goal* remove the *and thus* - Table 1: Specify that you*re reporting accuracy. - Section 4.1: *e.g. wide ResNet (Zagaruyko & Komodakis, 2016), and thus we can regard* remove the *and thus* - Sentence below equation (9): *encouraging the eigenspace of Theta align* add a *to* before *align* - Sentence before section 5: *it will encourage the eigenspace of the NTK distributing large eigenvalues in the direction of Y, which will in turn accelerates the decrease of the loss (Arora et al., 2019) and benefits to the optimization in A* would read better as *it will encourage the eigenspace of the NTK to distribute large eigenvalues in the direction of Y, which in turn accelerates the decrease of the loss (Arora et al., 2019) and benefits the optimization in A* - Throughout section 5, write it in present tense rather than past tense. e.g. 
*In this section, we conduct various experiments* instead of *In this section, we conducted various experiments* - Sentence below table 2: you have *the the* - Second paragraph of section 5.1: *We can observe GraSP outperform random pruning clearly* would read better as *We can observe GraSP clearly outperforms random pruning* - Second paragraph of section 5.1: *In the next, we further compared* remove *In the next* - Second paragraph of section 5.1: *Besides, we further experimented with the late resetting* remove *Besides* - Paragraph above section 5.2: *GraSP surpassing SNIP* use *surpasses* instead - Paragraph above section 5.2: *investigate the reasons behind in Section 5.2 for promoting better understanding* would read better as *investigate the reasons behind this in Section 5.2 for obtaining a better understanding* - Section 5.2: *We observed that, the main bottleneck* -> *We observe that the main bottleneck* - Section 5.2: *Besides, we also plotted the the gradient norm of the pruned*, remove *Besides* and the extra *the* - Section 5.2: *the average of the gradients of the entire dataset* use *over the entire dataset* - Section 5.2: *hopefully more training progress can make as evidenced* would read better as *hopefully more training progress can be made as evidenced* - Section 5.3 title would be better using *Visualizing* instead of *Visualize* - Section 5.3: Join the first two sentences with a comma into a single sentence. - Section 5.3: *In contrast, SNIP are more likely* -> In contrast, SNIP is more likely* - Section 5.4: *for ablation study* would read better as *via ablations* - Section 5.4: *we tested GraSP with three different initialization methods;* use a *:* instead of *;* - Section 6: *Besides, readers may notice that*, remove the *Besides* - Section 6: *traditional pruning algorithms while still enjoy the cheaper training cost. As an evidence,* would read better as *traditional pruning algorithms while still enjoying cheaper training costs. As evidence,* - Your citation for Evci et al. (2019) is missing the publication venue/arxiv ID.
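Regarding the reviewer's question about "can be computed by backward twice": one plausible reading is a Hessian-gradient product obtained by differentiating the squared gradient norm, which requires a second backward pass through the first gradient. The sketch below shows only that mechanism; the exact scoring and sign convention used to rank weights should be taken from the paper, and the helper name is mine.

```python
import torch

def hessian_gradient_product(loss, params):
    """Compute (up to a constant factor) H g for each parameter tensor.

    First backward: g = dL/dtheta, built with create_graph=True so it is
    itself differentiable. Second backward: d(g.g)/dtheta = 2 H g.
    GraSP-style criteria then score each weight using terms of the form
    theta * (H g).
    """
    grads = torch.autograd.grad(loss, params, create_graph=True)
    grad_norm_sq = sum((g * g).sum() for g in grads)
    hg = torch.autograd.grad(grad_norm_sq, params)   # second backward pass
    return hg
```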
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPicking Winning Tickets Before Training by Preserving Gradient Flow\nOverparameterization has been shown to benefit both the optimization and generalization of neural networks, but large networks are resource hungry at both training and test time. Network pruning can reduce test-time resource requirements, but is typically applied to trained networks and therefore cannot avoid the expensive training process. We aim to prune networks at initialization, thereby saving resources at training time as well. Specifically, we argue that efficient training requires preserving the gradient flow through the network. This leads to a simple but effective pruning criterion we term Gradient Signal Preservation (GraSP). We empirically investigate the effectiveness of the proposed method with extensive experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet, using VGGNet and ResNet architectures. Our method can prune 80% of the weights of a VGG-16 network on ImageNet at initialization, with only a 1.6% drop in top-1 accuracy. Moreover, our method achieves significantly better performance than the baseline at extreme sparsity levels.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper introduces a method to prune networks at initialization in a way that (mostly) preserves the gradient flow through the resulting pruned network. This is a direct improvement over previous methods (e.g. SNIP) which have no guarantees that pruned connections will break the gradient flow and thereby harm learning. I quite like this paper, the motivation and results are convincing and it is well presented. The writing is excellent for most of the paper. From section 5 onwards the writing does need quite a bit of editing, as its quality is significantly reduced from what came before. Some detailed comments: - Figure 1 is very nice and really clarifies the idea! - In paragraph below Equation (8): what does *can be computed by backward twice* mean? - Please specify where the equalities in equation (9) are coming from. 
- Table 3 & 4: Why are the pruning ratios different for each model? - Table 3: Why are values missing for the baseline for 80% and 90%? - Section 5.2: *We observed that, the main bottleneck or pruned... when deriving the pruning criteria*: it*s not clear where this conclusion is coming from. - Table 5 has no batch size results, even though you*re referencing them in the text. And some minor comments to help with the writing: - Intro: *As shown in Dey et al. (2019) that with pre-specified sparsity, they can achieve* would read better as *As shown by Dey et al. (2019), with pre-specified sparsity one can achieve* - Equation (3): Clarify that this is a function of - Sentence below Equation (6): *of the pruned network, and thus our goal* remove the *and thus* - Table 1: Specify that you*re reporting accuracy. - Section 4.1: *e.g. wide ResNet (Zagaruyko & Komodakis, 2016), and thus we can regard* remove the *and thus* - Sentence below equation (9): *encouraging the eigenspace of Theta align* add a *to* before *align* - Sentence before section 5: *it will encourage the eigenspace of the NTK distributing large eigenvalues in the direction of Y, which will in turn accelerates the decrease of the loss (Arora et al., 2019) and benefits to the optimization in A* would read better as *it will encourage the eigenspace of the NTK to distribute large eigenvalues in the direction of Y, which in turn accelerates the decrease of the loss (Arora et al., 2019) and benefits the optimization in A* - Throughout section 5, write it in present tense rather than past tense. e.g. *In this section, we conduct various experiments* instead of *In this section, we conducted various experiments* - Sentence below table 2: you have *the the* - Second paragraph of section 5.1: *We can observe GraSP outperform random pruning clearly* would read better as *We can observe GraSP clearly outperforms random pruning* - Second paragraph of section 5.1: *In the next, we further compared* remove *In the next* - Second paragraph of section 5.1: *Besides, we further experimented with the late resetting* remove *Besides* - Paragraph above section 5.2: *GraSP surpassing SNIP* use *surpasses* instead - Paragraph above section 5.2: *investigate the reasons behind in Section 5.2 for promoting better understanding* would read better as *investigate the reasons behind this in Section 5.2 for obtaining a better understanding* - Section 5.2: *We observed that, the main bottleneck* -> *We observe that the main bottleneck* - Section 5.2: *Besides, we also plotted the the gradient norm of the pruned*, remove *Besides* and the extra *the* - Section 5.2: *the average of the gradients of the entire dataset* use *over the entire dataset* - Section 5.2: *hopefully more training progress can make as evidenced* would read better as *hopefully more training progress can be made as evidenced* - Section 5.3 title would be better using *Visualizing* instead of *Visualize* - Section 5.3: Join the first two sentences with a comma into a single sentence. - Section 5.3: *In contrast, SNIP are more likely* -> In contrast, SNIP is more likely* - Section 5.4: *for ablation study* would read better as *via ablations* - Section 5.4: *we tested GraSP with three different initialization methods;* use a *:* instead of *;* - Section 6: *Besides, readers may notice that*, remove the *Besides* - Section 6: *traditional pruning algorithms while still enjoy the cheaper training cost. 
As an evidence,* would read better as *traditional pruning algorithms while still enjoying cheaper training costs. As evidence,* - Your citation for Evci et al. (2019) is missing the publication venue/arxiv ID.", "role": "assistant" } ]
[ "This paper introduces a method to prune networks at initialization in a way that (mostly) preserves the gradient flow through the resulting pruned network.", "This is a direct improvement over previous methods (e.g.", "SNIP) which have no guarantees that pruned connections will break the gradient flow and thereby harm learning.", "I quite like this paper, the motivation and results are convincing and it is well presented.", "The writing is excellent for most of the paper.", "From section 5 onwards the writing does need quite a bit of editing, as its quality is significantly reduced from what came before.", "Some detailed comments: - Figure 1 is very nice and really clarifies the idea!", "- In paragraph below Equation (8): what does *can be computed by backward twice* mean?", "- Please specify where the equalities in equation (9) are coming from.", "- Table 3 & 4: Why are the pruning ratios different for each model?", "- Table 3: Why are values missing for the baseline for 80% and 90%?", "- Section 5.2: *We observed that, the main bottleneck or pruned... when deriving the pruning criteria*: it*s not clear where this conclusion is coming from.", "- Table 5 has no batch size results, even though you*re referencing them in the text.", "And some minor comments to help with the writing: - Intro: *As shown in Dey et al.", "(2019) that with pre-specified sparsity, they can achieve* would read better as *As shown by Dey et al.", "(2019), with pre-specified sparsity one can achieve* - Equation (3): Clarify that this is a function of - Sentence below Equation (6): *of the pruned network, and thus our goal* remove the *and thus* - Table 1: Specify that you*re reporting accuracy.", "- Section 4.1: *e.g.", "wide ResNet (Zagaruyko & Komodakis, 2016), and thus we can regard* remove the *and thus* - Sentence below equation (9): *encouraging the eigenspace of Theta align* add a *to* before *align* - Sentence before section 5: *it will encourage the eigenspace of the NTK distributing large eigenvalues in the direction of Y, which will in turn accelerates the decrease of the loss (Arora et al., 2019) and benefits to the optimization in A* would read better as *it will encourage the eigenspace of the NTK to distribute large eigenvalues in the direction of Y, which in turn accelerates the decrease of the loss (Arora et al., 2019) and benefits the optimization in A* - Throughout section 5, write it in present tense rather than past tense.", "e.g.", "*In this section, we conduct various experiments* instead of *In this section, we conducted various experiments* - Sentence below table 2: you have *the the* - Second paragraph of section 5.1: *We can observe GraSP outperform random pruning clearly* would read better as *We can observe GraSP clearly outperforms random pruning* - Second paragraph of section 5.1: *In the next, we further compared* remove *In the next* - Second paragraph of section 5.1: *Besides, we further experimented with the late resetting* remove *Besides* - Paragraph above section 5.2: *GraSP surpassing SNIP* use *surpasses* instead - Paragraph above section 5.2: *investigate the reasons behind in Section 5.2 for promoting better understanding* would read better as *investigate the reasons behind this in Section 5.2 for obtaining a better understanding* - Section 5.2: *We observed that, the main bottleneck* -> *We observe that the main bottleneck* - Section 5.2: *Besides, we also plotted the the gradient norm of the pruned*, remove *Besides* and the extra *the* - Section 5.2: *the average of 
the gradients of the entire dataset* use *over the entire dataset* - Section 5.2: *hopefully more training progress can make as evidenced* would read better as *hopefully more training progress can be made as evidenced* - Section 5.3 title would be better using *Visualizing* instead of *Visualize* - Section 5.3: Join the first two sentences with a comma into a single sentence.", "- Section 5.3: *In contrast, SNIP are more likely* -> In contrast, SNIP is more likely* - Section 5.4: *for ablation study* would read better as *via ablations* - Section 5.4: *we tested GraSP with three different initialization methods;* use a *:* instead of *;* - Section 6: *Besides, readers may notice that*, remove the *Besides* - Section 6: *traditional pruning algorithms while still enjoy the cheaper training cost.", "As an evidence,* would read better as *traditional pruning algorithms while still enjoying cheaper training costs.", "As evidence,* - Your citation for Evci et al.", "(2019) is missing the publication venue/arxiv ID." ]
{ "criticism": 2, "example": 8, "importance_and_relevance": 0, "materials_and_methods": 11, "praise": 3, "presentation_and_reporting": 14, "results_and_discussion": 6, "suggestion_and_solution": 10, "total": 24 }
0.083333
0.333333
0
0.458333
0.125
0.583333
0.25
0.416667
24
1
1
0
1
1
1
1
1
2.25
0.340326
1.909674
Towards neural networks that provably know when they don't know
It has recently been shown that ReLU networks produce arbitrarily over-confident predictions far away from the training data. Thus, ReLU networks do not know when they don't know. However, this is a highly important property in safety critical applications. In the context of out-of-distribution detection (OOD) there have been a number of proposals to mitigate this problem but none of them are able to make any mathematical guarantees. In this paper we propose a new approach to OOD which overcomes both problems. Our approach can be used with ReLU networks and provides provably low confidence predictions far away from the training data as well as the first certificates for low confidence predictions in a neighborhood of an out-distribution point. In the experiments we show that state-of-the-art methods fail in this worst-case setting whereas our model can guarantee its performance while retaining state-of-the-art OOD performance.
The authors present a novel approach for OOD detection; in particular their approach comes with worst-case guarantees without compromising on performance. The manuscript is clearly written and I only have some concerns regarding the evaluation. First, while the authors include MCD as an uncertainty-aware training approach, I miss more state-of-the-art methods with substantially better OOD performance, including Evidential Deep Learning (Sensoy et al., NeurIPS 2018) and Deep Ensembles. In particular a comparison to EDL would be interesting, since a similar entropy-encouraging term in the loss function is used during training, resulting in maximum entropy for OOD samples. Second, I would have liked to see precision and recall of the OOD detection task in addition to AUC, allowing a more meaningful/complete comparison between the approaches.
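For readers unfamiliar with the "entropy-encouraging term" the reviewer mentions, here is a generic sketch of a training objective that keeps standard cross-entropy on in-distribution data while pushing predictions on out-distribution samples toward the uniform (maximum-entropy) output. It is a common pattern in this literature, not the exact objective of this paper or of EDL; the function name and the weighting are illustrative.

```python
import torch.nn.functional as F

def ood_aware_loss(logits_in, labels_in, logits_out, lam=0.5):
    """Cross-entropy on in-distribution data plus a uniform-target term on
    out-distribution data; the second term is minimised when the OOD softmax
    output is uniform, i.e. has maximum entropy."""
    ce_in = F.cross_entropy(logits_in, labels_in)
    log_p_out = F.log_softmax(logits_out, dim=1)
    ce_uniform = -log_p_out.mean()   # cross-entropy against the uniform distribution
    return ce_in + lam * ce_uniform
```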
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTowards neural networks that provably know when they don*t know\nIt has recently been shown that ReLU networks produce arbitrarily over-confident predictions far away from the training data. Thus, ReLU networks do not know when they don*t know. However, this is a highly important property in safety critical applications. In the context of out-of-distribution detection (OOD) there have been a number of proposals to mitigate this problem but none of them are able to make any mathematical guarantees. In this paper we propose a new approach to OOD which overcomes both problems. Our approach can be used with ReLU networks and provides provably low confidence predictions far away from the training data as well as the first certificates for low confidence predictions in a neighborhood of an out-distribution point. In the experiments we show that state-of-the-art methods fail in this worst-case setting whereas our model can guarantee its performance while retaining state-of-the-art OOD performance.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors present a novel approach for OOD detection; in particular their approach comes with worst-case guarantees without compromising on performance. The manuscript is clearly written and I only have some concerns regarding the evaluation. First, while the authors include with MCD an uncertainty-aware training approach, I miss more state-of-the-art methods with substantially better OOD performance, including Evidential Deep Learning (Sensoy et al, NeurIPS 2018) and Deep Ensembles. In particular a comparison to EDL would be interesting, since a similar entropy-encouraging term in the loss function is used during training, resulting is maximum entropy for OOD samples. Second, I would have liked to see precision and recall of the OOD detection task in addition to AUC, allowing a more meaningful/complete comparison between the approaches.", "role": "assistant" } ]
[ "The authors present a novel approach for OOD detection; in particular their approach comes with worst-case guarantees without compromising on performance.", "The manuscript is clearly written and I only have some concerns regarding the evaluation.", "First, while the authors include with MCD an uncertainty-aware training approach, I miss more state-of-the-art methods with substantially better OOD performance, including Evidential Deep Learning (Sensoy et al, NeurIPS 2018) and Deep Ensembles.", "In particular a comparison to EDL would be interesting, since a similar entropy-encouraging term in the loss function is used during training, resulting is maximum entropy for OOD samples.", "Second, I would have liked to see precision and recall of the OOD detection task in addition to AUC, allowing a more meaningful/complete comparison between the approaches." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 5, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 2, "suggestion_and_solution": 2, "total": 5 }
0.4
0
0.2
1
0.4
0.2
0.4
0.4
5
1
0
1
1
1
1
1
1
3
1.989924
1.010076
HiLLoC: lossless image compression with hierarchical latent variable models
We make the following striking observation: fully convolutional VAE models trained on 32x32 ImageNet can generalize well, not just to 64x64 but also to far larger photographs, with no changes to the model. We use this property, applying fully convolutional models to lossless compression, demonstrating a method to scale the VAE-based *Bits-Back with ANS* algorithm for lossless compression to large color photographs, and achieving state of the art for compression of full size ImageNet images. We release Craystack, an open source library for convenient prototyping of lossless compression using probabilistic models, along with full implementations of all of our compression results.
This paper proposes a method for lossless image compression consisting of a VAE and using a bits-back version of ANS. The results are very impressive on ImageNet (but maybe not so impressive on the other benchmarks). The authors also discuss how to speed up inference and present some frightening runtime numbers for the serial method, and some better numbers for the vectorized version, though they're nowhere close to being practical. I think this paper should be accepted. It has a better description of the BB ANS algorithm than I have read before, and it's a truly interesting direction for the field, despite the lack of immediate applicability. If we are to accept this paper, I suggest the authors put a full description of the neural network used (it's barely mentioned). I think the authors also need to disclose how long it took to compress an average ImageNet image (looking at the runtime numbers for 128x128 pixels is scary, but at least we'd get a better picture on the feasibility). Overall, due to the fact that the authors pledge to open-source the framework, I think some of the details will be found in the code, once released. I think this is an important step because there are so many details in this paper that one cannot reasonably reproduce the work by simply reading the text of this paper.
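Since the review highlights the paper's description of BB-ANS, a bare-bones sketch of the ANS half may help orient readers. The version below is a non-streaming range-ANS coder using Python's arbitrary-precision integers, with no renormalisation, no bits-back step, and no latent variables, so it only illustrates the stack-like encode/decode mechanism that Bits-Back ANS builds on; the frequency handling and names are my own simplifications.

```python
def rans_encode(symbols, freqs, precision=16):
    """Push symbols onto an integer 'stack'. freqs must sum to 2**precision."""
    cum = [0]
    for f in freqs:
        cum.append(cum[-1] + f)
    x = 1
    # Encode in reverse so that decoding (which pops LIFO) returns symbols in order.
    for s in reversed(symbols):
        x = (x // freqs[s]) * (1 << precision) + (x % freqs[s]) + cum[s]
    return x

def rans_decode(x, freqs, n, precision=16):
    """Pop n symbols back off the integer produced by rans_encode."""
    cum = [0]
    for f in freqs:
        cum.append(cum[-1] + f)
    mask = (1 << precision) - 1
    out = []
    for _ in range(n):
        slot = x & mask
        s = next(i for i in range(len(freqs)) if cum[i] <= slot < cum[i + 1])
        x = freqs[s] * (x >> precision) + slot - cum[s]
        out.append(s)
    return out
```

In BB-ANS, the same stack is additionally "popped" to sample latent variables under the approximate posterior, which is where the bits-back saving comes from; that part is omitted here.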
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nHiLLoC: lossless image compression with hierarchical latent variable models\nWe make the following striking observation: fully convolutional VAE models trained on 32x32 ImageNet can generalize well, not just to 64x64 but also to far larger photographs, with no changes to the model. We use this property, applying fully convolutional models to lossless compression, demonstrating a method to scale the VAE-based *Bits-Back with ANS* algorithm for lossless compression to large color photographs, and achieving state of the art for compression of full size ImageNet images. We release Craystack, an open source library for convenient prototyping of lossless compression using probabilistic models, along with full implementations of all of our compression results.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a method for lossless image compression consisting of a VAE and using a bits-back version of ANS. The results are very impressive on a ImageNet (but maybe not so impressive on the other benchmarks). The authors also discuss how to speed up inference and present some frightening runtime numbers for the serial method, and some better numbers for the vectorized version, though they*re nowhere close to being practical. I think this paper should be accepted. It has a better description of the BB ANS algorithm than I have read before, and it*s a truly interesting direction for the field, despite the lack of immediate applicability. If we are to accept this paper, I suggest the authors put a full description of the neural network used (it*s barely mentioned). I think the authors also need to disclose how long it took to compress an average imagenet image (looking at the runtime numbers for 128x128 pixels is scary, but at least we*d get a better picture on the feasability). Overall, due to the fact that the authors pledge to open source the framework, I think some of the details will be found in the code, once released. 
I think this is an important step because there are so many details in this paper that one cannot reasonably reproduce the work by simply reading the text of this paper.", "role": "assistant" } ]
[ "This paper proposes a method for lossless image compression consisting of a VAE and using a bits-back version of ANS.", "The results are very impressive on a ImageNet (but maybe not so impressive on the other benchmarks).", "The authors also discuss how to speed up inference and present some frightening runtime numbers for the serial method, and some better numbers for the vectorized version, though they*re nowhere close to being practical.", "I think this paper should be accepted.", "It has a better description of the BB ANS algorithm than I have read before, and it*s a truly interesting direction for the field, despite the lack of immediate applicability.", "If we are to accept this paper, I suggest the authors put a full description of the neural network used (it*s barely mentioned).", "I think the authors also need to disclose how long it took to compress an average imagenet image (looking at the runtime numbers for 128x128 pixels is scary, but at least we*d get a better picture on the feasability).", "Overall, due to the fact that the authors pledge to open source the framework, I think some of the details will be found in the code, once released.", "I think this is an important step because there are so many details in this paper that one cannot reasonably reproduce the work by simply reading the text of this paper." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 4, "praise": 3, "presentation_and_reporting": 5, "results_and_discussion": 2, "suggestion_and_solution": 4, "total": 9 }
0.222222
0
0.111111
0.444444
0.333333
0.555556
0.222222
0.444444
9
1
0
1
1
1
1
1
1
2.333333
2.080814
0.252519
IMPACT: Importance Weighted Asynchronous Architectures with Clipped Target Networks
The practical usage of reinforcement learning agents is often bottlenecked by the duration of training time. To accelerate training, practitioners often turn to distributed reinforcement learning architectures to parallelize and accelerate the training process. However, modern methods for scalable reinforcement learning (RL) often tradeoff between the throughput of samples that an RL agent can learn from (sample throughput) and the quality of learning from each sample (sample efficiency). In these scalable RL architectures, as one increases sample throughput (i.e. increasing parallelization in IMPALA (Espeholt et al., 2018)), sample efficiency drops significantly. To address this, we propose a new distributed reinforcement learning algorithm, IMPACT. IMPACT extends PPO with three changes: a target network for stabilizing the surrogate objective, a circular buffer, and truncated importance sampling. In discrete action-space environments, we show that IMPACT attains higher reward and, simultaneously, achieves up to 30% decrease in training wall-time than that of IMPALA. For continuous control environments, IMPACT trains faster than existing scalable agents while preserving the sample efficiency of synchronous PPO.
The paper studies a novel approach to distributed RL training that combines the data reuse of PPO with the asynchronous updates of IMPALA. The main contribution is the observation that using a target network is necessary for achieving stable learning. I think this is an important result which seems to be validated by another ICLR submission (https://openreview.net/forum?id=SylOlp4FvH). The experimental section could definitely be improved--I was hoping to see more results on Atari or DMLab. Two comments: The V-trace equations (pages 3 and 5) don't mention any clipping of the importance weights c and rho--can you clarify whether this is a typo or whether you don't use clipping? It would be great to see experiments showing how learning curves scale with the number of workers. ----------------------------------------------------------------------------------- Thanks for clarifying and for the extra experiments. I'm keeping my score as I still think it's appropriate.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nIMPACT: Importance Weighted Asynchronous Architectures with Clipped Target Networks\nThe practical usage of reinforcement learning agents is often bottlenecked by the duration of training time. To accelerate training, practitioners often turn to distributed reinforcement learning architectures to parallelize and accelerate the training process. However, modern methods for scalable reinforcement learning (RL) often tradeoff between the throughput of samples that an RL agent can learn from (sample throughput) and the quality of learning from each sample (sample efficiency). In these scalable RL architectures, as one increases sample throughput (i.e. increasing parallelization in IMPALA (Espeholt et al., 2018)), sample efficiency drops significantly. To address this, we propose a new distributed reinforcement learning algorithm, IMPACT. IMPACT extends PPO with three changes: a target network for stabilizing the surrogate objective, a circular buffer, and truncated importance sampling. In discrete action-space environments, we show that IMPACT attains higher reward and, simultaneously, achieves up to 30% decrease in training wall-time than that of IMPALA. For continuous control environments, IMPACT trains faster than existing scalable agents while preserving the sample efficiency of synchronous PPO.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper studies a novel way for distributed RL training which combines the data reuse of PPO with the asynchronous updates of IMPALA. The main contribution is the observation that using a target network is necessary for achieving stable learning. I think this is an important result which seems to be validated by another ICLR submission (https://openreview.net/forum?id=SylOlp4FvH). The experimental section could definitely be improved--I was hoping to see more results on Atari or DMLab. 
Two comments: The V-trace equations (page 3 and 5) don*t mention any clipping of the importance weights c and rho--can you clarify if this is a typo or if you don*t use clipping? It would be great to see experiments showing how learning curves scale with the number of workers. ----------------------------------------------------------------------------------- Thanks for clarifying and for the extra experiments. I*m keeping my score as I still think it*s appropriate.", "role": "assistant" } ]
[ "The paper studies a novel way for distributed RL training which combines the data reuse of PPO with the asynchronous updates of IMPALA.", "The main contribution is the observation that using a target network is necessary for achieving stable learning.", "I think this is an important result which seems to be validated by another ICLR submission (https://openreview.net/forum?id=SylOlp4FvH).", "The experimental section could definitely be improved--I was hoping to see more results on Atari or DMLab.", "Two comments: The V-trace equations (page 3 and 5) don*t mention any clipping of the importance weights c and rho--can you clarify if this is a typo or if you don*t use clipping?", "It would be great to see experiments showing how learning curves scale with the number of workers.", "----------------------------------------------------------------------------------- Thanks for clarifying and for the extra experiments.", "I*m keeping my score as I still think it*s appropriate." ]
{ "criticism": 1, "example": 2, "importance_and_relevance": 2, "materials_and_methods": 4, "praise": 2, "presentation_and_reporting": 2, "results_and_discussion": 3, "suggestion_and_solution": 3, "total": 8 }
0.125
0.25
0.25
0.5
0.25
0.25
0.375
0.375
8
1
1
1
1
1
1
1
1
2.375
1.980439
0.394561
Black-box Off-policy Estimation for Infinite-Horizon Reinforcement Learning
Off-policy estimation for long-horizon problems is important in many real-life applications such as healthcare and robotics, where high-fidelity simulators may not be available and on-policy evaluation is expensive or impossible. Recently, Liu et al. (2018) proposed an approach that avoids the curse of horizon suffered by typical importance-sampling-based methods. While showing promising results, this approach is limited in practice as it requires data being collected by a known behavior policy. In this work, we propose a novel approach that eliminates such limitations. In particular, we formulate the problem as solving for the fixed point of a *backward flow* operator and show that the fixed point solution gives the desired importance ratios of stationary distributions between the target and behavior policies. We analyze its asymptotic consistency and finite-sample generalization. Experiments on benchmarks verify the effectiveness of our proposed approach.
Summary : This paper proposes an approach for long-horizon off-policy evaluation for the case where the behaviour policy may not be known. The main idea of the paper is to propose an estimator for off-policy evaluation that computes the ratio of the stationary state distributions (similar to the recent previous work of Liu et al.) but when the behaviour policy may be unknown. This is an important contribution since in most practical off-policy settings, the behaviour policy used to collect the data may not necessarily be known. Comments and Questions : - This is an important and novel contribution where the authors propose an approach for OPE that comes down to solving a fixed point operator. This is similar to solving for fixed point Bellman operators, but here the problem can be formulated as a backward recursive problem. - The operator defined in equation 3 is in terms of the backward view of time - this allows the operator to capture the visitation from the previous s,a to the current s. This is the backward flow operator with which a fixed point equation can be described. Although the authors note that similar operators have appeared in the literature before, their main contribution is in using such operators for the OPE problem, which seems novel to me and is an interesting approach with potential benefits as demonstrated in this paper. - The core idea comes from equation 9, which tries to minimize the discrepancy between the empirical distribution and the stationary distribution. This can be formulated as an optimization problem, and the paper uses black-box estimators, as described in section 4.2, for solving this problem. - The next interesting part of the paper comes from solving the optimization problem in equation 9 with Maximum Mean Discrepancy (MMD) - this is a popular approach that has recently become well-known, and the authors make use of it to minimize the differences between the empirical and the stationary state distribution. - Section 4.2 appears a bit confusing to me, with some details missing - it would perhaps be useful if the authors could include more details for 4.2, especially with some explanation of how they arrived at the expression in equation 10. This would also make the paper more self-contained for readers in the RL community who are perhaps not so well-read in the MMD literature. Appendix C contains the detailed derivation, but more intuitive explanations might be useful for the context of the paper. - The proposed black-box estimator seems quite useful as demonstrated in figures 2 and 3. Although the authors evaluate their approach on a few simple domains - it would have been useful if there were more extensive experiments performed for OPE problems. This would be useful since, from fig 2, it appears that the proposed method only outperforms in 3 out of 4 evaluated problems. - For the experiments, it would also be useful to demonstrate the significance of not knowing the behaviour policy and the usefulness of handling that case. The paper is motivated in terms of unknown behaviour policies that generated the data - so a few experiments that explicitly show the benefit of this would perhaps strengthen the paper more. - I am curious to know more about the bias-variance trade-off of the proposed OPE estimator as well. I.e., does the proposed method introduce any bias, or does it have significance in terms of lower variance for the long-horizon problem? Experimentally, would it be possible to demonstrate whether the approach has lower variance compared to existing baselines?
Score : - Overall, I think the paper has useful contributions. It is a well-written paper, but some additional details in section 4.2 might be useful, especially on the approach with MMD. Experimentally, I think there are some experiments missing, and doing those could significantly strengthen the paper as well. The proposed method seems simple and elegant, and I would recommend a weak acceptance of the paper.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nBlack-box Off-policy Estimation for Infinite-Horizon Reinforcement Learning\nOff-policy estimation for long-horizon problems is important in many real-life applications such as healthcare and robotics, where high-fidelity simulators may not be available and on-policy evaluation is expensive or impossible. Recently, citet{liu18breaking} proposed an approach that avoids the curse of horizon suffered by typical importance-sampling-based methods. While showing promising results, this approach is limited in practice as it requires data being collected by a known behavior policy. In this work, we propose a novel approach that eliminates such limitations. In particular, we formulate the problem as solving for the fixed point of a *backward flow* operator and show that the fixed point solution gives the desired importance ratios of stationary distributions between the target and behavior policies. We analyze its asymptotic consistency and finite-sample generalization. Experiments on benchmarks verify the effectiveness of our proposed approach.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nSummary : This paper proposes an approach for long horizon off-policy evaluation, for the case where the behaviour policy may not be known. The main idea of the paper is to propose an estimator for off-policy evaluation, that computes the ratio of the stationary state distributions (similar to recent previous work Liu et al.,) but when the behaviour policy may be unknown. This is an important contribution since in most practical off-policy settings, the behaviour policy used to collect the data may not necessarily be known. Comments and Questions : - This is an important and novel contribution where the authors propose an approach for OPE that comes down to solving a fixed point operator. This is simlar to solving for fixed point Bellman operators, but where their problem can be formulated as a backward recursive problem. 
- The operator defined as in equation 3 is in terms of the backward view of time - this allows the operator to capture the visitation from the previous s,a to the current s. This is the backward flow operator with which a fixed point equation can be described. Although the authors note that similar operators have appeared in the literature before - their main contribution is in terms of using such operators for the OPE problem, which seems novel to me and is an interesting approach with potential benefits as demonstrated in this paper. - The core idea comes from equation 9 which tries to minimize the discrepancy between the empirical distribution and the stationary distribution. This can be formulated as an optimization problem, and the paper uses blackbox estimators, as described in section 4.2 for solving this problem. - The next interesting part of the paper comes from solving the optimization problem in equation 9 with Maximum Mean Discrepancy (MMD) - this is a popular approach that has recently become well-known, and the authors make use of it minimize the differences between the empirical and the stationary state distribution. - Section 4.2 appears a bit confusiing to me with some details missing - it would perhaps be useful if the authors could include more details for 4.2, especially with some explanations of how they arrived at the expression in equation 10. This would also make the paper more self-contained, for readers in the RL community perhaps not so well-read with literature on MMD. Appendix C contains the detailed derivation, but more intuitive explanations might be useful for the context of the paper. - The proposed black box estimator seems quite useful as demonstrated in figures 2 and 3. Although the authors evaluate their approach of few simple domains - it would have been useful if there were more extensive experiments performed for OPE problems. This would be useful since from fig 2, it appears that the proposed method only outperforms in 3 out of 4 evaluated problems. - For experiments, it would also be useful to demonstrate the significance of not knowing the behaviour policy and what are the usefulness of it. The paper is motived in terms of unknown behaviour policies that generated the data - so few experiments that explicitly shows the benefit of it would perhaps strengthen the paper more. - I am curious to know more about the bias-variance trade-off of the proposed OPE estimator as well. Ie, does the proposed method introduce any bias, or has significance in terms of lower variance for the long horizon problem? Experimentally, would it be possible to demosntrate whether the approach has lower variance compared to existing baselines? Score : - Overall, I think the paper has useful contributions. It is a well written paper, but some additonal details in section 4.2 might be useful, especially on the appriach with MMD. Experimentally, I think there are some experiments missing and doing those can significantly strengthen the paper as well. The proposed method seems simple and elegant, and I would recommend for a weak acceptance of the paper.", "role": "assistant" } ]
[ "Summary : This paper proposes an approach for long horizon off-policy evaluation, for the case where the behaviour policy may not be known.", "The main idea of the paper is to propose an estimator for off-policy evaluation, that computes the ratio of the stationary state distributions (similar to recent previous work Liu et al.,) but when the behaviour policy may be unknown.", "This is an important contribution since in most practical off-policy settings, the behaviour policy used to collect the data may not necessarily be known.", "Comments and Questions : - This is an important and novel contribution where the authors propose an approach for OPE that comes down to solving a fixed point operator.", "This is simlar to solving for fixed point Bellman operators, but where their problem can be formulated as a backward recursive problem.", "- The operator defined as in equation 3 is in terms of the backward view of time - this allows the operator to capture the visitation from the previous s,a to the current s. This is the backward flow operator with which a fixed point equation can be described.", "Although the authors note that similar operators have appeared in the literature before - their main contribution is in terms of using such operators for the OPE problem, which seems novel to me and is an interesting approach with potential benefits as demonstrated in this paper.", "- The core idea comes from equation 9 which tries to minimize the discrepancy between the empirical distribution and the stationary distribution.", "This can be formulated as an optimization problem, and the paper uses blackbox estimators, as described in section 4.2 for solving this problem.", "- The next interesting part of the paper comes from solving the optimization problem in equation 9 with Maximum Mean Discrepancy (MMD) - this is a popular approach that has recently become well-known, and the authors make use of it minimize the differences between the empirical and the stationary state distribution.", "- Section 4.2 appears a bit confusiing to me with some details missing - it would perhaps be useful if the authors could include more details for 4.2, especially with some explanations of how they arrived at the expression in equation 10.", "This would also make the paper more self-contained, for readers in the RL community perhaps not so well-read with literature on MMD.", "Appendix C contains the detailed derivation, but more intuitive explanations might be useful for the context of the paper.", "- The proposed black box estimator seems quite useful as demonstrated in figures 2 and 3.", "Although the authors evaluate their approach of few simple domains - it would have been useful if there were more extensive experiments performed for OPE problems.", "This would be useful since from fig 2, it appears that the proposed method only outperforms in 3 out of 4 evaluated problems.", "- For experiments, it would also be useful to demonstrate the significance of not knowing the behaviour policy and what are the usefulness of it.", "The paper is motived in terms of unknown behaviour policies that generated the data - so few experiments that explicitly shows the benefit of it would perhaps strengthen the paper more.", "- I am curious to know more about the bias-variance trade-off of the proposed OPE estimator as well.", "Ie, does the proposed method introduce any bias, or has significance in terms of lower variance for the long horizon problem?", "Experimentally, would it be possible to demosntrate whether the approach 
has lower variance compared to existing baselines?", "Score : - Overall, I think the paper has useful contributions.", "It is a well written paper, but some additonal details in section 4.2 might be useful, especially on the appriach with MMD.", "Experimentally, I think there are some experiments missing and doing those can significantly strengthen the paper as well.", "The proposed method seems simple and elegant, and I would recommend for a weak acceptance of the paper." ]
{ "criticism": 1, "example": 4, "importance_and_relevance": 8, "materials_and_methods": 17, "praise": 8, "presentation_and_reporting": 6, "results_and_discussion": 7, "suggestion_and_solution": 8, "total": 25 }
0.04
0.16
0.32
0.68
0.32
0.24
0.28
0.32
25
1
1
1
1
1
1
1
1
2.36
0.08733
2.27267
Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers
We present a novel network pruning algorithm called Dynamic Sparse Training that can jointly find the optimal network parameters and sparse network structure in a unified optimization process with trainable pruning thresholds. These thresholds can have fine-grained layer-wise adjustments dynamically via backpropagation. We demonstrate that our dynamic sparse training algorithm can easily train very sparse neural network models with little performance loss using the same training epochs as dense models. Dynamic Sparse Training achieves prior art performance compared with other sparse training algorithms on various network architectures. Additionally, we have several surprising observations that provide strong evidence to the effectiveness and efficiency of our algorithm. These observations reveal the underlying problems of traditional three-stage pruning algorithms and present the potential guidance provided by our algorithm to the design of more compact network architectures.
## Update after the rebuttal I appreciate the author's clarification in the rebuttal and the additional result on ImageNet, which addressed some of my concerns. # Summary This paper proposes a trainable mask layer for compressing neural networks end-to-end. The main idea is to apply a differentiable mask to individual weights such that the mask itself is also trained through backpropagation. They also propose to add a regularization term that encourages weights to be masked out as much as possible. The results on MNIST and CIFAR show that their method can achieve the highest (weight) compression rate and the lowest accuracy reduction compared to baselines. # Originality - The idea of applying a trainable mask to weights and regularizing toward masking out is quite interesting and new to my knowledge. # Quality - The performance seems to be good, though it would be more convincing if the paper showed results on larger datasets like ImageNet. - The analysis is interesting, but I am not fully convinced by the *strong evidence to the efficiency and effectiveness of our algorithm*. For example, the final layer's remaining ratio is constantly 1 in Figure 3, while it starts from nearly 0 in Figure 4. The paper also argues that the final layer was not that important in Figure 4 because the lower layers have not learned useful features. This seems not only contradictory to the result of Figure 3 but also inconsistent with the accuracy quickly increasing up to near 90% while the remaining ratio is nearly 0 in Figure 4. - If the motivation of sparse training is to reduce memory consumption AND computation, showing some results on the reduction of the computation cost after sparse training would be important to complete the story. # Clarity - The description of the main idea is not clear. - What are *structure gradient* and *performance gradient*? They are not mathematically defined in the paper. - I do not understand how the proposed method can *recover* from a pruned connection, although it seems to be indeed happening in the experiments. The paper claims that the use of the long-tailed higher-order estimator H(x) makes it possible to recover. However, H(x) still seems to have flat regions where the derivative is 0. Is H(x) in Equation 3 and Figure 2d showing the *derivative* or the step function itself? In any case, I do not see how the gradient flows once a weight is masked out. # Significance - This paper proposes an interesting idea (trainable mask), though I did not fully get how the mask is defined/trained and has the potential to recover after pruning. The analysis of the compression rate throughout training is interesting but does not seem to be fully convincing. It would be stronger if the paper 1) included more results on bigger datasets like ImageNet, 2) described the main idea more clearly, and 3) provided more convincing evidence why the proposed method is effective.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers\nWe present a novel network pruning algorithm called Dynamic Sparse Training that can jointly ?nd the optimal network parameters and sparse network structure in a uni?ed optimization process with trainable pruning thresholds. These thresholds can have ?ne-grained layer-wise adjustments dynamically via backpropagation. We demonstrate that our dynamic sparse training algorithm can easily train very sparse neural network models with little performance loss using the same training epochs as dense models. Dynamic Sparse Training achieves prior art performance compared with other sparse training algorithms on various network architectures. Additionally, we have several surprising observations that provide strong evidence to the effectiveness and ef?ciency of our algorithm. These observations reveal the underlying problems of traditional three-stage pruning algorithms and present the potential guidance provided by our algorithm to the design of more compact network architectures.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n## Update after the rebuttal I appreciate the author*s clarification in the rebuttal and the additional result on ImageNet, which addressed some of my concerns. # Summary This paper proposes a trainable mask layer in neural networks for compressing neural networks end-to-end. The main idea is to apply a differentiable mask to individual weights such that the mask itself is also trained through backpropagation. They also propose to add a regularization term that encourages weights are masked out as much as possible. The result on MNIST and CIFAR show that their method can achieve the highest (weight) compression rate and the lowest accuracy reduction compared to baselines. # Originality - The idea of applying trainable mask to weights and regularizing toward masking out is quite interesting and new to my knowledge. 
# Quality - The performance seems to be good, though it would be more convincing if the paper showed results on larger datasets like ImageNet. - The analysis is interesting, but I am not fully convinced by the *strong evidence to the efficiency and effectiveness of our algorithm*. For example, the final layer*s remaining ratio is constantly 1 in Figure 3, while it starts from nearly 0 in Figure 4. The paper also argues that the final layer was not that important in Figure 4 because the lower layers have not learned useful features. This seems not only contradictory to the result of Figure 3 but also inconsistent of the accuracy being quickly increasing up to near 90% while the remaining ratio is nearly 0 in Figure 4. - If the motivation of the sparse training is to reduce memory consumption AND computation, showing some results on the reduction of the computation cost after sparse training would important to complete the story. # Clarity - The description of the main idea is not clear. - What are *structure gradient* and *performance gradient*? They are not mathematically defined in the paper. - I do not understand how the proposed method can *recover* from pruned connection, although it seems to be indeed happening in the experiment. The paper claims that the use of long-tailed higher-order estimator H(x) makes it possible to recover. However, H(x) still seems to have flat lines where the derivative is 0. Is H(x) in Equation 3 and Figure 2d are showing *derivative* or step function itself? In any cases, I do not see how the gradient flows once a weight is masked out. # Significance - This paper proposes an interesting idea (trainable mask), though I did not fully get how the mask is defined/trained and has a potential to recover after pruning. The analysis of the compression rate throughout training is interesting but does not seem to be fully convincing. It would be stronger if the paper 1) included more results on bigger datasets like ImageNet, 2) described the main idea more clearly, and 3) provided more convincing evidence why the proposed method is effective.", "role": "assistant" } ]
[ "## Update after the rebuttal I appreciate the author*s clarification in the rebuttal and the additional result on ImageNet, which addressed some of my concerns.", "# Summary This paper proposes a trainable mask layer in neural networks for compressing neural networks end-to-end.", "The main idea is to apply a differentiable mask to individual weights such that the mask itself is also trained through backpropagation.", "They also propose to add a regularization term that encourages weights are masked out as much as possible.", "The result on MNIST and CIFAR show that their method can achieve the highest (weight) compression rate and the lowest accuracy reduction compared to baselines.", "# Originality - The idea of applying trainable mask to weights and regularizing toward masking out is quite interesting and new to my knowledge.", "# Quality - The performance seems to be good, though it would be more convincing if the paper showed results on larger datasets like ImageNet.", "- The analysis is interesting, but I am not fully convinced by the *strong evidence to the efficiency and effectiveness of our algorithm*.", "For example, the final layer*s remaining ratio is constantly 1 in Figure 3, while it starts from nearly 0 in Figure 4.", "The paper also argues that the final layer was not that important in Figure 4 because the lower layers have not learned useful features.", "This seems not only contradictory to the result of Figure 3 but also inconsistent of the accuracy being quickly increasing up to near 90% while the remaining ratio is nearly 0 in Figure 4.", "- If the motivation of the sparse training is to reduce memory consumption AND computation, showing some results on the reduction of the computation cost after sparse training would important to complete the story.", "# Clarity - The description of the main idea is not clear.", "- What are *structure gradient* and *performance gradient*?", "They are not mathematically defined in the paper.", "- I do not understand how the proposed method can *recover* from pruned connection, although it seems to be indeed happening in the experiment.", "The paper claims that the use of long-tailed higher-order estimator H(x) makes it possible to recover.", "However, H(x) still seems to have flat lines where the derivative is 0.", "Is H(x) in Equation 3 and Figure 2d are showing *derivative* or step function itself?", "In any cases, I do not see how the gradient flows once a weight is masked out.", "# Significance - This paper proposes an interesting idea (trainable mask), though I did not fully get how the mask is defined/trained and has a potential to recover after pruning.", "The analysis of the compression rate throughout training is interesting but does not seem to be fully convincing.", "It would be stronger if the paper 1) included more results on bigger datasets like ImageNet, 2) described the main idea more clearly, and 3) provided more convincing evidence why the proposed method is effective." ]
{ "criticism": 10, "example": 3, "importance_and_relevance": 4, "materials_and_methods": 12, "praise": 6, "presentation_and_reporting": 6, "results_and_discussion": 10, "suggestion_and_solution": 5, "total": 23 }
0.434783
0.130435
0.173913
0.521739
0.26087
0.26087
0.434783
0.217391
23
1
1
1
1
1
1
1
1
2.434783
0.85654
1.578243
The Gambler's Problem and Beyond
We analyze the Gambler's problem, a simple reinforcement learning problem where the gambler has the chance to double or lose their bets until the target is reached. This is an early example introduced in the reinforcement learning textbook by Sutton & Barto (2018), where they mention an interesting pattern of the optimal value function with high-frequency components and repeating non-smooth points but without further investigation. We provide the exact formula for the optimal value function for both the discrete and the continuous case. Though simple as it might seem, the value function is pathological: fractal, self-similar, non-smooth on any interval, zero derivative almost everywhere, and not written as elementary functions. Sharing these properties with the Cantor function, it holds a complexity that has been uncharted thus far. With the analysis, our work could lead to insights on improving value function approximation, Q-learning, and gradient-based algorithms in real applications and implementations.
The paper gives a very detailed analysis of the value function of the gambler's problem. In this problem, the agent decides how much to bet in each step, doubling or losing the chosen amount. There is a chance of loss greater than 0.5, and as a result the agent would like to minimize the number of steps. While one optimal policy is extremely simple (just bet the maximum), this paper shows that the value function is deceptively complicated due to its fractal structure. In particular, the value function is continuous only when there is no discounting and does not have well-behaved derivatives. The exposition is clear and well structured. The author argues that such behaviors have consequences for value-function-based RL such as value iteration or Q-learning, since one of the simplest RL problems has a weird value function. The paper presents both an interesting issue and a clear analysis, and should be accepted. The major issue is that any implications for RL seem unverified. For example, I would be very surprised if a tabular method did not solve the problem given a reasonable update order. The author also did not provide any analysis of how a parameterized method will actually behave, given that this is the likely implication. So it is a bit unclear if the pathology is due to taking the continuous limit or if it is present in discrete cases as well. The final part relying on the axiom of choice is too mathy for the problem motivation (since it applies to the rather boring p=0.5 case). The paper would be more valuable to the ML community if that part were replaced with some analysis/evidence of how the issue shows up when actually running one of the solution methods.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nThe Gambler*s Problem and Beyond\nWe analyze the Gambler*s problem, a simple reinforcement learning problem where the gambler has the chance to double or lose their bets until the target is reached. This is an early example introduced in the reinforcement learning textbook by cite{sutton2018reinforcement}, where they mention an interesting pattern of the optimal value function with high-frequency components and repeating non-smooth points but without further investigation. We provide the exact formula for the optimal value function for both the discrete and the continuous case. Though simple as it might seem, the value function is pathological: fractal, self-similar, non-smooth on any interval, zero derivative almost everywhere, and not written as elementary functions. Sharing these properties with the Cantor function, it holds a complexity that has been uncharted thus far. With the analysis, our work could lead insights on improving value function approximation, Q-learning, and gradient-based algorithms in real applications and implementations.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper gave a very detailed analysis of the value function of the gamblers problem. In this problem, the agent decides how much to bet in each step, doubling or losing the chosen amount. There is a chance of loss of greater than 0.5 and as a result the agent would like to minimize the number of steps. While one optimal policy is extremely simple (just bet the maximum), this paper shows that the value function is deceptively complicated due to its fractal structure. In particular, the value function is continuous only when there is no discounting and does not have well behaved derivatives. The exposition is clear and well structured. The author argues such behaviors have consequences for valued function based RL such as value iterations or Q-learning, since one of the simplest RL problem has a weird value function. The paper presented both an interesting issue and a clear analysis, and should be accepted. 
The major issue is that any implications for RL seems unverified. For example, I would be very surprised if a tabular method will not solve the problem given a reasonable update order. The author also did not provide any analysis on how a parameterized method will actually behave, given that is the likely implication. So it is a bit unclear if the pathology is due to taking the continuous limit or if it is present in discrete cases as well. The final part relying on the axiom of choice is too mathy for the problem motivation (since it applies to the rather boring p=0.5 case). The paper would be more valuable to the ML community if that part is replaced with some analysis/evidence of how the issue shows up when actually running one of the solution methods.", "role": "assistant" } ]
[ "The paper gave a very detailed analysis of the value function of the gamblers problem.", "In this problem, the agent decides how much to bet in each step, doubling or losing the chosen amount.", "There is a chance of loss of greater than 0.5 and as a result the agent would like to minimize the number of steps.", "While one optimal policy is extremely simple (just bet the maximum), this paper shows that the value function is deceptively complicated due to its fractal structure.", "In particular, the value function is continuous only when there is no discounting and does not have well behaved derivatives.", "The exposition is clear and well structured.", "The author argues such behaviors have consequences for valued function based RL such as value iterations or Q-learning, since one of the simplest RL problem has a weird value function.", "The paper presented both an interesting issue and a clear analysis, and should be accepted.", "The major issue is that any implications for RL seems unverified.", "For example, I would be very surprised if a tabular method will not solve the problem given a reasonable update order.", "The author also did not provide any analysis on how a parameterized method will actually behave, given that is the likely implication.", "So it is a bit unclear if the pathology is due to taking the continuous limit or if it is present in discrete cases as well.", "The final part relying on the axiom of choice is too mathy for the problem motivation (since it applies to the rather boring p=0.5 case).", "The paper would be more valuable to the ML community if that part is replaced with some analysis/evidence of how the issue shows up when actually running one of the solution methods." ]
{ "criticism": 4, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 9, "praise": 3, "presentation_and_reporting": 2, "results_and_discussion": 7, "suggestion_and_solution": 3, "total": 14 }
0.285714
0
0.142857
0.642857
0.214286
0.142857
0.5
0.214286
14
1
0
1
1
1
1
1
1
2.142857
2.127075
0.015782
Multilingual Alignment of Contextual Word Representations
We propose procedures for evaluating and strengthening contextual embedding alignment and show that they are useful in understanding and improving multilingual BERT. In particular, after our proposed alignment procedure, BERT exhibits significantly improved zero-shot performance on XNLI compared to the base model, remarkably matching fully-supervised models for Bulgarian and Greek. Further, using non-contextual and contextual versions of word retrieval, we show that BERT outperforms fastText while being able to distinguish between multiple uses of a word, suggesting that pre-training subsumes word vectors for learning cross-lingual signals. Finally, we use the contextual word retrieval task to gain a better understanding of the strengths and weaknesses of multilingual pre-training.
This paper presents a new method to further align multilingual BERT by learning a transformation to minimize distances in a parallel corpus. I think that this is overall a solid work. Although simple, the proposed method is well-motivated, and the reported results are generally convincing. However, I think that the paper lacks an appropriate comparison with similar methods in the literature, and the separation between the real evaluation in a downstream task (XNLI) and the analysis on a rather artificial contextual word retrieval task (which favors the proposed system) is not clear enough. More concretely, these are the aspects that I think the paper could (and should) improve: - You are not comparing to any baseline using parallel data with contextual embeddings. You should at least compare your method to Schuster et al. (2019) and/or Aldarmaki & Diab (2019), who further align multilingual BERT in a supervised manner as you do, as well as Lample and Conneau (2019), who propose an alternative method to leverage parallel data during the training of multilingual BERT. In fact, while you do improve over multilingual BERT, your results in XNLI are far from the current state-of-the-art, and this is not even mentioned in the paper. - The *contextual word retrieval* task you propose is rather artificial and lacks any practical interest. It is not surprising that your proposed method is strong at it, as this is essentially how you train it (you are even using different subsets of the exact same corpus for train/test). The task is still interesting for analysis -which is in fact one of the main strengths of the paper- but it should be presented as such. Please consider restructuring your paper and moving all these results to the analysis section, where they really belong. - I do not see the point of the *non-contextual word retrieval* task, when you are in fact using the context (the fact that there is only one occurrence per word type doesn*t change that). This task is even more artificial than the *contextual word retrieval* one. Again, it can have some interest as part of the analysis (showing that the gap between aligned fasttext and aligned BERT goes down from table 1 to table 2), but presenting it as a separate task as if it had some value on its own looks wrong. From my point of view, the real *non-contextual word retrieval* task would be bilingual lexicon induction (i.e. dictionary induction), which is more interesting as a task (as the induced dictionaries can have practical applications) and has been widely studied in the literature. - I really dislike the statement that contextual methods are *unequivocally better than non-contextual methods for multilingual tasks* on the basis of the non-contextual word retrieval results. If you want to make such a strong statement, you should at least show that your method is better than non-contextual ones in a task where the latter are known to be strong (i.e. bilingual lexicon induction, see above). However, your comparison is limited to a new task you introduce that clearly favors your own method, and in fact requires using the non-contextual methods in a non-standard way (concatenating the word embeddings with the avg/max/min sentence embeddings). Please either remove this statement or run a fair comparison in bilingual lexicon induction (and preferably do both). - BERT works at the subword level but, from what I understand, your parallel corpus (both for train/test) is aligned at the word level. 
It is not clear at all how this mismatch in the tokenization is handled. Minor details that did not influence my score: - Calling the *translate-train* system *fully-supervised* is misleading. Please simply call it *translate-train*. - I assume you want to refer to Figure 3 instead of Figure 2 in Section 5.2
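The alignment procedure itself is only summarized in this record; as a hedged illustration of the general idea (fitting a map that minimizes distances between embeddings of word pairs aligned through a parallel corpus), here is an orthogonal-Procrustes baseline in numpy. The paper's actual method may use a different loss, optimizer, or a non-linear transformation.

```python
import numpy as np

def procrustes_alignment(X, Y):
    # Solve min_W ||X W - Y||_F with W constrained to be orthogonal.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))                 # stand-in "source" contextual vectors
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))  # a random rotation as the "target" space
Y = X @ Q
W = procrustes_alignment(X, Y)
print(np.linalg.norm(X @ W - Y))                # ~0 on this toy data
```

On real data, X and Y would be contextual vectors extracted for aligned word occurrences rather than the synthetic rotation used here.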
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMultilingual Alignment of Contextual Word Representations\nWe propose procedures for evaluating and strengthening contextual embedding alignment and show that they are useful in understanding and improving multilingual BERT. In particular, after our proposed alignment procedure, BERT exhibits significantly improved zero-shot performance on XNLI compared to the base model, remarkably matching fully-supervised models for Bulgarian and Greek. Further, using non-contextual and contextual versions of word retrieval, we show that BERT outperforms fastText while being able to distinguish between multiple uses of a word, suggesting that pre-training subsumes word vectors for learning cross-lingual signals. Finally, we use the contextual word retrieval task to gain a better understanding of the strengths and weaknesses of multilingual pre-training.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents a new method to further align multilingual BERT by learning a transformation to minimize distances in a parallel corpus. I think that this is overall a solid work. Although simple, the proposed method is well-motivated, and the reported results are generally convincing. However, I think that the paper lacks an appropriate comparison with similar methods in the literature, and the separation between the real evaluation in a downstream task (XNLI) and the analysis on a rather artificial contextual word retrieval task (which favors the proposed system) is not clear enough. More concretely, these are the aspects that I think the paper could (and should) improve: - You are not comparing to any baseline using parallel data with contextual embeddings. You should at least compare your method to Schuster et al. (2019) and/or Aldarmaki & Diab (2019), who further align multilingual BERT in a supervised manner as you do, as well as Lample and Conneau (2019), who propose an alternative method to leverage parallel data during the training of multilingual BERT. 
In fact, while you do improve over multilingual BERT, your results in XNLI are far from the current state-of-the-art, and this is not even mentioned in the paper. - The *contextual word retrieval* task you propose is rather artificial and lacks any practical interest. It is not surprising that your proposed method is strong at it, as this is essentially how you train it (you are even using different subsets of the exact same corpus for train/test). The task is still interesting for analysis -which is in fact one of the main strengths of the paper- but it should be presented as such. Please consider restructuring your paper and moving all these results to the analysis section, where they really belong. - I do not see the point of the *non-contextual word retrieval* task, when you are in fact using the context (the fact that there is only one occurrence per word type doesn*t change that). This task is even more artificial than the *contextual word retrieval* one. Again, it can have some interest as part of the analysis (showing that the gap between aligned fasttext and aligned BERT goes down from table 1 to table 2), but presenting it as a separate task as if it had some value on its own looks wrong. From my point of view, the real *non-contextual word retrieval* task would be bilingual lexicon induction (i.e. dictionary induction), which is more interesting as a task (as the induced dictionaries can have practical applications) and has been widely studied in the literature. - I really dislike the statement that contextual methods are *unequivocally better than non-contextual methods for multilingual tasks* on the basis of the non-contextual word retrieval results. If you want to make such a strong statement, you should at least show that your method is better than non-contextual ones in a task where the latter are known to be strong (i.e. bilingual lexicon induction, see above). However, your comparison is limited to a new task you introduce that clearly favors your own method, and in fact requires using the non-contextual methods in a non-standard way (concatenating the word embeddings with the avg/max/min sentence embeddings). Please either remove this statement or run a fair comparison in bilingual lexicon induction (and preferably do both). - BERT works at the subword level but, from what I understand, your parallel corpus (both for train/test) is aligned at the word level. It is not clear at all how this mismatch in the tokenization is handled. Minor details that did not influence my score: - Calling *fully-supervised* to the *translate-train* system is misleading. Please simply call it *translate-train*. - I assume you want to refer to Figure 3 instead of Figure 2 in Section 5.2", "role": "assistant" } ]
[ "This paper presents a new method to further align multilingual BERT by learning a transformation to minimize distances in a parallel corpus.", "I think that this is overall a solid work.", "Although simple, the proposed method is well-motivated, and the reported results are generally convincing.", "However, I think that the paper lacks an appropriate comparison with similar methods in the literature, and the separation between the real evaluation in a downstream task (XNLI) and the analysis on a rather artificial contextual word retrieval task (which favors the proposed system) is not clear enough.", "More concretely, these are the aspects that I think the paper could (and should) improve: - You are not comparing to any baseline using parallel data with contextual embeddings.", "You should at least compare your method to Schuster et al.", "(2019) and/or Aldarmaki & Diab (2019), who further align multilingual BERT in a supervised manner as you do, as well as Lample and Conneau (2019), who propose an alternative method to leverage parallel data during the training of multilingual BERT.", "In fact, while you do improve over multilingual BERT, your results in XNLI are far from the current state-of-the-art, and this is not even mentioned in the paper.", "- The *contextual word retrieval* task you propose is rather artificial and lacks any practical interest.", "It is not surprising that your proposed method is strong at it, as this is essentially how you train it (you are even using different subsets of the exact same corpus for train/test).", "The task is still interesting for analysis -which is in fact one of the main strengths of the paper- but it should be presented as such.", "Please consider restructuring your paper and moving all these results to the analysis section, where they really belong.", "- I do not see the point of the *non-contextual word retrieval* task, when you are in fact using the context (the fact that there is only one occurrence per word type doesn*t change that).", "This task is even more artificial than the *contextual word retrieval* one.", "Again, it can have some interest as part of the analysis (showing that the gap between aligned fasttext and aligned BERT goes down from table 1 to table 2), but presenting it as a separate task as if it had some value on its own looks wrong.", "From my point of view, the real *non-contextual word retrieval* task would be bilingual lexicon induction (i.e.", "dictionary induction), which is more interesting as a task (as the induced dictionaries can have practical applications) and has been widely studied in the literature.", "- I really dislike the statement that contextual methods are *unequivocally better than non-contextual methods for multilingual tasks* on the basis of the non-contextual word retrieval results.", "If you want to make such a strong statement, you should at least show that your method is better than non-contextual ones in a task where the latter are known to be strong (i.e.", "bilingual lexicon induction, see above).", "However, your comparison is limited to a new task you introduce that clearly favors your own method, and in fact requires using the non-contextual methods in a non-standard way (concatenating the word embeddings with the avg/max/min sentence embeddings).", "Please either remove this statement or run a fair comparison in bilingual lexicon induction (and preferably do both).", "- BERT works at the subword level but, from what I understand, your parallel corpus (both for train/test) is aligned 
at the word level.", "It is not clear at all how this mismatch in the tokenization is handled.", "Minor details that did not influence my score: - Calling *fully-supervised* to the *translate-train* system is misleading.", "Please simply call it *translate-train*.", "- I assume you want to refer to Figure 3 instead of Figure 2 in Section 5.2" ]
{ "criticism": 9, "example": 1, "importance_and_relevance": 5, "materials_and_methods": 17, "praise": 4, "presentation_and_reporting": 13, "results_and_discussion": 4, "suggestion_and_solution": 8, "total": 27 }
0.333333
0.037037
0.185185
0.62963
0.148148
0.481481
0.148148
0.296296
27
1
1
1
1
1
1
1
1
2.259259
-0.834097
3.093356
The Curious Case of Neural Text Degeneration
Despite considerable advances in neural language modeling, it remains an open question what the best decoding strategy is for text generation from a language model (e.g. to generate a story). The counter-intuitive empirical observation is that even though the use of likelihood as training objective leads to high quality models for a broad range of language understanding tasks, maximization-based decoding methods such as beam search lead to degeneration — output text that is bland, incoherent, or gets stuck in repetitive loops. To address this we propose Nucleus Sampling, a simple but effective method to draw considerably higher quality text out of neural language models. Our approach avoids text degeneration by truncating the unreliable tail of the probability distribution, sampling from the dynamic nucleus of tokens containing the vast majority of the probability mass. To properly examine current maximization-based and stochastic decoding methods, we compare generations from each of these methods to the distribution of human text along several axes such as likelihood, diversity, and repetition. Our results show that (1) maximization is an inappropriate decoding objective for open-ended text generation, (2) the probability distributions of the best current language models have an unreliable tail which needs to be truncated during generation and (3) Nucleus Sampling is the best decoding strategy for generating long-form text that is both high-quality — as measured by human evaluation — and as diverse as human-written text.
This paper is motivated by an observation that maximization-based decoding approaches such as beam search can lead to incoherent and repetitive sentences when open-ended long-form text generation based on a neural language model such as GPT-2 is performed. To solve the problem, this paper proposes a sampling method called Nucleus Sampling. Similar to Top-k sampling, Nucleus Sampling truncates the probability distribution of the words in the vocabulary. Instead of re-normalizing the probabilities for the top-k words, Nucleus Sampling re-normalizes the original probabilities over the smallest set of words whose cumulative probability exceeds a pre-chosen threshold p. Some quantitative and qualitative results show that the proposed sampling method can generate long-form texts with some nice properties. Pros: The problem addressed in this paper is highly interesting, and the proposed method is simple and intuitive. The paper is well motivated and the method is clearly presented. Extensive quantitative and qualitative experiments are conducted to compare different sampling methods. Cons: 1) Although the raised problem in this paper is interesting, the proposed Nucleus Sampling seems to be a trivial variant of Top-k sampling. With a reasonably large k suitable for different practical problems in question, it is unclear that Nucleus Sampling produces significant advantages over commonly used Top-k sampling. 2) The argued difficulty in choosing k in Top-k sampling is not that different from that of choosing the threshold p in Nucleus Sampling. 3) In section 4.3, the argument that natural language rarely remains in a high-probability zone is questionable. This happens only because our current neural language models are not well-specified for generating long texts and modeling long-range contexts. 4) In section 6.2, the qualitative comparison between Nucleus Sampling and Top-k sampling might be caused by randomness. With a large k, there is no technical barrier that prevents Top-k sampling from generating the sentences produced by Nucleus Sampling. 5) A recent stochastic beam search method based on Gumbel-max-k (Kool, Hoof, and Welling, ICML 2019) should be discussed and compared. In summary, although the studied problem in this paper is highly interesting, the proposed Nucleus Sampling is not technically significant compared to Top-k sampling.
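As usually formulated, Nucleus Sampling keeps the smallest prefix of the probability-sorted vocabulary whose cumulative mass reaches p and re-normalizes within it, while Top-k keeps a fixed number of tokens. A minimal numpy sketch of both truncation rules, independent of any particular language model (the thresholds below are placeholders, not tuned settings from the paper):

```python
import numpy as np

# probs is a 1-D array of next-token probabilities from a language model.
def nucleus_sample(probs, p=0.9, rng=np.random.default_rng()):
    order = np.argsort(probs)[::-1]                  # token ids, most probable first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1      # smallest prefix with mass >= p
    nucleus = order[:cutoff]
    renorm = probs[nucleus] / probs[nucleus].sum()   # re-normalize inside the nucleus
    return rng.choice(nucleus, p=renorm)

def top_k_sample(probs, k=2, rng=np.random.default_rng()):
    top = np.argsort(probs)[::-1][:k]
    renorm = probs[top] / probs[top].sum()
    return rng.choice(top, p=renorm)

vocab_probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
print(nucleus_sample(vocab_probs, p=0.8), top_k_sample(vocab_probs, k=2))
```

The contrast the review debates is visible here: the nucleus size adapts to how peaked the distribution is, whereas k stays fixed regardless of the distribution's shape.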
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nThe Curious Case of Neural Text Degeneration\nDespite considerable advances in neural language modeling, it remains an open question what the best decoding strategy is for text generation from a language model (e.g. to generate a story). The counter-intuitive empirical observation is that even though the use of likelihood as training objective leads to high quality models for a broad range of language understanding tasks, maximization-based decoding methods such as beam search lead to degeneration — output text that is bland, incoherent, or gets stuck in repetitive loops. To address this we propose Nucleus Sampling, a simple but effective method to draw considerably higher quality text out of neural language models. Our approach avoids text degeneration by truncating the unreliable tail of the probability distribution, sampling from the dynamic nucleus of tokens containing the vast majority of the probability mass. To properly examine current maximization-based and stochastic decoding methods, we compare generations from each of these methods to the distribution of human text along several axes such as likelihood, diversity, and repetition. Our results show that (1) maximization is an inappropriate decoding objective for open-ended text generation, (2) the probability distributions of the best current language models have an unreliable tail which needs to be truncated during generation and (3) Nucleus Sampling is the best decoding strategy for generating long-form text that is both high-quality — as measured by human evaluation — and as diverse as human-written text.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper is motivated by an observation that maximization-based decoding approaches such as beam search can lead to incoherent and repetitive sentences when open-ended long-form text generation based on neural language model such as GPT-2 is performed. To solve the problem, this paper proposes a sampling method called Nucleus Sampling. 
Similar to Top-k sampling, Nucleus Sampling truncates the probability distribution of the words in the vocabulary. Instead of re-normalizing the probabilities for the top-k words, Nucleus Sampling re-normalizes the original probabilities for the words with values above a pre-chosen threshold p. Some quantitative and qualitative results show that the proposed sampling method can generate long-form texts with some nice properties. Pros: The problem addressed in this paper is highly interesting, and the proposed method is simple and intuitive. The paper is well motivated and the method is clearly presented. Extensive quantitative and qualitative experiments are conducted to compare different sampling methods. Cons: 1) Although the raised problem in this paper is interesting, the proposed Nucleus Sampling seems to be a trivial variant of Top-k sampling. With a reasonably large k suitable for different practical problems in question, it is unclear that Nucleus Sampling produces significant advantages over commonly used Top-k sampling. 2) The argued difficulty in choosing k in Top-k sampling is not that different from that of choosing the threshold p in Nucleus Sampling. 3) In section 4.3, the argument that natural language rarely remains in a high-probability zone is questionable. This happens only because our current neural language models are not well-specified for generating long texts and modeling long-range contexts. 4) In section 6.2, the qualitative comparison between Nucleus Sampling and Top-k sampling might be caused by randomness. With a large k, there is no technical barrier that prevents Top-k sampling from generating the sentences produced by Nucleus Sampling. 5) A recent stochastic beam search method based on Gumbel-max-k (Kool, Hoof, and Welling, ICML 2019) should be discussed and compared. In summary, although the studied problem in this paper is highly interesting, the proposed Nucleus Sampling is not technically significant compared to Top-k sampling.", "role": "assistant" } ]
[ "This paper is motivated by an observation that maximization-based decoding approaches such as beam search can lead to incoherent and repetitive sentences when open-ended long-form text generation based on neural language model such as GPT-2 is performed.", "To solve the problem, this paper proposes a sampling method called Nucleus Sampling.", "Similar to Top-k sampling, Nucleus Sampling truncates the probability distribution of the words in the vocabulary.", "Instead of re-normalizing the probabilities for the top-k words, Nucleus Sampling re-normalizes the original probabilities for the words with values above a pre-chosen threshold p. Some quantitative and qualitative results show that the proposed sampling method can generate long-form texts with some nice properties.", "Pros: The problem addressed in this paper is highly interesting, and the proposed method is simple and intuitive.", "The paper is well motivated and the method is clearly presented.", "Extensive quantitative and qualitative experiments are conducted to compare different sampling methods.", "Cons: 1) Although the raised problem in this paper is interesting, the proposed Nucleus Sampling seems to be a trivial variant of Top-k sampling.", "With a reasonably large k suitable for different practical problems in question, it is unclear that Nucleus Sampling produces significant advantages over commonly used Top-k sampling.", "2) The argued difficulty in choosing k in Top-k sampling is not that different from that of choosing the threshold p in Nucleus Sampling.", "3) In section 4.3, the argument that natural language rarely remains in a high-probability zone is questionable.", "This happens only because our current neural language models are not well-specified for generating long texts and modeling long-range contexts.", "4) In section 6.2, the qualitative comparison between Nucleus Sampling and Top-k sampling might be caused by randomness.", "With a large k, there is no technical barrier that prevents Top-k sampling from generating the sentences produced by Nucleus Sampling.", "5) A recent stochastic beam search method based on Gumbel-max-k (Kool, Hoof, and Welling, ICML 2019) should be discussed and compared.", "In summary, although the studied problem in this paper is highly interesting, the proposed Nucleus Sampling is not technically significant compared to Top-k sampling." ]
{ "criticism": 6, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 15, "praise": 4, "presentation_and_reporting": 5, "results_and_discussion": 1, "suggestion_and_solution": 1, "total": 16 }
0.375
0
0.1875
0.9375
0.25
0.3125
0.0625
0.0625
16
1
0
1
1
1
1
1
1
2.1875
2.045458
0.142042
Guiding Program Synthesis by Learning to Generate Examples
A key challenge of existing program synthesizers is ensuring that the synthesized program generalizes well. This can be difficult to achieve as the specification provided by the end user is often limited, containing as few as one or two input-output examples. In this paper we address this challenge via an iterative approach that finds ambiguities in the provided specification and learns to resolve these by generating additional input-output examples. The main insight is to reduce the problem of selecting which program generalizes well to the simpler task of deciding which output is correct. As a result, to train our probabilistic models, we can take advantage of the large amounts of data in the form of program outputs, which are often much easier to obtain than the corresponding ground-truth programs.
= Summary A method for a refinement loop for program synthesizers operating on input/output specifications is presented. The core idea is to generate several candidate solutions, execute them on several inputs, and then use a learned component to judge which of the resulting input/output pairs are most likely to be correct. This avoids having to judge the correctness of the generated programs and instead focuses on the easier task of judging the correctness of outputs. An implementation of the idea in a tool for synthesizing programs generating UIs is evaluated, showing impressive improvements over the baseline. = Strong/Weak Points + The idea is surprisingly simple and applies to an important problem in program synthesis. + The experiments show that the method works very well in the UI-generation domain - The paper repeatedly claims general applicability to program synthesizers, but is only evaluated in the specific domain of UI-generating programs. I have substantial doubts that the approach would work as well in the domains of, e.g., string manipulation, Karel, or data structure transformations. My doubts are based on the fact that there are easily generalizable rules for UIs (no overlaps, symmetry, ...), whereas other domains are less easily described. This creates a substantial gap between paper claims and empirical results. - The writing is somewhat sloppy (see below), which makes it sometimes hard to understand. Names such as *views* are used without explanation, and it's not explained how a device is an input to a program (yes, I get what this means, but it makes it unnecessarily hard to follow the paper) = Recommendation I would ask the authors to rewrite their paper to make less general claims, but believe that the general idea of judging the correctness of a program (or policy) by evaluating it on different inputs is a powerful concept that would be of substantial value to the wider ICLR audience. Improving the readability of the paper would make me improve my rating to a full accept. = Minor Comments * page 1, par *Generalization challenge*: The second sentence here is 4 lines long and very hard to follow. Please rephrase. * page 2, par 2: *no large real-word datasets exists* -> exist * page 2, par 3: *even when both optimizations of InferUI are disabled*: at this point, the reader doesn't know about any optimizations of InferUI. * page 4, par 1: *i.e., * - is undefined here (will be defined later on the page) * page 4, par 2: *Generate a candidate program * - in step 2, there are suddenly also , which are never explicitly generated. Either adapt this step, or explicitly generate them in step 2 based on the distinguishing input * page 7 par 2: *We use one screen dimension as the input specification , the second as the distinguishing input* - this confused me, as the paper discussed discovering the distinguishing input (page 4, paragraph *Finding a distinguishing input*), whereas it sounds here like that input is manually selected.
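To make the refinement loop concrete, here is a toy, runnable sketch of the idea the summary describes: generate candidates consistent with the spec, find an input where they disagree, let a scorer judge the competing outputs, and grow the specification. The tiny program space and the scoring heuristic are hypothetical stand-ins, not InferUI's actual components.

```python
# Toy, runnable sketch of the refinement loop; all pieces are illustrative.
PROGRAMS = [lambda x, k=k: x + k for k in range(5)] + [lambda x, k=k: x * k for k in range(5)]

def synthesize(spec):
    # keep every candidate program consistent with the current I/O examples
    return [p for p in PROGRAMS if all(p(x) == y for x, y in spec)]

def find_distinguishing_input(cands, inputs=range(10)):
    for x in inputs:
        if len({p(x) for p in cands}) > 1:   # candidates disagree on this input
            return x
    return None

def score_output(x, y):
    # stand-in for the learned "which output looks correct" model
    return -abs(y)

def refine(spec, max_rounds=5):
    cands = synthesize(spec)
    for _ in range(max_rounds):
        x = find_distinguishing_input(cands)
        if x is None:
            break
        best_y = max({p(x) for p in cands}, key=lambda y: score_output(x, y))
        spec = spec + [(x, best_y)]          # resolve the ambiguity with a new example
        cands = synthesize(spec)
    return cands[0]

print(refine([(1, 2)])(10))                  # surviving candidate applied to a new input
```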
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nGuiding Program Synthesis by Learning to Generate Examples\nA key challenge of existing program synthesizers is ensuring that the synthesized program generalizes well. This can be difficult to achieve as the specification provided by the end user is often limited, containing as few as one or two input-output examples. In this paper we address this challenge via an iterative approach that finds ambiguities in the provided specification and learns to resolve these by generating additional input-output examples. The main insight is to reduce the problem of selecting which program generalizes well to the simpler task of deciding which output is correct. As a result, to train our probabilistic models, we can take advantage of the large amounts of data in the form of program outputs, which are often much easier to obtain than the corresponding ground-truth programs.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n= Summary A method for a refinement loop for program synthesizers operating on input/ouput specifications is presented. The core idea is to generate several candidate solutions, execute them on several inputs, and then use a learned component to judge which of the resulting input/output pairs are most likely to be correct. This avoids having to judge the correctness of the generated programs and instead focuses on the easier task of judging the correctness of outputs. An implementation of the idea in a tool for synthesizing programs generating UIs is evaluated, showing impressive improvements over the baseline. = Strong/Weak Points + The idea is surprisingly simple and applies to an important problem in program synthesis. + The experiments show that the method works very well in UI-generation domain - The paper repeatedly claims general applicability to program synthesizers, but is only evaluated in the specific domain of UI-generating programs. I have substantial doubts that the approach would work as well in the domains of, e.g., string manipulation, Karel, or data structure transformations. 
My doubts are based on the fact that there are easily generalizable rules for UIs (no overlaps, symmetry, ...), whereas other domains are less easily described. This creates a substantial gap between paper claims and empirical results. - The writting is somewhat sloppy (see below), which makes it sometimes hard to understand. Names such as *views* are used without explanation, and it*s not explained how a device is an input to a program (yes, I get what this means, but it makes in unnecessarily hard to follow the paper) = Recommendation I would ask the authors to rewrite their paper to make less general claims, but believe that the general idea of judging the correctness of a program (or policy) by evaluating it on different inputs is a powerful concept that would be of substantial value to the wider ICLR audience. Improving the readability of the paper would make me improve my rating to a full accept. = Minor Comments * page 1, par *Generalization challenge*: The second sentence here is 4 lines long and very hard to follow. Please rephrase. * page 2, par 2: *no large real-word datasets exists* -> exist * page 2, par 3: *even when both optimizations of InferUI are disabled*: at this point, the reader doesn*t know about any optimizations of InferUI. * page 4, par 1: *i.e., * - is undefined here (will be defined later on the page) * page 4, par 2: *Generate a candidate program * - in step 2, there are suddenly also , which are never explicitly generated. Either adapt this step, or explicitly generate them in step 2 based on the distinguishing input * page 7 par 2: *We use one screen dimension as the input specification , the second as the distinguishing input* - this confused me, as the paper discussed discovering the distinguishing input (page 4, paragraph *Finding a distinguishing input*), whereas it sounds here like that input is manually selected.", "role": "assistant" } ]
[ "= Summary A method for a refinement loop for program synthesizers operating on input/ouput specifications is presented.", "The core idea is to generate several candidate solutions, execute them on several inputs, and then use a learned component to judge which of the resulting input/output pairs are most likely to be correct.", "This avoids having to judge the correctness of the generated programs and instead focuses on the easier task of judging the correctness of outputs.", "An implementation of the idea in a tool for synthesizing programs generating UIs is evaluated, showing impressive improvements over the baseline.", "= Strong/Weak Points + The idea is surprisingly simple and applies to an important problem in program synthesis.", "+ The experiments show that the method works very well in UI-generation domain - The paper repeatedly claims general applicability to program synthesizers, but is only evaluated in the specific domain of UI-generating programs.", "I have substantial doubts that the approach would work as well in the domains of, e.g., string manipulation, Karel, or data structure transformations.", "My doubts are based on the fact that there are easily generalizable rules for UIs (no overlaps, symmetry, ...), whereas other domains are less easily described.", "This creates a substantial gap between paper claims and empirical results.", "- The writting is somewhat sloppy (see below), which makes it sometimes hard to understand.", "Names such as *views* are used without explanation, and it*s not explained how a device is an input to a program (yes, I get what this means, but it makes in unnecessarily hard to follow the paper) = Recommendation I would ask the authors to rewrite their paper to make less general claims, but believe that the general idea of judging the correctness of a program (or policy) by evaluating it on different inputs is a powerful concept that would be of substantial value to the wider ICLR audience.", "Improving the readability of the paper would make me improve my rating to a full accept.", "= Minor Comments * page 1, par *Generalization challenge*: The second sentence here is 4 lines long and very hard to follow.", "Please rephrase.", "* page 2, par 2: *no large real-word datasets exists* -> exist * page 2, par 3: *even when both optimizations of InferUI are disabled*: at this point, the reader doesn*t know about any optimizations of InferUI.", "* page 4, par 1: *i.e., * - is undefined here (will be defined later on the page) * page 4, par 2: *Generate a candidate program * - in step 2, there are suddenly also , which are never explicitly generated.", "Either adapt this step, or explicitly generate them in step 2 based on the distinguishing input * page 7 par 2: *We use one screen dimension as the input specification , the second as the distinguishing input* - this confused me, as the paper discussed discovering the distinguishing input (page 4, paragraph *Finding a distinguishing input*), whereas it sounds here like that input is manually selected." ]
{ "criticism": 8, "example": 4, "importance_and_relevance": 4, "materials_and_methods": 8, "praise": 4, "presentation_and_reporting": 8, "results_and_discussion": 3, "suggestion_and_solution": 4, "total": 17 }
0.470588
0.235294
0.235294
0.470588
0.235294
0.470588
0.176471
0.235294
17
1
1
1
1
1
1
1
1
2.529412
2.276893
0.252519
Generalized Convolutional Forest Networks for Domain Generalization and Visual Recognition
When constructing random forests, it is of prime importance to ensure high accuracy and low correlation of individual tree classifiers for good performance. Nevertheless, it is typically difficult for existing random forest methods to strike a good balance between these conflicting factors. In this work, we propose a generalized convolutional forest networks to learn a feature space to maximize the strength of individual tree classifiers while minimizing the respective correlation. The feature space is iteratively constructed by a probabilistic triplet sampling method based on the distribution obtained from the splits of the random forest. The sampling process is designed to pull the data of the same label together for higher strength and push away the data frequently falling to the same leaf nodes. We perform extensive experiments on five image classification and two domain generalization datasets with ResNet-50 and DenseNet-161 backbone networks. Experimental results show that the proposed algorithm performs favorably against state-of-the-art methods.
This paper proposes a method for generalized image recognition based on a random forest that directly uses the features extracted by the backbone and assigns pseudo-labels to the data. The learning is performed with a triplet loss adapted for better generalization. Decision: weak reject Motivation: the method is incremental but presented in a generally clear and easy-to-follow way; the authors present a simple but interesting trick to make the triplet loss more effective on a random forest in the case of generalization to a new unlabeled dataset. This method looks incremental to me because it addresses the problem of pseudo-labelling for learning on a new dataset and, instead of using confidence measures, uses a random forest to assign labels. The experimental section of the paper is a bit confusing because it is not clear whether the results presented use a comparable network (e.g. ResNet18) to the cited state-of-the-art papers; from further reading I am confident the authors compared fairly with similar architectures. Authors should perhaps stress that they compare with the state of the art under fair conditions to avoid confusion as in my case. How much is the overhead of building the random forest for each iteration of the learning (algorithm 1)? A more detailed analysis of this would be useful for understanding the method. Could this method be used to train a network from scratch on unlabeled data or on data with noisy labels? How did the authors choose the T decision trees; is there any ablation study, general practice, or heuristics behind the choice of 1, 10, 50? The comparison with the state of the art in Tab 3 and Tab 4 shows that for some datasets other techniques are better; did the authors draw some conclusions from that? Comparing Tab 3 and 4 with Tab 5/6/7/8/9, it looks like this method can work, but only in the case of much bigger networks like ResNet50 and DenseNet 161, which can limit its use to high-resource (computing power) cases. Replicability: I think with improvements in the experimental section the method results can be replicated. At the moment it lacks many details like learning rates, epochs of training, and other useful information. Minor: there are two lines over the 9-page limit
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nGeneralized Convolutional Forest Networks for Domain Generalization and Visual Recognition\nWhen constructing random forests, it is of prime importance to ensure high accuracy and low correlation of individual tree classifiers for good performance. Nevertheless, it is typically difficult for existing random forest methods to strike a good balance between these conflicting factors. In this work, we propose a generalized convolutional forest networks to learn a feature space to maximize the strength of individual tree classifiers while minimizing the respective correlation. The feature space is iteratively constructed by a probabilistic triplet sampling method based on the distribution obtained from the splits of the random forest. The sampling process is designed to pull the data of the same label together for higher strength and push away the data frequently falling to the same leaf nodes. We perform extensive experiments on five image classification and two domain generalization datasets with ResNet-50 and DenseNet-161 backbone networks. Experimental results show that the proposed algorithm performs favorably against state-of-the-art methods.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a method for generalized image recognition based on random forest, that use directly the features extracted by the backbone, that assign pseudo-labels to the data. The learning is performed with a triplet loss adapted for better generalization. Decision: weak reject Motivation: the method is incremental and presented in general in a clear way and easy to follow, the authors present a simple but interesting trick to make the triplet loss more effective on a random forest in the case of generalization to a new unlabeled dataset. This method looks incremental to me because it is addressing the problem of pseudo-labelling for learning on a new dataset and instead of using confidence measures uses a random forest to assign labels. 
The experimental section of the paper is a bit confusing because is not clear if the results presented are with comparable network (e.g. ResNet18) like the cited state-of-the-art papers, from further readings I am confident the autors compared fairly with similar architectures. Authors should perhaps stress they compare with state-of-the-art in fair condition to avoid confusion as in my case. How much is the overhead of building the random forest for each iteration of the learning (algorithm 1), a more detailed analysis on this is useful for understanding the method. Could this method be used to train a network from scratch on an unlabeled data or on data with noisy labels? How did the authors choose the T decision trees, is there any ablation study, general practice or euristics behind the choice of 1,10,50? The comparison with state-of-the-art Tab 3 and Tab 4 shows that for some datasets other techniques are better, did the authors draw some conclusions from that? Comparing Tab 3 and 4 with Tab 5/6/7/8/9 looks like this method can work but only in the case of much bigger network like ResNet50 and DenseNet 161 which can limit its use for high resources (computing power) cases. Replicability: I think with improvements in the experimental section the method results can be replicated. At the moment it lack many details like learning rates, epoch of training and other useful information that are useful. Minor: there are two lines out of the 9 page limit", "role": "assistant" } ]
[ "This paper proposes a method for generalized image recognition based on random forest, that use directly the features extracted by the backbone, that assign pseudo-labels to the data.", "The learning is performed with a triplet loss adapted for better generalization.", "Decision: weak reject Motivation: the method is incremental and presented in general in a clear way and easy to follow, the authors present a simple but interesting trick to make the triplet loss more effective on a random forest in the case of generalization to a new unlabeled dataset.", "This method looks incremental to me because it is addressing the problem of pseudo-labelling for learning on a new dataset and instead of using confidence measures uses a random forest to assign labels.", "The experimental section of the paper is a bit confusing because is not clear if the results presented are with comparable network (e.g.", "ResNet18) like the cited state-of-the-art papers, from further readings I am confident the autors compared fairly with similar architectures.", "Authors should perhaps stress they compare with state-of-the-art in fair condition to avoid confusion as in my case.", "How much is the overhead of building the random forest for each iteration of the learning (algorithm 1), a more detailed analysis on this is useful for understanding the method.", "Could this method be used to train a network from scratch on an unlabeled data or on data with noisy labels?", "How did the authors choose the T decision trees, is there any ablation study, general practice or euristics behind the choice of 1,10,50?", "The comparison with state-of-the-art Tab 3 and Tab 4 shows that for some datasets other techniques are better, did the authors draw some conclusions from that?", "Comparing Tab 3 and 4 with Tab 5/6/7/8/9 looks like this method can work but only in the case of much bigger network like ResNet50 and DenseNet 161 which can limit its use for high resources (computing power) cases.", "Replicability: I think with improvements in the experimental section the method results can be replicated.", "At the moment it lack many details like learning rates, epoch of training and other useful information that are useful.", "Minor: there are two lines out of the 9 page limit" ]
{ "criticism": 2, "example": 4, "importance_and_relevance": 3, "materials_and_methods": 12, "praise": 1, "presentation_and_reporting": 4, "results_and_discussion": 6, "suggestion_and_solution": 3, "total": 15 }
0.133333
0.266667
0.2
0.8
0.066667
0.266667
0.4
0.2
15
1
1
1
1
1
1
1
1
2.333333
2.270204
0.06313
Generalized Convolutional Forest Networks for Domain Generalization and Visual Recognition
When constructing random forests, it is of prime importance to ensure high accuracy and low correlation of individual tree classifiers for good performance. Nevertheless, it is typically difficult for existing random forest methods to strike a good balance between these conflicting factors. In this work, we propose a generalized convolutional forest networks to learn a feature space to maximize the strength of individual tree classifiers while minimizing the respective correlation. The feature space is iteratively constructed by a probabilistic triplet sampling method based on the distribution obtained from the splits of the random forest. The sampling process is designed to pull the data of the same label together for higher strength and push away the data frequently falling to the same leaf nodes. We perform extensive experiments on five image classification and two domain generalization datasets with ResNet-50 and DenseNet-161 backbone networks. Experimental results show that the proposed algorithm performs favorably against state-of-the-art methods.
The paper proposes a scheme to learn representations and a random forest model in a closely coupled manner, where the representations are learned to simultaneously improve the predictive performance of the individual trees in the random forest and reduce the correlation between these trees. This is achieved via a novel triplet sampling scheme where the positive samples are used to improve the predictive performance of the individual trees, while the negative samples are used to reduce the correlation between the trees. The empirical results indicate the improvement of the proposed scheme over random forests learned over the original representations across various datasets. The manuscript is well presented and I am leaning towards accept, one major issue with this manuscript is the number of pages which crosses the 8 page recommended limit in the ICLR CfP. Given that higher standard, I feel that there are multiple minor issues which should be addressed. - It is not clear why the proposed scheme is especially well suited for *domain generalization* and *visual recognition* and not for general supervised classification tasks (like those on CIFAR, SVHN, etc). This needs to be clarified. - The theoretical results for random forest rely on the independence of the individual trees. Given that the trees in the random forest are no longer independent in the proposed scheme, the upper bound in Equation (1) may no longer be valid. While this does not affect the empirical performance, it might be good to discuss the theoretical implications of the proposed scheme. - It is not clear why the curves for the canonical features in Figure 3 not improving with the number of iterations (which corresponds to number of trees to the best of my understanding). The results in Table 1 do indicate improvement in the performance with canonical features, putting Figure 3 and Table 1 at odds. Minor: - The individual trees in a random forest are supposed to have low bias but high variance, and the heterogeneity between the trees is supposed to squash the variance with the bagging. The introduction mentions that the individual trees in the forest are *less biased*. Maybe this is because of our different definitions of bias but any clarification here would be helpful.
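As a hedged illustration of the sampling idea summarized above (positives share the anchor's label; negatives are drawn preferentially from samples that frequently fall into the same leaves as the anchor), here is a small sketch built on scikit-learn's forest API. The paper's exact sampling probabilities, loss, and training loop are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sample_triplet(features, labels, forest, anchor, rng=np.random.default_rng()):
    leaves = forest.apply(features)                      # (n_samples, n_trees) leaf indices
    candidates = np.flatnonzero((labels == labels[anchor]) & (np.arange(len(labels)) != anchor))
    positive = rng.choice(candidates)                    # positive: shares the anchor's label
    co_leaf = (leaves == leaves[anchor]).mean(axis=1)    # fraction of trees sharing a leaf
    co_leaf[anchor] = 0.0
    negative = rng.choice(len(labels), p=co_leaf / co_leaf.sum())
    return anchor, positive, negative

X = np.random.randn(200, 16)
y = np.random.randint(0, 4, size=200)
rf = RandomForestClassifier(n_estimators=10, min_samples_leaf=5, random_state=0).fit(X, y)
print(sample_triplet(X, y, rf, anchor=0))
```

In the setting both reviews describe, such triplets would feed a triplet loss that updates the backbone features, after which the forest is rebuilt; only the sampling step is sketched here.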
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nGeneralized Convolutional Forest Networks for Domain Generalization and Visual Recognition\nWhen constructing random forests, it is of prime importance to ensure high accuracy and low correlation of individual tree classifiers for good performance. Nevertheless, it is typically difficult for existing random forest methods to strike a good balance between these conflicting factors. In this work, we propose a generalized convolutional forest networks to learn a feature space to maximize the strength of individual tree classifiers while minimizing the respective correlation. The feature space is iteratively constructed by a probabilistic triplet sampling method based on the distribution obtained from the splits of the random forest. The sampling process is designed to pull the data of the same label together for higher strength and push away the data frequently falling to the same leaf nodes. We perform extensive experiments on five image classification and two domain generalization datasets with ResNet-50 and DenseNet-161 backbone networks. Experimental results show that the proposed algorithm performs favorably against state-of-the-art methods.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes a scheme to learn representations and a random forest model in a closely coupled manner, where the representations are learned to simultaneously improve the predictive performance of the individual trees in the random forest and reduce the correlation between these trees. This is achieved via a novel triplet sampling scheme where the positive samples are used to improve the predictive performance of the individual trees, while the negative samples are used to reduce the correlation between the trees. The empirical results indicate the improvement of the proposed scheme over random forests learned over the original representations across various datasets. 
The manuscript is well presented and I am leaning towards accept, one major issue with this manuscript is the number of pages which crosses the 8 page recommended limit in the ICLR CfP. Given that higher standard, I feel that there are multiple minor issues which should be addressed. - It is not clear why the proposed scheme is especially well suited for *domain generalization* and *visual recognition* and not for general supervised classification tasks (like those on CIFAR, SVHN, etc). This needs to be clarified. - The theoretical results for random forest rely on the independence of the individual trees. Given that the trees in the random forest are no longer independent in the proposed scheme, the upper bound in Equation (1) may no longer be valid. While this does not affect the empirical performance, it might be good to discuss the theoretical implications of the proposed scheme. - It is not clear why the curves for the canonical features in Figure 3 not improving with the number of iterations (which corresponds to number of trees to the best of my understanding). The results in Table 1 do indicate improvement in the performance with canonical features, putting Figure 3 and Table 1 at odds. Minor: - The individual trees in a random forest are supposed to have low bias but high variance, and the heterogeneity between the trees is supposed to squash the variance with the bagging. The introduction mentions that the individual trees in the forest are *less biased*. Maybe this is because of our different definitions of bias but any clarification here would be helpful.", "role": "assistant" } ]
[ "The paper proposes a scheme to learn representations and a random forest model in a closely coupled manner, where the representations are learned to simultaneously improve the predictive performance of the individual trees in the random forest and reduce the correlation between these trees.", "This is achieved via a novel triplet sampling scheme where the positive samples are used to improve the predictive performance of the individual trees, while the negative samples are used to reduce the correlation between the trees.", "The empirical results indicate the improvement of the proposed scheme over random forests learned over the original representations across various datasets.", "The manuscript is well presented and I am leaning towards accept, one major issue with this manuscript is the number of pages which crosses the 8 page recommended limit in the ICLR CfP.", "Given that higher standard, I feel that there are multiple minor issues which should be addressed.", "- It is not clear why the proposed scheme is especially well suited for *domain generalization* and *visual recognition* and not for general supervised classification tasks (like those on CIFAR, SVHN, etc).", "This needs to be clarified.", "- The theoretical results for random forest rely on the independence of the individual trees.", "Given that the trees in the random forest are no longer independent in the proposed scheme, the upper bound in Equation (1) may no longer be valid.", "While this does not affect the empirical performance, it might be good to discuss the theoretical implications of the proposed scheme.", "- It is not clear why the curves for the canonical features in Figure 3 not improving with the number of iterations (which corresponds to number of trees to the best of my understanding).", "The results in Table 1 do indicate improvement in the performance with canonical features, putting Figure 3 and Table 1 at odds.", "Minor: - The individual trees in a random forest are supposed to have low bias but high variance, and the heterogeneity between the trees is supposed to squash the variance with the bagging.", "The introduction mentions that the individual trees in the forest are *less biased*.", "Maybe this is because of our different definitions of bias but any clarification here would be helpful." ]
{ "criticism": 4, "example": 1, "importance_and_relevance": 1, "materials_and_methods": 10, "praise": 1, "presentation_and_reporting": 3, "results_and_discussion": 9, "suggestion_and_solution": 4, "total": 15 }
0.266667
0.066667
0.066667
0.666667
0.066667
0.2
0.6
0.266667
15
1
1
1
1
1
1
1
1
2.2
2.13687
0.06313
PCMC-Net: Feature-based Pairwise Choice Markov Chains
Pairwise Choice Markov Chains (PCMC) have been recently introduced to overcome limitations of choice models based on traditional axioms unable to express empirical observations from modern behavior economics like framing effects and asymmetric dominance. The inference approach that estimates the transition rates between each possible pair of alternatives via maximum likelihood suffers when the examples of each alternative are scarce and is inappropriate when new alternatives can be observed at test time. In this work, we propose an amortized inference approach for PCMC by embedding its definition into a neural network that represents transition rates as a function of the alternatives’ and individual’s features. We apply our construction to the complex case of airline itinerary booking where singletons are common (due to varying prices and individual-specific itineraries), and asymmetric dominance and behaviors strongly dependent on market segments are observed. Experiments show our network significantly outperforming, in terms of prediction accuracy and logarithmic loss, feature engineered standard and latent class Multinomial Logit models as well as recent machine learning approaches.
1.The goal of the paper is to connect flexible choice modeling with a modern approach to ML architecture to make said choice modeling scalable, tractable, and practical. 2. The approach of the paper is well motivated intuitively, but could more explicitly show that PCMC-Net is needed to fix inferential problems with PCMC and that e.g. SGD and some regularization + the linear parameterization suggested by the original PCMC authors isn*t scalable in itself. 3. The approximation theorem is useful and clean, and the empirical results are intriguing. While consideration of more datasets would improve the results, the metrics and baselines considered demonstrate a considerable empirical case for this method. My *weak accept* decision is closer to *accept* than *weak reject.* (Edit 11/25/19: I raised my score in to accept in conjunction with the author*s improvements in the open discussion phase) Improvement areas(all relatively minor): - While I personally enjoy the choice axioms focused on by the PCMC model and this paper, stochastic transitivity, IIA, and regularity are probably more important to emphasize than Contractibility. Because the properties of UE and contractibility were not used, it may be more appropriate to use this space to introduce more of the literature on neural-nets-as-feature-embeddings stuff. - This paper could be improved by generalizing to a few other choice models- in particular the CDM (https://arxiv.org/abs/1902.03266) may be a good candidate for your method. This is more a suggestion for future work if you expand this promising initial result. - Hyper-parameter tuning: I noticed that several of your hyper parameters were set to extremal values for the ranges you considered. If you tuned the other algorithms* hyper parameters the same way, it could be the case that the relative performance is explained by the appropriateness of those ranges. Would be interesting to have a more in-depth treatment of this, but I do understand that it*s a lot of work. Specific Notes: Theorem 1 is nice, and the proof is clean, but doesn*t explicitly note that a PCMC model jointly specifies a family of distributions pi_S for each S in 2^U obtained by subsetting a single rate matrix Q indexed by U. It*s clear that PCMC-Net will still approximate under this definition, as hat q_ij approximates each q_ij because hat q_ij doesn*t depend on S. While the more explicit statement is true with the same logic in the theorem, the notational choice to have *X_i* represent the *i-th* element in S is confusing at first, as e.g. X_1 is a different feature vector for S = {2,3} and S={1,3}. I don*t see this issue as disqualifying, but it took me a while to realize that there wasn*t more than a notational abuse problem when I returned to the definitions where the indexing depended on the set S under consideration. Typos/small concerns: -Above equation (1), the number of parameters in Q_S is |S|(|S|-1) rather than (|S|-1)^2, as each of the |S| alternatives has a transition rate to the other |S|-1 alternatives. -Below equation (3), I think you mean j in S rather than 1<= j <= |S|, as S may not be {1,2,...,|S|}. Later I noticed that you always index S with {1,dots,|S|}, but using i in S in combination with 1<=j<=|S| was a bit confusing. -X_i as the i-th element of S is a bit of an abuse of notation, as it surpasses dependence on S -In Figure 1, you show X_0 in a vector that is referred to as *S.* It is my understanding that X_0 represents user features. As the user is not in the set, this is confusing. 
The use of a vertical ellipsis to connect \rho(X_0) to \rho(X_1) is also confusion, as \rho(X_1) is input into the Cartesian product while X_0 is input into the direct sum. Overall, nice job! Really enjoyed the paper and approach, good to see connections made between these literatures so that progress in discrete choice can be used at scale.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPCMC-Net: Feature-based Pairwise Choice Markov Chains\nPairwise Choice Markov Chains (PCMC) have been recently introduced to overcome limitations of choice models based on traditional axioms unable to express empirical observations from modern behavior economics like framing effects and asymmetric dominance. The inference approach that estimates the transition rates between each possible pair of alternatives via maximum likelihood suffers when the examples of each alternative are scarce and is inappropriate when new alternatives can be observed at test time. In this work, we propose an amortized inference approach for PCMC by embedding its definition into a neural network that represents transition rates as a function of the alternatives’ and individual’s features. We apply our construction to the complex case of airline itinerary booking where singletons are common (due to varying prices and individual-specific itineraries), and asymmetric dominance and behaviors strongly dependent on market segments are observed. Experiments show our network significantly outperforming, in terms of prediction accuracy and logarithmic loss, feature engineered standard and latent class Multinomial Logit models as well as recent machine learning approaches.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n1.The goal of the paper is to connect flexible choice modeling with a modern approach to ML architecture to make said choice modeling scalable, tractable, and practical. 2. The approach of the paper is well motivated intuitively, but could more explicitly show that PCMC-Net is needed to fix inferential problems with PCMC and that e.g. SGD and some regularization + the linear parameterization suggested by the original PCMC authors isn*t scalable in itself. 3. The approximation theorem is useful and clean, and the empirical results are intriguing. While consideration of more datasets would improve the results, the metrics and baselines considered demonstrate a considerable empirical case for this method. 
My *weak accept* decision is closer to *accept* than *weak reject.* (Edit 11/25/19: I raised my score in to accept in conjunction with the author*s improvements in the open discussion phase) Improvement areas(all relatively minor): - While I personally enjoy the choice axioms focused on by the PCMC model and this paper, stochastic transitivity, IIA, and regularity are probably more important to emphasize than Contractibility. Because the properties of UE and contractibility were not used, it may be more appropriate to use this space to introduce more of the literature on neural-nets-as-feature-embeddings stuff. - This paper could be improved by generalizing to a few other choice models- in particular the CDM (https://arxiv.org/abs/1902.03266) may be a good candidate for your method. This is more a suggestion for future work if you expand this promising initial result. - Hyper-parameter tuning: I noticed that several of your hyper parameters were set to extremal values for the ranges you considered. If you tuned the other algorithms* hyper parameters the same way, it could be the case that the relative performance is explained by the appropriateness of those ranges. Would be interesting to have a more in-depth treatment of this, but I do understand that it*s a lot of work. Specific Notes: Theorem 1 is nice, and the proof is clean, but doesn*t explicitly note that a PCMC model jointly specifies a family of distributions pi_S for each S in 2^U obtained by subsetting a single rate matrix Q indexed by U. It*s clear that PCMC-Net will still approximate under this definition, as hat q_ij approximates each q_ij because hat q_ij doesn*t depend on S. While the more explicit statement is true with the same logic in the theorem, the notational choice to have *X_i* represent the *i-th* element in S is confusing at first, as e.g. X_1 is a different feature vector for S = {2,3} and S={1,3}. I don*t see this issue as disqualifying, but it took me a while to realize that there wasn*t more than a notational abuse problem when I returned to the definitions where the indexing depended on the set S under consideration. Typos/small concerns: -Above equation (1), the number of parameters in Q_S is |S|(|S|-1) rather than (|S|-1)^2, as each of the |S| alternatives has a transition rate to the other |S|-1 alternatives. -Below equation (3), I think you mean j in S rather than 1<= j <= |S|, as S may not be {1,2,...,|S|}. Later I noticed that you always index S with {1,dots,|S|}, but using i in S in combination with 1<=j<=|S| was a bit confusing. -X_i as the i-th element of S is a bit of an abuse of notation, as it surpasses dependence on S -In Figure 1, you show X_0 in a vector that is referred to as *S.* It is my understanding that X_0 represents user features. As the user is not in the set, this is confusing. The use of a vertical ellipsis to connect \rho(X_0) to \rho(X_1) is also confusion, as \rho(X_1) is input into the Cartesian product while X_0 is input into the direct sum. Overall, nice job! Really enjoyed the paper and approach, good to see connections made between these literatures so that progress in discrete choice can be used at scale.", "role": "assistant" } ]
[ "1.The goal of the paper is to connect flexible choice modeling with a modern approach to ML architecture to make said choice modeling scalable, tractable, and practical.", "2.", "The approach of the paper is well motivated intuitively, but could more explicitly show that PCMC-Net is needed to fix inferential problems with PCMC and that e.g.", "SGD and some regularization + the linear parameterization suggested by the original PCMC authors isn*t scalable in itself.", "3.", "The approximation theorem is useful and clean, and the empirical results are intriguing.", "While consideration of more datasets would improve the results, the metrics and baselines considered demonstrate a considerable empirical case for this method.", "My *weak accept* decision is closer to *accept* than *weak reject.", "* (Edit 11/25/19: I raised my score in to accept in conjunction with the author*s improvements in the open discussion phase) Improvement areas(all relatively minor): - While I personally enjoy the choice axioms focused on by the PCMC model and this paper, stochastic transitivity, IIA, and regularity are probably more important to emphasize than Contractibility.", "Because the properties of UE and contractibility were not used, it may be more appropriate to use this space to introduce more of the literature on neural-nets-as-feature-embeddings stuff.", "- This paper could be improved by generalizing to a few other choice models- in particular the CDM (https://arxiv.org/abs/1902.03266) may be a good candidate for your method.", "This is more a suggestion for future work if you expand this promising initial result.", "- Hyper-parameter tuning: I noticed that several of your hyper parameters were set to extremal values for the ranges you considered.", "If you tuned the other algorithms* hyper parameters the same way, it could be the case that the relative performance is explained by the appropriateness of those ranges.", "Would be interesting to have a more in-depth treatment of this, but I do understand that it*s a lot of work.", "Specific Notes: Theorem 1 is nice, and the proof is clean, but doesn*t explicitly note that a PCMC model jointly specifies a family of distributions pi_S for each S in 2^U obtained by subsetting a single rate matrix Q indexed by U.", "It*s clear that PCMC-Net will still approximate under this definition, as hat q_ij approximates each q_ij because hat q_ij doesn*t depend on S. 
While the more explicit statement is true with the same logic in the theorem, the notational choice to have *X_i* represent the *i-th* element in S is confusing at first, as e.g.", "X_1 is a different feature vector for S = {2,3} and S={1,3}.", "I don*t see this issue as disqualifying, but it took me a while to realize that there wasn*t more than a notational abuse problem when I returned to the definitions where the indexing depended on the set S under consideration.", "Typos/small concerns: -Above equation (1), the number of parameters in Q_S is |S|(|S|-1) rather than (|S|-1)^2, as each of the |S| alternatives has a transition rate to the other |S|-1 alternatives.", "-Below equation (3), I think you mean j in S rather than 1<= j <= |S|, as S may not be {1,2,...,|S|}.", "Later I noticed that you always index S with {1,dots,|S|}, but using i in S in combination with 1<=j<=|S| was a bit confusing.", "-X_i as the i-th element of S is a bit of an abuse of notation, as it surpasses dependence on S -In Figure 1, you show X_0 in a vector that is referred to as *S.* It is my understanding that X_0 represents user features.", "As the user is not in the set, this is confusing.", "The use of a vertical ellipsis to connect \rho(X_0) to \rho(X_1) is also confusion, as \rho(X_1) is input into the Cartesian product while X_0 is input into the direct sum.", "Overall, nice job!", "Really enjoyed the paper and approach, good to see connections made between these literatures so that progress in discrete choice can be used at scale." ]
{ "criticism": 8, "example": 2, "importance_and_relevance": 6, "materials_and_methods": 12, "praise": 8, "presentation_and_reporting": 8, "results_and_discussion": 7, "suggestion_and_solution": 9, "total": 27 }
0.296296
0.074074
0.222222
0.444444
0.296296
0.296296
0.259259
0.333333
27
1
1
1
1
1
1
1
1
2.222222
-0.871134
3.093356
Towards Verified Robustness under Text Deletion Interventions
Neural networks are widely used in Natural Language Processing, yet despite their empirical successes, their behaviour is brittle: they are both over-sensitive to small input changes, and under-sensitive to deletions of large fractions of input text. This paper aims to tackle under-sensitivity in the context of natural language inference by ensuring that models do not become more confident in their predictions as arbitrary subsets of words from the input text are deleted. We develop a novel technique for formal verification of this specification for models based on the popular decomposable attention mechanism by employing the efficient yet effective interval bound propagation (IBP) approach. Using this method we can efficiently prove, given a model, whether a particular sample is free from the under-sensitivity problem. We compare different training methods to address under-sensitivity, and compare metrics to measure it. In our experiments on the SNLI and MNLI datasets, we observe that IBP training leads to a significantly improved verified accuracy. On the SNLI test set, we can verify 18.4% of samples, a substantial improvement over only 2.8% using standard training.
This work is an application of interval bound propagation on evaluating the robustness of NLI model. This work is well-motivated, assuming that the confidence of a neural model should be lower when part of the sentence is missed. However, the application of vanilla IBP is quite limited in certain model architectures. In this work, the author considers specifically the decomposable attention model, which is a very shallow network, and not a state-of-the-art model anymore. It is non-trivial to adapt the proposed method to other more advanced models, such as the ones based on the Transformer model. Hence, this work does not make enough contribution to be accepted.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTowards Verified Robustness under Text Deletion Interventions\nNeural networks are widely used in Natural Language Processing, yet despite their empirical successes, their behaviour is brittle: they are both over-sensitive to small input changes, and under-sensitive to deletions of large fractions of input text. This paper aims to tackle under-sensitivity in the context of natural language inference by ensuring that models do not become more confident in their predictions as arbitrary subsets of words from the input text are deleted. We develop a novel technique for formal verification of this specification for models based on the popular decomposable attention mechanism by employing the efficient yet effective interval bound propagation (IBP) approach. Using this method we can efficiently prove, given a model, whether a particular sample is free from the under-sensitivity problem. We compare different training methods to address under-sensitivity, and compare metrics to measure it. In our experiments on the SNLI and MNLI datasets, we observe that IBP training leads to a significantly improved verified accuracy. On the SNLI test set, we can verify 18.4% of samples, a substantial improvement over only 2.8% using standard training.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis work is an application of interval bound propagation on evaluating the robustness of NLI model. This work is well-motivated, assuming that the confidence of a neural model should be lower when part of the sentence is missed. However, the application of vanilla IBP is quite limited in certain model architectures. In this work, the author considers specifically the decomposable attention model, which is a very shallow network, and not a state-of-the-art model anymore. It is non-trivial to adapt the proposed method to other more advanced models, such as the ones based on the Transformer model. Hence, this work does not make enough contribution to be accepted.", "role": "assistant" } ]
[ "This work is an application of interval bound propagation on evaluating the robustness of NLI model.", "This work is well-motivated, assuming that the confidence of a neural model should be lower when part of the sentence is missed.", "However, the application of vanilla IBP is quite limited in certain model architectures.", "In this work, the author considers specifically the decomposable attention model, which is a very shallow network, and not a state-of-the-art model anymore.", "It is non-trivial to adapt the proposed method to other more advanced models, such as the ones based on the Transformer model.", "Hence, this work does not make enough contribution to be accepted." ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 5, "praise": 1, "presentation_and_reporting": 1, "results_and_discussion": 1, "suggestion_and_solution": 0, "total": 6 }
0.5
0
0.333333
0.833333
0.166667
0.166667
0.166667
0
6
1
0
1
1
1
1
1
0
2.166667
1.393328
0.773339
Abstract Diagrammatic Reasoning with Multiplex Graph Networks
Abstract reasoning, particularly in the visual domain, is a complex human ability, but it remains a challenging problem for artificial neural learning systems. In this work we propose MXGNet, a multilayer graph neural network for multi-panel diagrammatic reasoning tasks. MXGNet combines three powerful concepts, namely, object-level representation, graph neural networks and multiplex graphs, for solving visual reasoning tasks. MXGNet first extracts object-level representations for each element in all panels of the diagrams, and then forms a multi-layer multiplex graph capturing multiple relations between objects across different diagram panels. MXGNet summarises the multiple graphs extracted from the diagrams of the task, and uses this summarisation to pick the most probable answer from the given candidates. We have tested MXGNet on two types of diagrammatic reasoning tasks, namely Diagram Syllogisms and Raven Progressive Matrices (RPM). For an Euler Diagram Syllogism task MXGNet achieves state-of-the-art accuracy of 99.8%. For PGM and RAVEN, two comprehensive datasets for RPM reasoning, MXGNet outperforms the state-of-the-art models by a considerable margin.
This paper proposes using a new version of graph networks – multiplex graph networks – which do object representation followed by some form of graph processing and reasoning to answer *IQ test* style diagrammatic reasoning, in particular including Raven Progressive Matrices that have been previously studied (a little). The paper shows very strong results on multiple datasets, much stronger than previous results (from strong groups) on these datasets. On these grounds, I believe the paper should be accepted. However, the structure and writing of the paper was very frustrating to me. The paper just didn*t make much of an attempt to explain and then motivate/analyze the model used. I mean, if I were writing the paper, I would have considered and done many things, such as: - shortening the introduction - shortening the related work - making the presentation of the datasets more succinct - having only one figure that covers most of what is currently in figures 1 and 2 - putting details of what seem more ancillary details like the treatment of background lines objects in an appendix - remove Figure 3, which didn*t convey much to me in the absence of more careful explanation of the model. so that I could motivate, carefully explain, and evaluate the main model in the paper. But here, all these things fill the main text, and we*re told that we have to read the appendices to understand the model.... And the presentation in the appendix is more a dump-all-the-facts presentation than a careful development of the design. Nevertheless, the general direction of the architecture seems sound, and the results look very strong, and there are even some useful ablations in the appendix.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAbstract Diagrammatic Reasoning with Multiplex Graph Networks\nAbstract reasoning, particularly in the visual domain, is a complex human ability, but it remains a challenging problem for artificial neural learning systems. In this work we propose MXGNet, a multilayer graph neural network for multi-panel diagrammatic reasoning tasks. MXGNet combines three powerful concepts, namely, object-level representation, graph neural networks and multiplex graphs, for solving visual reasoning tasks. MXGNet first extracts object-level representations for each element in all panels of the diagrams, and then forms a multi-layer multiplex graph capturing multiple relations between objects across different diagram panels. MXGNet summarises the multiple graphs extracted from the diagrams of the task, and uses this summarisation to pick the most probable answer from the given candidates. We have tested MXGNet on two types of diagrammatic reasoning tasks, namely Diagram Syllogisms and Raven Progressive Matrices (RPM). For an Euler Diagram Syllogism task MXGNet achieves state-of-the-art accuracy of 99.8%. For PGM and RAVEN, two comprehensive datasets for RPM reasoning, MXGNet outperforms the state-of-the-art models by a considerable margin.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes using a new version of graph networks – multiplex graph networks – which do object representation followed by some form of graph processing and reasoning to answer *IQ test* style diagrammatic reasoning, in particular including Raven Progressive Matrices that have been previously studied (a little). The paper shows very strong results on multiple datasets, much stronger than previous results (from strong groups) on these datasets. On these grounds, I believe the paper should be accepted. However, the structure and writing of the paper was very frustrating to me. The paper just didn*t make much of an attempt to explain and then motivate/analyze the model used. 
I mean, if I were writing the paper, I would have considered and done many things, such as: - shortening the introduction - shortening the related work - making the presentation of the datasets more succinct - having only one figure that covers most of what is currently in figures 1 and 2 - putting details of what seem more ancillary details like the treatment of background lines objects in an appendix - remove Figure 3, which didn*t convey much to me in the absence of more careful explanation of the model. so that I could motivate, carefully explain, and evaluate the main model in the paper. But here, all these things fill the main text, and we*re told that we have to read the appendices to understand the model.... And the presentation in the appendix is more a dump-all-the-facts presentation than a careful development of the design. Nevertheless, the general direction of the architecture seems sound, and the results look very strong, and there are even some useful ablations in the appendix.", "role": "assistant" } ]
[ "This paper proposes using a new version of graph networks – multiplex graph networks – which do object representation followed by some form of graph processing and reasoning to answer *IQ test* style diagrammatic reasoning, in particular including Raven Progressive Matrices that have been previously studied (a little).", "The paper shows very strong results on multiple datasets, much stronger than previous results (from strong groups) on these datasets.", "On these grounds, I believe the paper should be accepted.", "However, the structure and writing of the paper was very frustrating to me.", "The paper just didn*t make much of an attempt to explain and then motivate/analyze the model used.", "I mean, if I were writing the paper, I would have considered and done many things, such as: - shortening the introduction - shortening the related work - making the presentation of the datasets more succinct - having only one figure that covers most of what is currently in figures 1 and 2 - putting details of what seem more ancillary details like the treatment of background lines objects in an appendix - remove Figure 3, which didn*t convey much to me in the absence of more careful explanation of the model.", "so that I could motivate, carefully explain, and evaluate the main model in the paper.", "But here, all these things fill the main text, and we*re told that we have to read the appendices to understand the model.... And the presentation in the appendix is more a dump-all-the-facts presentation than a careful development of the design.", "Nevertheless, the general direction of the architecture seems sound, and the results look very strong, and there are even some useful ablations in the appendix." ]
{ "criticism": 3, "example": 1, "importance_and_relevance": 0, "materials_and_methods": 6, "praise": 3, "presentation_and_reporting": 5, "results_and_discussion": 2, "suggestion_and_solution": 2, "total": 9 }
0.333333
0.111111
0
0.666667
0.333333
0.555556
0.222222
0.222222
9
1
1
0
1
1
1
1
1
2.444444
2.191926
0.252519
Sample Efficient Policy Gradient Methods with Recursive Variance Reduction
Improving the sample efficiency in reinforcement learning has been a long-standing research problem. In this work, we aim to reduce the sample complexity of existing policy gradient methods. We propose a novel policy gradient algorithm called SRVR-PG, which only requires footnote{ notation hides constant factors.} episodes to find an -approximate stationary point of the nonconcave performance function (i.e., such that ). This sample complexity improves the existing result for stochastic variance reduced policy gradient algorithms by a factor of . In addition, we also propose a variant of SRVR-PG with parameter exploration, which explores the initial policy parameter from a prior probability distribution. We conduct numerical experiments on classic control problems in reinforcement learning to validate the performance of our proposed algorithms.
The authors propose a new stochastic reduced variance policy gradient estimator, which combines a baseline GPOMDP estimator with a control variate integrating past gradients by importance re-weighting. The authors establish the sample complexity of gradient descent using the proposed estimator, and further demonstrate its effectiveness through some simple empirical results. I believe this paper is a good contribution for ICLR. The result is relevant and interesting, and extends recent ideas around reduced-variance policy gradient estimators. The paper is overall easy to read, and presents its ideas clearly. Some detailed comments: - The wording of theorem 4.5 and corollary 4.7 could be somewhat clarified. In particular, I did not see Phi defined in the main text, and given its definition in the appendix, I believe theorem 4.5 could be stated simply in terms of J, avoiding any additional notation. Similarly, corollary 4.7 could be stated somewhat more clearly, and in particular, the choice of S should be made explicit. In the appendix, I could not find a definition for phi. - The empirical results presented are interesting, although I wish they were more comprehensive. In particular, it would be valuable to more exhaustively evaluate the impact of the hyper-parameters N, B and m. The authors should also clarify how the current values were chosen. Given that the theoretical results also apply to projected gradient descent, it would be interesting to see empirical results in that case.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSample Efficient Policy Gradient Methods with Recursive Variance Reduction\nImproving the sample efficiency in reinforcement learning has been a long-standing research problem. In this work, we aim to reduce the sample complexity of existing policy gradient methods. We propose a novel policy gradient algorithm called SRVR-PG, which only requires footnote{ notation hides constant factors.} episodes to find an -approximate stationary point of the nonconcave performance function (i.e., such that ). This sample complexity improves the existing result for stochastic variance reduced policy gradient algorithms by a factor of . In addition, we also propose a variant of SRVR-PG with parameter exploration, which explores the initial policy parameter from a prior probability distribution. We conduct numerical experiments on classic control problems in reinforcement learning to validate the performance of our proposed algorithms.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors propose a new stochastic reduced variance policy gradient estimator, which combines a baseline GPOMDP estimator with a control variate integrating past gradients by importance re-weighting. The authors establish the sample complexity of gradient descent using the proposed estimator, and further demonstrate its effectiveness through some simple empirical results. I believe this paper is a good contribution for ICLR. The result is relevant and interesting, and extends recent ideas around reduced-variance policy gradient estimators. The paper is overall easy to read, and presents its ideas clearly. Some detailed comments: - The wording of theorem 4.5 and corollary 4.7 could be somewhat clarified. In particular, I did not see Phi defined in the main text, and given its definition in the appendix, I believe theorem 4.5 could be stated simply in terms of J, avoiding any additional notation. Similarly, corollary 4.7 could be stated somewhat more clearly, and in particular, the choice of S should be made explicit. 
In the appendix, I could not find a definition for phi. - The empirical results presented are interesting, although I wish they were more comprehensive. In particular, it would be valuable to more exhaustively evaluate the impact of the hyper-parameters N, B and m. The authors should also clarify how the current values were chosen. Given that the theoretical results also apply to projected gradient descent, it would be interesting to see empirical results in that case.", "role": "assistant" } ]
[ "The authors propose a new stochastic reduced variance policy gradient estimator, which combines a baseline GPOMDP estimator with a control variate integrating past gradients by importance re-weighting.", "The authors establish the sample complexity of gradient descent using the proposed estimator, and further demonstrate its effectiveness through some simple empirical results.", "I believe this paper is a good contribution for ICLR.", "The result is relevant and interesting, and extends recent ideas around reduced-variance policy gradient estimators.", "The paper is overall easy to read, and presents its ideas clearly.", "Some detailed comments: - The wording of theorem 4.5 and corollary 4.7 could be somewhat clarified.", "In particular, I did not see Phi defined in the main text, and given its definition in the appendix, I believe theorem 4.5 could be stated simply in terms of J, avoiding any additional notation.", "Similarly, corollary 4.7 could be stated somewhat more clearly, and in particular, the choice of S should be made explicit.", "In the appendix, I could not find a definition for phi.", "- The empirical results presented are interesting, although I wish they were more comprehensive.", "In particular, it would be valuable to more exhaustively evaluate the impact of the hyper-parameters N, B and m. The authors should also clarify how the current values were chosen.", "Given that the theoretical results also apply to projected gradient descent, it would be interesting to see empirical results in that case." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 4, "materials_and_methods": 5, "praise": 4, "presentation_and_reporting": 6, "results_and_discussion": 6, "suggestion_and_solution": 6, "total": 12 }
0
0
0.333333
0.416667
0.333333
0.5
0.5
0.5
12
0
0
1
1
1
1
1
1
2.583333
2.567551
0.015782
Deep Symbolic Superoptimization Without Human Knowledge
Deep symbolic superoptimization refers to the task of applying deep learning methods to simplify symbolic expressions. Existing approaches either perform supervised training on human-constructed datasets that defines equivalent expression pairs, or apply reinforcement learning with human-defined equivalent trans-formation actions. In short, almost all existing methods rely on human knowledge to define equivalence, which suffers from large labeling cost and learning bias, because it is almost impossible to define and comprehensive equivalent set. We thus propose HISS, a reinforcement learning framework for symbolic super-optimization that keeps human outside the loop. HISS introduces a tree-LSTM encoder-decoder network with attention to ensure tractable learning. Our experiments show that HISS can discover more simplification rules than existing human-dependent methods, and can learn meaningful embeddings for symbolic expressions, which are indicative of equivalence.
This paper provides a novel approach to the problem of simplifying symbolic expressions without relying on human input and information. To achieve this, they apply a REINFORCE framework with a reward function involving the number of symbols in the final output together with a probabilistic testing scheme to determine equivalence. The model itself consists of a tree-LSTM-based encoder-decoder module with attention, together with a sub tree selector. Their main contribution is that this framework works entirely independent of human-labelled data. In their experiments, they show that this deep learning approach outperforms their provided human-independent baselines, while sharing similar performance with human-dependent ones. Overall, the work in this paper has the potential to be a contribution to ICLR but lacks completeness and clarity. A number of experimental details are missing, making it difficult to understand the setup under which results were obtained. Moreover, the paper does not seem to have been revised with many grammatical issues that make it hard to read. The following are major issues within the paper and should be addressed: • The paper does not mention the amount of compute given to their model, nor the amount of time taken to train. As the REINFORCE framework is generally quite computation-heavy, these are significant details. Without assessing the amount of compute and time allotted for training HISS, the comparisons to previous baselines lose a fair amount of meaning. The paper alludes to processes being ‘extremely time consuming’, but then does not provide any numbers. • They do not mention the data used to train the model weights. In the comparisons sections, some details on datasets are given, but these seem to refer to data for inference. • There are many grammatical errors that likely could have been detected with revision. A handful of such errors would not affect the score, but they are so numerous as to make the paper much more difficult to understand. Additionally, these are comments that slightly detract from the quality of the paper: • It’s unclear what to glean from Section 5.1, as the dataset and baselines seem to be fairly trivial. If their claim is to have the first nontrivial human-independent approach to simplifying symbolic expressions, there is no need to compare to baselines that can only handle small expressions. • Sections 5.3 and 5.4 contribute little to the paper. For 5.3, the model was trained to embed equivalent expressions close together, using L2 regularization. It is therefore unsurprising that equivalent expressions are then closer together than non-equivalent ones. The paper also does not provide a comparison to the method without this regularization, and so it’s unclear if this embedding similarity helps in any way. For 5.4, the section is extremely short and contains very little content. Moreover, just as many of the variables in their provided examples oppose their conjectures as support them. • The most interesting figure provided is the rewrite rules discovered by the model. It would be even better if an additional column containing the rules discovered by Halide (the main baseline) were provided. Overall, in my understanding, the primary point in favor of the paper is in being the first nontrivial human-independent approach to simplifying symbolic expression. That said, this is not my area of expertise, so I cannot judge novelty or importance as well as other reviewers.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDeep Symbolic Superoptimization Without Human Knowledge\nDeep symbolic superoptimization refers to the task of applying deep learning methods to simplify symbolic expressions. Existing approaches either perform supervised training on human-constructed datasets that defines equivalent expression pairs, or apply reinforcement learning with human-defined equivalent trans-formation actions. In short, almost all existing methods rely on human knowledge to define equivalence, which suffers from large labeling cost and learning bias, because it is almost impossible to define and comprehensive equivalent set. We thus propose HISS, a reinforcement learning framework for symbolic super-optimization that keeps human outside the loop. HISS introduces a tree-LSTM encoder-decoder network with attention to ensure tractable learning. Our experiments show that HISS can discover more simplification rules than existing human-dependent methods, and can learn meaningful embeddings for symbolic expressions, which are indicative of equivalence.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper provides a novel approach to the problem of simplifying symbolic expressions without relying on human input and information. To achieve this, they apply a REINFORCE framework with a reward function involving the number of symbols in the final output together with a probabilistic testing scheme to determine equivalence. The model itself consists of a tree-LSTM-based encoder-decoder module with attention, together with a sub tree selector. Their main contribution is that this framework works entirely independent of human-labelled data. In their experiments, they show that this deep learning approach outperforms their provided human-independent baselines, while sharing similar performance with human-dependent ones. Overall, the work in this paper has the potential to be a contribution to ICLR but lacks completeness and clarity. 
A number of experimental details are missing, making it difficult to understand the setup under which results were obtained. Moreover, the paper does not seem to have been revised with many grammatical issues that make it hard to read. The following are major issues within the paper and should be addressed: • The paper does not mention the amount of compute given to their model, nor the amount of time taken to train. As the REINFORCE framework is generally quite computation-heavy, these are significant details. Without assessing the amount of compute and time allotted for training HISS, the comparisons to previous baselines lose a fair amount of meaning. The paper alludes to processes being ‘extremely time consuming’, but then does not provide any numbers. • They do not mention the data used to train the model weights. In the comparisons sections, some details on datasets are given, but these seem to refer to data for inference. • There are many grammatical errors that likely could have been detected with revision. A handful of such errors would not affect the score, but they are so numerous as to make the paper much more difficult to understand. Additionally, these are comments that slightly detract from the quality of the paper: • It’s unclear what to glean from Section 5.1, as the dataset and baselines seem to be fairly trivial. If their claim is to have the first nontrivial human-independent approach to simplifying symbolic expressions, there is no need to compare to baselines that can only handle small expressions. • Sections 5.3 and 5.4 contribute little to the paper. For 5.3, the model was trained to embed equivalent expressions close together, using L2 regularization. It is therefore unsurprising that equivalent expressions are then closer together than non-equivalent ones. The paper also does not provide a comparison to the method without this regularization, and so it’s unclear if this embedding similarity helps in any way. For 5.4, the section is extremely short and contains very little content. Moreover, just as many of the variables in their provided examples oppose their conjectures as support them. • The most interesting figure provided is the rewrite rules discovered by the model. It would be even better if an additional column containing the rules discovered by Halide (the main baseline) were provided. Overall, in my understanding, the primary point in favor of the paper is in being the first nontrivial human-independent approach to simplifying symbolic expression. That said, this is not my area of expertise, so I cannot judge novelty or importance as well as other reviewers.", "role": "assistant" } ]
[ "This paper provides a novel approach to the problem of simplifying symbolic expressions without relying on human input and information.", "To achieve this, they apply a REINFORCE framework with a reward function involving the number of symbols in the final output together with a probabilistic testing scheme to determine equivalence.", "The model itself consists of a tree-LSTM-based encoder-decoder module with attention, together with a sub tree selector.", "Their main contribution is that this framework works entirely independent of human-labelled data.", "In their experiments, they show that this deep learning approach outperforms their provided human-independent baselines, while sharing similar performance with human-dependent ones.", "Overall, the work in this paper has the potential to be a contribution to ICLR but lacks completeness and clarity.", "A number of experimental details are missing, making it difficult to understand the setup under which results were obtained.", "Moreover, the paper does not seem to have been revised with many grammatical issues that make it hard to read.", "The following are major issues within the paper and should be addressed: • The paper does not mention the amount of compute given to their model, nor the amount of time taken to train.", "As the REINFORCE framework is generally quite computation-heavy, these are significant details.", "Without assessing the amount of compute and time allotted for training HISS, the comparisons to previous baselines lose a fair amount of meaning.", "The paper alludes to processes being ‘extremely time consuming’, but then does not provide any numbers.", "• They do not mention the data used to train the model weights.", "In the comparisons sections, some details on datasets are given, but these seem to refer to data for inference.", "• There are many grammatical errors that likely could have been detected with revision.", "A handful of such errors would not affect the score, but they are so numerous as to make the paper much more difficult to understand.", "Additionally, these are comments that slightly detract from the quality of the paper: • It’s unclear what to glean from Section 5.1, as the dataset and baselines seem to be fairly trivial.", "If their claim is to have the first nontrivial human-independent approach to simplifying symbolic expressions, there is no need to compare to baselines that can only handle small expressions.", "• Sections 5.3 and 5.4 contribute little to the paper.", "For 5.3, the model was trained to embed equivalent expressions close together, using L2 regularization.", "It is therefore unsurprising that equivalent expressions are then closer together than non-equivalent ones.", "The paper also does not provide a comparison to the method without this regularization, and so it’s unclear if this embedding similarity helps in any way.", "For 5.4, the section is extremely short and contains very little content.", "Moreover, just as many of the variables in their provided examples oppose their conjectures as support them.", "• The most interesting figure provided is the rewrite rules discovered by the model.", "It would be even better if an additional column containing the rules discovered by Halide (the main baseline) were provided.", "Overall, in my understanding, the primary point in favor of the paper is in being the first nontrivial human-independent approach to simplifying symbolic expression.", "That said, this is not my area of expertise, so I cannot judge novelty or importance as well as 
other reviewers." ]
{ "criticism": 17, "example": 0, "importance_and_relevance": 6, "materials_and_methods": 18, "praise": 2, "presentation_and_reporting": 12, "results_and_discussion": 3, "suggestion_and_solution": 3, "total": 28 }
0.607143
0
0.214286
0.642857
0.071429
0.428571
0.107143
0.107143
28
1
0
1
1
1
1
1
1
2.178571
-1.372475
3.551047
Explain Your Move: Understanding Agent Actions Using Focused Feature Saliency
As deep reinforcement learning (RL) is applied to more tasks, there is a need to visualize and understand the behavior of learned agents. Saliency maps explain agent behavior by highlighting the features of the input state that are most relevant for the agent in taking an action. Existing perturbation-based approaches to compute saliency often highlight regions of the input that are not relevant to the action taken by the agent. Our approach generates more focused saliency maps by balancing two aspects (specificity and relevance) that capture different desiderata of saliency. The first captures the impact of perturbation on the relative expected reward of the action to be explained. The second downweights irrelevant features that alter the relative expected rewards of actions other than the action to be explained. We compare our approach with existing approaches on agents trained to play board games (Chess and Go) and Atari games (Breakout, Pong and Space Invaders). We show through illustrative examples (Chess, Atari, Go), human studies (Chess), and automated evaluation methods (Chess) that our approach generates saliency maps that are more interpretable for humans than existing approaches.
Interpreting the policies of RL agents is an important consideration if we would like to actually deploy them and understand their behaviours. Prior works have applied saliency methods to DRL agents, but the authors note that there are two properties - specificity and relevance - that these methods do not take into account, and therefore result in misleading saliency maps. The authors define (and provide examples of) these properties, propose simple ways to calculate these (and like prior methods, relying on perturbations and therefore applicable to almost black-box agents), and combine them neatly using the harmonic mean to provide a new way to calculate saliency maps for agents with discrete action spaces. While the improvements on Atari are hard to quantify, the results on chess and go are more interpretable and hence more convincing. The study using saliency maps to aid human chess players is rather neat, and again adds evidence towards the usefulness of this technique. Finally, the chess saliency dataset is an exciting contribution that can actually be used to quantify saliency methods for DRL agents. The proposed method is relatively simple, but is well-motivated and demonstrably (quantitatively, which is great work for general interpretability methods) better than prior methods, and in addition the authors introduce a new dataset + quantitative measure for saliency methods, so I would give this paper an accept. Although the authors motivate their choice of perturbation-based saliency methods as opposed to gradient-based methods, they should expand their review of the latter. As the technique the authors introduced is very general, it would be useful to know how it compares to the current state of research in terms of identifying properties that attribution methods should meet - a good example of this is integrated gradients (Sundararajan et al., 2017), which similarly identify *sensitivity* and *implementation invariance* as *axioms* that their method satisfies.
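To make the perturbation-based idea described above more concrete, here is a minimal sketch of combining two per-feature scores with a harmonic mean. The paper's actual definitions of specificity and relevance differ; the quantities below (`specificity`, `relevance`) are stand-ins invented for illustration, as are the toy Q-values.

```python
import numpy as np

def saliency_map(q_values, q_values_perturbed, action):
    """Toy illustration of combining two saliency terms with a harmonic mean.

    q_values:            Q(s, .) for the original state, shape (A,)
    q_values_perturbed:  Q(s_f, .) after perturbing feature f, shape (F, A)
    action:              index of the action being explained
    """
    q = np.asarray(q_values, dtype=float)
    qp = np.asarray(q_values_perturbed, dtype=float)

    # Stand-in "specificity": drop in the chosen action's advantage after perturbation.
    adv = q[action] - q.mean()
    adv_p = qp[:, action] - qp.mean(axis=1)
    specificity = np.clip(adv - adv_p, 0.0, None)

    # Stand-in "relevance": downweight features that also shift the other actions.
    others = np.delete(np.arange(q.shape[0]), action)
    shift_others = np.abs(qp[:, others] - q[others]).mean(axis=1)
    relevance = 1.0 / (1.0 + shift_others)

    # Harmonic mean keeps only features scoring high on both terms.
    return 2 * specificity * relevance / (specificity + relevance + 1e-8)

# Tiny example: 3 actions, 2 perturbed features.
q = [1.0, 0.2, 0.1]
qp = [[0.3, 0.2, 0.1],   # perturbing feature 0 lowers only the chosen action's value
      [0.9, 0.9, 0.8]]   # perturbing feature 1 shifts every action's value
print(saliency_map(q, qp, action=0))
```

In this toy run the first feature scores higher because it changes the chosen action without disturbing the alternatives, which is the intuition behind balancing the two terms.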
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nExplain Your Move: Understanding Agent Actions Using Focused Feature Saliency\nAs deep reinforcement learning (RL) is applied to more tasks, there is a need to visualize and understand the behavior of learned agents. Saliency maps explain agent behavior by highlighting the features of the input state that are most relevant for the agent in taking an action. Existing perturbation-based approaches to compute saliency often highlight regions of the input that are not relevant to the action taken by the agent. Our approach generates more focused saliency maps by balancing two aspects (specificity and relevance) that capture different desiderata of saliency. The first captures the impact of perturbation on the relative expected reward of the action to be explained. The second downweights irrelevant features that alter the relative expected rewards of actions other than the action to be explained. We compare our approach with existing approaches on agents trained to play board games (Chess and Go) and Atari games (Breakout, Pong and Space Invaders). We show through illustrative examples (Chess, Atari, Go), human studies (Chess), and automated evaluation methods (Chess) that our approach generates saliency maps that are more interpretable for humans than existing approaches.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nInterpreting the policies of RL agents is an important consideration if we would like to actually deploy them and understand their behaviours. Prior works have applied saliency methods to DRL agents, but the authors note that there are two properties - specificity and relevance - that these methods do not take into account, and therefore result in misleading saliency maps. 
The authors define (and provide examples) of these properties, propose simple ways to calculate these (and like prior methods, relying on perturbations and therefore applicable to almost black box agents), and combine them neatly using the harmonic mean to provide a new way to calculate saliency maps for agents with discrete action spaces. While the improvements on Atari are hard to quantify, the results on chess and go are more interpretable and hence more convincing. The study using saliency maps to aid human chess players is rather neat, and again adds evidence towards the usefulness of this technique. Finally, the chess saliency dataset is an exciting contribution that can actually be used to quantify saliency methods for DRL agents. The proposed method is relatively simple, but is well-motivated and demonstratively (quantitatively, which is great work for general interpretability methods) better than prior methods, and in addition the authors introduce a new dataset + quantitative measure for saliency methods, so I would give this paper an accept. Although the authors motivate their choice of perturbation-based saliency methods as opposed to gradient-based methods, they should expand their review of the latter. As the technique the authors introduced is very general, it would be useful to know how it compares to the current state of research in terms of identifying properties that attribution methods should meet - a good example of this is integrated gradients (Sundararajan et al., 2017), which similarly identify *sensitivity* and *implementation invariance* as *axioms* that their method satisfies.", "role": "assistant" } ]
[ "Interpreting the policies of RL agents is an important consideration if we would like to actually deploy them and understand their behaviours.", "Prior works have applied saliency methods to DRL agents, but the authors note that there are two properties - specificity and relevance - that these methods do not take into account, and therefore result in misleading saliency maps.", "The authors define (and provide examples) of these properties, propose simple ways to calculate these (and like prior methods, relying on perturbations and therefore applicable to almost black box agents), and combine them neatly using the harmonic mean to provide a new way to calculate saliency maps for agents with discrete action spaces.", "While the improvements on Atari are hard to quantify, the results on chess and go are more interpretable and hence more convincing.", "The study using saliency maps to aid human chess players is rather neat, and again adds evidence towards the usefulness of this technique.", "Finally, the chess saliency dataset is an exciting contribution that can actually be used to quantify saliency methods for DRL agents.", "The proposed method is relatively simple, but is well-motivated and demonstratively (quantitatively, which is great work for general interpretability methods) better than prior methods, and in addition the authors introduce a new dataset + quantitative measure for saliency methods, so I would give this paper an accept.", "Although the authors motivate their choice of perturbation-based saliency methods as opposed to gradient-based methods, they should expand their review of the latter.", "As the technique the authors introduced is very general, it would be useful to know how it compares to the current state of research in terms of identifying properties that attribution methods should meet - a good example of this is integrated gradients (Sundararajan et al., 2017), which similarly identify *sensitivity* and *implementation invariance* as *axioms* that their method satisfies." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 7, "materials_and_methods": 8, "praise": 5, "presentation_and_reporting": 1, "results_and_discussion": 4, "suggestion_and_solution": 5, "total": 9 }
0.222222
0
0.777778
0.888889
0.555556
0.111111
0.444444
0.555556
9
1
0
1
1
1
1
1
1
3.555556
3.303037
0.252519
Universal Approximation with Certified Networks
Training neural networks to be certifiably robust is critical to ensure their safety against adversarial attacks. However, while promising, state-of-the-art results with certified training are far from satisfactory. Currently, it is very difficult to train a neural network that is both accurate and certifiably robust. In this work we take a step towards addressing this challenge. We prove that for every continuous function, there exists a network such that: (i) the network approximates the function arbitrarily closely, and (ii) simple interval bound propagation of an input region through the network yields a result that is arbitrarily close to the optimal output of the function on that region. Our result can be seen as a Universal Approximation Theorem for interval-certified ReLU networks. To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks.
This paper proves the universal approximation property of interval-certified ReLU networks. Such a network achieves the best certified accuracy against l_infty adversaries at large epsilons on MNIST and CIFAR10, but the certified accuracy is still far from satisfactory. With the results from this paper, we can at least be sure that interval-certified ReLU networks are able to fit the robust classifier with arbitrarily small error if it exists. I did not check the details of the proofs, but I think such a theorem is a great reference for future work. However, from this paper we cannot conclude whether a moderately sized network exists that fits the robust classifier well. From the construction of local bumps (nmin), the number of neurons needed to represent a bump function grows exponentially with the input dimension, which makes the required network vacuously large when the inputs are images. We are not getting any closer to improving the certified robustness with this paper.
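For readers unfamiliar with the certification procedure discussed above, the sketch below shows plain interval bound propagation through a single ReLU layer. This is just the standard interval-arithmetic step that "interval-certified" refers to; it is not the paper's construction of the approximating network, and the layer sizes and numbers are made up.

```python
import numpy as np

def ibp_linear_relu(lower, upper, W, b):
    """Propagate an axis-aligned box [lower, upper] through y = ReLU(Wx + b)
    using interval arithmetic. Returns element-wise output bounds."""
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    out_center = W @ center + b
    out_radius = np.abs(W) @ radius          # worst-case growth of the box
    pre_lower = out_center - out_radius
    pre_upper = out_center + out_radius
    return np.maximum(pre_lower, 0.0), np.maximum(pre_upper, 0.0)

# Example: a 2-input, 2-unit layer and the input box [-0.1, 0.1]^2 around x = 0.
W = np.array([[1.0, -2.0], [0.5, 0.5]])
b = np.array([0.1, -0.3])
l, u = ibp_linear_relu(np.array([-0.1, -0.1]), np.array([0.1, 0.1]), W, b)
print(l, u)
```

The paper's result concerns whether a network exists whose propagated bounds stay tight after many such layers; the review's concern is how large that network must be.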
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nUniversal Approximation with Certified Networks\nTraining neural networks to be certifiably robust is critical to ensure their safety against adversarial attacks. However, while promising, state-of-the-art results with certified training are far from satisfactory. Currently, it is very difficult to train a neural network that is both accurate and certifiably robust. In this work we take a step towards addressing this challenge. We prove that for every continuous function , there exists a network such that: (i) approximates arbitrarily close, and (ii) simple interval bound propagation of a region through yields a result that is arbitrarily close to the optimal output of on . Our result can be seen as a Universal Approximation Theorem for interval-certified ReLU networks. To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proves the universal approximation property of interval-certified ReLU networks. Such a network achieves the best certified accuracy against l_infty adversaries at large epsilons on MNIST and CIFAR10, but the certified accuracy is still far from satisfaction. With the results from this paper, we can at least make sure the interval-certified ReLU networks is able to fit the robust classifier with arbitrarily small error if it exists. I did not check the details of the proofs, but I think such a theorem is a great reference for future work. However, we cannot conclude whether a network with a moderate size exists to fit the robust classifier well from this paper. From the construction of local bumps (nmin), the number of neurons needed to represent a bump function grows exponentially with the input dimension, which makes the size of the network vacuously large when the input is images. We are not getting any closer to improving the certified robustness with this paper.", "role": "assistant" } ]
[ "This paper proves the universal approximation property of interval-certified ReLU networks.", "Such a network achieves the best certified accuracy against l_infty adversaries at large epsilons on MNIST and CIFAR10, but the certified accuracy is still far from satisfaction.", "With the results from this paper, we can at least make sure the interval-certified ReLU networks is able to fit the robust classifier with arbitrarily small error if it exists.", "I did not check the details of the proofs, but I think such a theorem is a great reference for future work.", "However, we cannot conclude whether a network with a moderate size exists to fit the robust classifier well from this paper.", "From the construction of local bumps (nmin), the number of neurons needed to represent a bump function grows exponentially with the input dimension, which makes the size of the network vacuously large when the input is images.", "We are not getting any closer to improving the certified robustness with this paper." ]
{ "criticism": 4, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 6, "praise": 1, "presentation_and_reporting": 2, "results_and_discussion": 1, "suggestion_and_solution": 1, "total": 7 }
0.571429
0
0.142857
0.857143
0.142857
0.285714
0.142857
0.142857
7
1
0
1
1
1
1
1
1
2.285714
1.717547
0.568167
Measuring and Improving the Use of Graph Information in Graph Neural Networks
Graph neural networks (GNNs) have been widely used for representation learning on graph data. However, there is limited understanding of how much performance GNNs actually gain from graph data. This paper introduces a context-surrounding GNN framework and proposes two smoothness metrics to measure the quantity and quality of information obtained from graph data. A new, improved GNN model, called CS-GNN, is then devised to improve the use of graph information based on the smoothness values of a graph. CS-GNN is shown to achieve better performance than existing methods on different types of real graphs.
This paper proposes two smoothness metrics to measure the quantity and quality of the graph information that GNNs employ in the model. With the smoothness measures, this paper proposes a new GNN model to improve the use of graph information. Overall, the paper is well-organized and clearly written. The main concern is the novelty, since the proposed method is pretty close to the graph attention network (GAT), except for using the two smoothness metrics and a slightly different way to compute the attention coefficients. Experimental results show marginal improvement over existing GNN methods on the node classification task. Given these aspects, it is not that convincing that the introduced smoothness metrics are necessary. I would like to recommend a weak reject for this paper. Suggestions to improve the paper: 1) It would be better to provide more convincing evidence and motivation for the smoothness metrics, either in theory or through empirical analysis. 2) When describing the proposed method, organize the key differences in a clearer way. For example, add the proposed method directly in Table 1 and summarize the key differences in bullet points. Currently, this important part is deferred to Section 3.3, which may make the paper confusing for readers. 3) In Section 3.2, this paper claims that the proposed method can easily include side information on graphs to improve performance, by using the topology feature in the last fully connected layer for class prediction. However, this technique can also be used in existing GNN models, and it is not clear why this is described as something unique to the proposed method. Also, there are no corresponding ablation experiments comparing the performance of the proposed method with and without using local topology features.
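Since the paper's two smoothness metrics are not defined in the abstract, the sketch below only illustrates the general flavor of a feature-smoothness score on a graph (average disagreement between a node's features and the mean of its neighbors' features). The function `feature_smoothness` and its normalization are assumptions made for illustration, not the CS-GNN metrics.

```python
import numpy as np

def feature_smoothness(adj, X):
    """Generic graph feature-smoothness score: mean squared disagreement
    between each node's features and its neighbors' mean features,
    normalized by the node's feature norm. Low values mean neighbors look alike."""
    adj = np.asarray(adj, dtype=float)
    X = np.asarray(X, dtype=float)
    deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)
    neighbor_mean = (adj @ X) / deg
    per_node = np.sum((X - neighbor_mean) ** 2, axis=1) / (np.sum(X ** 2, axis=1) + 1e-8)
    return per_node.mean()

# 4-node path graph with features that vary slowly along the path.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.8, 0.2], [0.7, 0.3]])
print(feature_smoothness(A, X))
```

A score like this can be computed before training to gauge how informative the graph structure is relative to the raw features, which is the role the review attributes to the proposed metrics.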
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMeasuring and Improving the Use of Graph Information in Graph Neural Networks\nGraph neural networks (GNNs) have been widely used for representation learning on graph data. However, there is limited understanding on how much performance GNNs actually gain from graph data. This paper introduces a context-surrounding GNN framework and proposes two smoothness metrics to measure the quantity and quality of information obtained from graph data. A new, improved GNN model, called CS-GNN, is then devised to improve the use of graph information based on the smoothness values of a graph. CS-GNN is shown to achieve better performance than existing methods in different types of real graphs.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes two smoothness metrics to measure the quantity and quality of the graph information that GNNs employ in the model. With the smoothness measures, this paper proposes a new GNN model to improve the use of graph information. Overall, the paper is well-organized and clearly written. The main concern is the novelty, since the proposed method is pretty close to the graph attention network (GAT), except using the two smoothness metrics and a slightly different way to compute the attention coefficients. Experimental results show marginal improvement over existing GNN methods on the node classification task. Given these aspects, it is not that convincing that the introduced smoothness metrics are necessary. I would like to recommend a weak reject for this paper. Suggestions to improve the paper: 1) It would be better to provide more convincing evidence and motivation for the smoothness metrics, either in theory or empirical analysis. 2) When describing the proposed method, organize the key differences in a more clear way. For example, add the proposed method directly in Table 1 and summarize the key differences in bullet points. Currently, this important part is deferred to Section 3.3, which may cause confusion for the readers to understand the paper. 
3) In Section 3.2, this paper claims that the proposed method can easily include side information on graphs to improve performance, by using the topology feature in the last fully connected layer for class prediction. However, this technique can also be used in existing GNN models, and it is not clear why this is described as something unique for the proposed method. Also, there is no corresponding ablation experiments to compare the performance of the proposed method with and without using local topology features.", "role": "assistant" } ]
[ "This paper proposes two smoothness metrics to measure the quantity and quality of the graph information that GNNs employ in the model.", "With the smoothness measures, this paper proposes a new GNN model to improve the use of graph information.", "Overall, the paper is well-organized and clearly written.", "The main concern is the novelty, since the proposed method is pretty close to the graph attention network (GAT), except using the two smoothness metrics and a slightly different way to compute the attention coefficients.", "Experimental results show marginal improvement over existing GNN methods on the node classification task.", "Given these aspects, it is not that convincing that the introduced smoothness metrics are necessary.", "I would like to recommend a weak reject for this paper.", "Suggestions to improve the paper: 1) It would be better to provide more convincing evidence and motivation for the smoothness metrics, either in theory or empirical analysis.", "2) When describing the proposed method, organize the key differences in a more clear way.", "For example, add the proposed method directly in Table 1 and summarize the key differences in bullet points.", "Currently, this important part is deferred to Section 3.3, which may cause confusion for the readers to understand the paper.", "3) In Section 3.2, this paper claims that the proposed method can easily include side information on graphs to improve performance, by using the topology feature in the last fully connected layer for class prediction.", "However, this technique can also be used in existing GNN models, and it is not clear why this is described as something unique for the proposed method.", "Also, there is no corresponding ablation experiments to compare the performance of the proposed method with and without using local topology features." ]
{ "criticism": 4, "example": 1, "importance_and_relevance": 3, "materials_and_methods": 11, "praise": 1, "presentation_and_reporting": 5, "results_and_discussion": 2, "suggestion_and_solution": 5, "total": 14 }
0.285714
0.071429
0.214286
0.785714
0.071429
0.357143
0.142857
0.357143
14
1
1
1
1
1
1
1
1
2.285714
2.269932
0.015782
Learning transport cost from subset correspondence
Learning to align multiple datasets is an important problem with many applications, and it is especially useful when we need to integrate multiple experiments or correct for confounding. Optimal transport (OT) is a principled approach to align datasets, but a key challenge in applying OT is that we need to specify a cost function that accurately captures how the two datasets are related. Reliable cost functions are typically not available and practitioners often resort to using hand-crafted or Euclidean cost even if it may not be appropriate. In this work, we investigate how to learn the cost function using a small amount of side information which is often available. The side information we consider captures subset correspondence---i.e. certain subsets of points in the two data sets are known to be related. For example, we may have some images labeled as cars in both datasets; or we may have a common annotated cell type in single-cell data from two batches. We develop an end-to-end optimizer (OT-SI) that differentiates through the Sinkhorn algorithm and effectively learns the suitable cost function from side information. On systematic experiments in images, marriage-matching and single-cell RNA-seq, our method substantially outperforms state-of-the-art benchmarks.
# Summary This paper proposes a new way to learn the optimal transport (OT) cost between two datasets that can utilize subset correspondence information. The main idea is to use a neural network to define a transport cost between two examples instead of using the popular Euclidean distance, and to have an L2 regularizer that encourages the resulting transport plan to be small if two examples belong to the same subset. The results on synthetic datasets (2D, MNIST image) and real-world datasets (RNA sequence) show that the learned transport cost outperforms the Euclidean distance and Procrustes-based OT. # Originality - This paper considers a new problem which is to learn OT given *subset correspondence* information. The prior work does not have a mechanism to utilize such information or requires more side information such as pair-wise matching. # Quality - The idea of parameterizing the transport cost using a neural network is novel to my knowledge. The idea of regularizing the transport cost based on the subset correspondence information sounds sensible. - The empirical results are quite comprehensive and convincing in that they range from illustrative examples to real-world datasets (RNAs) and include many relevant baselines. # Clarity - The paper is well-organized and well-written. - Since this paper considers a new problem (learning OT with subset correspondence data), it would be great to motivate and explain the problem better in the introduction. RNA-seq examples in the introduction were not easy to parse as I do not have any background in biology. It would be better if the paper presented more illustrative/intuitive examples, ideally with a figure that describes the problem. - *an principled* -> *a principled* # Significance - This paper seems to present a neat and sensible idea for learning OT with neural networks given subset correspondence information. The problem and method are new, and the empirical results look comprehensive. One minor issue is that it is unclear how significant this new problem setup is (learning OT cost with subset correspondence). Although the paper presents real-world examples (RNAs), the paper could be stronger if it presented more real-world examples/results and emphasized that this is an important problem to solve.
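As a reference point for the OT machinery discussed above, here is a minimal entropy-regularized Sinkhorn sketch. OT-SI's actual contribution — a neural-network cost and a loss derived from subset correspondence, trained by differentiating through these iterations — is omitted; the function name, the regularization strength, and the iteration count are assumptions made for illustration.

```python
import numpy as np

def sinkhorn_plan(cost, a, b, eps=0.1, n_iters=200):
    """Entropy-regularized OT plan via Sinkhorn iterations. Every step is
    differentiable, which is what allows backpropagation through the plan
    into a parameterized cost; here the cost matrix is just a fixed input."""
    K = np.exp(-cost / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]       # plan with row sums a, column sums b

# Toy example: two 3-point clouds on the line with squared-distance cost.
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.1, 1.1, 2.1])
C = (x[:, None] - y[None, :]) ** 2
P = sinkhorn_plan(C, np.full(3, 1 / 3), np.full(3, 1 / 3))
print(P.round(3))
```

Replacing the fixed squared-distance cost with a learned network, and penalizing plan mass that contradicts the known subset labels, gives the general shape of the method the review describes.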
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning transport cost from subset correspondence\nLearning to align multiple datasets is an important problem with many applications, and it is especially useful when we need to integrate multiple experiments or correct for confounding. Optimal transport (OT) is a principled approach to align datasets, but a key challenge in applying OT is that we need to specify a cost function that accurately captures how the two datasets are related. Reliable cost functions are typically not available and practitioners often resort to using hand-crafted or Euclidean cost even if it may not be appropriate. In this work, we investigate how to learn the cost function using a small amount of side information which is often available. The side information we consider captures subset correspondence---i.e. certain subsets of points in the two data sets are known to be related. For example, we may have some images labeled as cars in both datasets; or we may have a common annotated cell type in single-cell data from two batches. We develop an end-to-end optimizer (OT-SI) that differentiates through the Sinkhorn algorithm and effectively learns the suitable cost function from side information. On systematic experiments in images, marriage-matching and single-cell RNA-seq, our method substantially outperform state-of-the-art benchmarks.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n# Summary This paper proposes a new way to learn the optimal transport (OT) cost between two datasets that can utilize subset correspondence information. The main idea is to use a neural network to define a transport cost between two examples instead of using the popular Euclidean distance and have a L2 regularizer that encourages the resulting transport plan to be small if two examples belong to the same subset. The results on synthetic datasets (2D, MNIST image) and real-world datasets (RNA sequence) show that the learned transport cost outperforms the Euclidean distance and Procrustes-based OT. 
# Originality - This paper considers a new problem which is to learn OT given *subset correspondence* information. The prior work does not have a mechanism to utilize such information or requires more side information such as pair-wise matching. # Quality - The idea of parameterizing the transport cost using a neural network is novel to my knowledge. The idea of regularizing the transport cost based on the subset correspondence information sounds sensible. - The empirical result is quite comprehensive and convincing in that it includes from illustrative examples to real-world datasets (RNAs) and includes many relevant baselines. # Clarity - The paper is well-organized and well-written. - Since this paper considers a new problem (learning OT with subset correspondence data), it would be great to motivate and explain the problem better in the introduction. RNA-seq examples in the introduction were not easy to parse as I do not have any background on biology. It would be better if the paper presented more illustrative/intuitive examples ideally with a figure that describes the problem. - *an principled* -> *a principled* # Significance - This paper seems to present a neat and sensible idea for learning OT with neural networks given subset correspondence information. The problem and method are new, and the empirical results look comprehensive. One minor issue is that it is unclear how significant this new problem setup is (learning OT cost with subset correspondence). Although the paper presents real-world examples (RNAs), the paper could be stronger if it presented more real-world examples/results and emphasized that this is an important problem to solve.", "role": "assistant" } ]
[ "# Summary This paper proposes a new way to learn the optimal transport (OT) cost between two datasets that can utilize subset correspondence information.", "The main idea is to use a neural network to define a transport cost between two examples instead of using the popular Euclidean distance and have a L2 regularizer that encourages the resulting transport plan to be small if two examples belong to the same subset.", "The results on synthetic datasets (2D, MNIST image) and real-world datasets (RNA sequence) show that the learned transport cost outperforms the Euclidean distance and Procrustes-based OT.", "# Originality - This paper considers a new problem which is to learn OT given *subset correspondence* information.", "The prior work does not have a mechanism to utilize such information or requires more side information such as pair-wise matching.", "# Quality - The idea of parameterizing the transport cost using a neural network is novel to my knowledge.", "The idea of regularizing the transport cost based on the subset correspondence information sounds sensible.", "- The empirical result is quite comprehensive and convincing in that it includes from illustrative examples to real-world datasets (RNAs) and includes many relevant baselines.", "# Clarity - The paper is well-organized and well-written.", "- Since this paper considers a new problem (learning OT with subset correspondence data), it would be great to motivate and explain the problem better in the introduction.", "RNA-seq examples in the introduction were not easy to parse as I do not have any background on biology.", "It would be better if the paper presented more illustrative/intuitive examples ideally with a figure that describes the problem.", "- *an principled* -> *a principled* # Significance - This paper seems to present a neat and sensible idea for learning OT with neural networks given subset correspondence information.", "The problem and method are new, and the empirical results look comprehensive.", "One minor issue is that it is unclear how significant this new problem setup is (learning OT cost with subset correspondence).", "Although the paper presents real-world examples (RNAs), the paper could be stronger if it presented more real-world examples/results and emphasized that this is an important problem to solve." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 8, "materials_and_methods": 10, "praise": 4, "presentation_and_reporting": 5, "results_and_discussion": 8, "suggestion_and_solution": 4, "total": 16 }
0.125
0
0.5
0.625
0.25
0.3125
0.5
0.25
16
1
0
1
1
1
1
1
1
2.5625
2.420458
0.142042
Fantastic Generalization Measures and Where to Find Them
Generalization of deep networks has been intensely researched in recent years, resulting in a number of theoretical bounds and empirically motivated measures. However, most papers proposing such measures only study a small set of models, leaving open the question of whether these measures are truly useful in practice. We present the first large scale study of generalization bounds and measures in deep networks. We train over two thousand CIFAR-10 networks with systematic changes in important hyper-parameters. We attempt to uncover potential causal relationships between each measure and generalization by using the rank correlation coefficient and its modified forms. We analyze the results and show that some of the studied measures are very promising for further research.
The paper addresses the well-known generalization problem for deep networks: why do these models successfully generalize even though they contain more parameters than the size of the training set? Recent works on this problem have proposed various measures of model complexity (that is, different from the number of parameters) with the claim that they (at least partially) capture the *true* capacity of deep networks in order to answer this question. The authors report the results of a large-scale, systematic study of the empirical effect of these complexity measures on the observed generalization behavior of deep networks. The empirical study of generalization behavior is inherently tricky, and the authors make a non-trivial contribution just by designing an experiment that can convincingly address this question. The authors give careful attention to various potential obstacles related to experimental design and statistical validity. The results of the authors' experiments are highly valuable to the research community. I recommend acceptance. ********************************************************************************* Technical comments for the authors: -Please rewrite or elaborate on the first two paragraphs of section 6 for clarity. -It seems to make little sense to consider norm-based complexity measures that are not margin-normalized (or at least normalized in some other way). Please include a margin-normalized version of each such measure in Table 5; certain key quantities (e.g. *frob-distance*) have no corresponding margin-normalized entry. -The quantity *sum-of-frob/margin* is certainly a quantity of interest and it should be included in Table 2 and possibly merits some discussion. -I disagree somewhat with the discussion of the spectral bound in section 7. In particular, I would not agree that the poor performance of *spec-orig* is surprising, because it contains an astronomically-sized *proof artifact* (as you call it). Namely, in this bound and in similar norm-based bounds, the term which is the product of the matrices in the network is (heuristically) unnecessary; in particular it (heuristically) ought to be able to be replaced with the network's *average-case Lipschitz constant* (this is one of the issues which Arora et al. 2018 attempted to address; unfortunately formalizing an adequate notion of *average-case Lipschitz constant* can be quite cumbersome). This superfluous product of matrix norms is of course highly correlated with model size, so it is not surprising that the quantity would have a sizable negative correlation with generalization. The quantity I would consider to be the *main term* is the quantity *fro/spec*. Furthermore, the fact that this quantity (which appears in both Bartlett et al. and Arora et al.) performs somewhat worse than simply *sum-of-fro* is fairly interesting and possibly merits some discussion.
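To illustrate the kind of rank-correlation analysis the paper performs (and the review discusses), here is a minimal sketch using Kendall's tau. The numbers are fabricated for illustration only, and the paper's modified/conditional correlation variants are not shown.

```python
import numpy as np
from scipy.stats import kendalltau

# Hypothetical per-model records: one complexity measure (e.g. a margin-normalized
# norm) and the measured generalization gap for each trained network.
rng = np.random.default_rng(0)
measure = rng.uniform(0.0, 1.0, size=200)
gen_gap = 0.5 * measure + 0.1 * rng.normal(size=200)   # synthetic data only

tau, p_value = kendalltau(measure, gen_gap)
print(f"Kendall tau = {tau:.3f} (p = {p_value:.1e})")
```

A positive tau indicates that models ranked higher by the measure also tend to have larger generalization gaps; the paper's contribution is running this kind of comparison systematically across thousands of models and many measures.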
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nFantastic Generalization Measures and Where to Find Them\nGeneralization of deep networks has been intensely researched in recent years, resulting in a number of theoretical bounds and empirically motivated measures. However, most papers proposing such measures only study a small set of models, leaving open the question of whether these measures are truly useful in practice. We present the first large scale study of generalization bounds and measures in deep networks. We train over two thousand CIFAR-10 networks with systematic changes in important hyper-parameters. We attempt to uncover potential causal relationships between each measure and generalization, by using rank correlation coefficient and its modified forms. We analyze the results and show that some of the studied measures are very promising for further research.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper addresses the well-known generalization problem for deep networks: why do these models successfully generalize even though they contain more parameters than the size of the training set? Recent works on this problem have proposed various measures of model complexity (that is, different from the number of parameters) with the claim that they (at least partially) capture the *true* capacity of deep networks in order to answer this question. The authors report the results of a large-scale, systematic study of the empirical effect of these complexity measures on the observed generalization behavior of deep networks. The empirical study of generalization behavior is inherently tricky, and the authors make a non-trivial contribution just by designing an experiment that can convincingly address this question. The authors give careful attention to various potential obstacles related to experimental design and statistical validity. The results of the authors* experiments are highly valuable to the research community. I recommend to accept. 
********************************************************************************* Technical comments for the authors: -Please rewrite or elaborate on the first two paragraphs of section 6 for clarity. -It seems to make little sense to consider norm-based complexity measures that are not margin-normalized (or at least normalized in some other way). Please include a margin-normalized version of each such measure in Table 5; certain key quantities (e.g. *frob-distance*) have no corresponding margin-normalized entry. -The quantity *sum-of-frob/margin* is certainly a quantity of interest and it should be included in table 2 and possibly merits some discussion. -I disagree somewhat with the discussion of the spectral bound in section 7. In particular, I would not agree that the poor performance of *spec-orig* is surprising, because it contains an astronomically-sized *proof artifact* (as you call it). Namely, in this bound and in similar norm-based bounds, the term which is the product of the matrices in the network is (heuristically) unnecessary; in particular it (heuristically) ought to be able to be replaced with the network*s *average-case Lipschitz constant* (this is one of the issues which Arora et al. 2018 attempted to address; unfortunately formalizing an adequate notion of *average-case Lipschitz constant* can be quite cumbersome). This superfluous product of matrix norms is of course highly correlated with model size, so it is not surprising that the quantity would have a sizable negative correlation with generalization. The quantity I would consider to be the *main term* is the quantity *fro/spec*. Furthermore, the fact that this quantity (which appears in both Bartlett et al. and Arora et al.) performs somewhat worse than simply *sum-of-fro* is fairly interesting and possibly merits some discussion.", "role": "assistant" } ]
[ "The paper addresses the well-known generalization problem for deep networks: why do these models successfully generalize even though they contain more parameters than the size of the training set?", "Recent works on this problem have proposed various measures of model complexity (that is, different from the number of parameters) with the claim that they (at least partially) capture the *true* capacity of deep networks in order to answer this question.", "The authors report the results of a large-scale, systematic study of the empirical effect of these complexity measures on the observed generalization behavior of deep networks.", "The empirical study of generalization behavior is inherently tricky, and the authors make a non-trivial contribution just by designing an experiment that can convincingly address this question.", "The authors give careful attention to various potential obstacles related to experimental design and statistical validity.", "The results of the authors* experiments are highly valuable to the research community.", "I recommend to accept.", "********************************************************************************* Technical comments for the authors: -Please rewrite or elaborate on the first two paragraphs of section 6 for clarity.", "-It seems to make little sense to consider norm-based complexity measures that are not margin-normalized (or at least normalized in some other way).", "Please include a margin-normalized version of each such measure in Table 5; certain key quantities (e.g.", "*frob-distance*) have no corresponding margin-normalized entry.", "-The quantity *sum-of-frob/margin* is certainly a quantity of interest and it should be included in table 2 and possibly merits some discussion.", "-I disagree somewhat with the discussion of the spectral bound in section 7.", "In particular, I would not agree that the poor performance of *spec-orig* is surprising, because it contains an astronomically-sized *proof artifact* (as you call it).", "Namely, in this bound and in similar norm-based bounds, the term which is the product of the matrices in the network is (heuristically) unnecessary; in particular it (heuristically) ought to be able to be replaced with the network*s *average-case Lipschitz constant* (this is one of the issues which Arora et al.", "2018 attempted to address; unfortunately formalizing an adequate notion of *average-case Lipschitz constant* can be quite cumbersome).", "This superfluous product of matrix norms is of course highly correlated with model size, so it is not surprising that the quantity would have a sizable negative correlation with generalization.", "The quantity I would consider to be the *main term* is the quantity *fro/spec*.", "Furthermore, the fact that this quantity (which appears in both Bartlett et al.", "and Arora et al.)", "performs somewhat worse than simply *sum-of-fro* is fairly interesting and possibly merits some discussion." ]
{ "criticism": 5, "example": 2, "importance_and_relevance": 4, "materials_and_methods": 9, "praise": 4, "presentation_and_reporting": 6, "results_and_discussion": 9, "suggestion_and_solution": 7, "total": 21 }
0.238095
0.095238
0.190476
0.428571
0.190476
0.285714
0.428571
0.333333
21
1
1
1
1
1
1
1
1
2.190476
1.180401
1.010076
Robustness Verification for Transformers
Robustness verification that aims to formally certify the prediction behavior of neural networks has become an important tool for understanding the behavior of a given model and for obtaining safety guarantees. However, previous methods are usually limited to relatively simple neural networks. In this paper, we consider the robustness verification problem for Transformers. Transformers have complex self-attention layers that pose many challenges for verification, including cross-nonlinearity and cross-position dependency, which have not been discussed in previous work. We resolve these challenges and develop the first verification algorithm for Transformers. The certified robustness bounds computed by our method are significantly tighter than those by naive Interval Bound Propagation. These bounds also shed light on interpreting Transformers as they consistently reflect the importance of words in sentiment analysis.
Contributions: This paper develops an algorithm for verifying the robustness of transformers with self-attention layers when the inputs for one input word embedding are perturbed. Unlike previous work, the present work can deal with cross nonlinearity and cross position dependency, and the lower bounds derived in the paper are much tighter than the Interval Bound Propagation (IBP) method which uses backward propagation. The core contribution is expounded by developing bounds for multiplication (xy) and division (x/y) and using this to compute tight bounds on self-attention layer computations. Further, by introducing a forward process then combining it with a backward process, they can substantially reduce computation time as compared to the IBP method (in experiments demonstrated to be over an order of magnitude). Experimental results: Very reasonable and well thought out experiments demonstrate that a) the lower bounds produced by the method are very tight (much better than IBP for instance), and the backward&forward approach is an order of magnitude faster than the fully backward approach though they are both equally tight (the fully forward method is much looser by contrast). The experiments on using the bound to detect the word whose perturbation will cause the biggest impact (i.e. the most important word for explaining the decision) are much less convincing. In particular, in Table 3, the second example shows that the method identifies *a*, *the*, *our*, etc., which at least do not appear to be the most salient to this reviewer. [Question to authors: perhaps I misunderstood something here -- if so please explain and help me understand it better]. Evaluation / Suggestions: Overall I liked the paper quite a bit. I may not have fully understood something about Table 3 and would welcome some explanation about why the results shown there indicate that their method works better in the second example.
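For readers unfamiliar with the interval baseline the review compares against, here is a hedged sketch (my own illustration, not the authors' algorithm): under naive interval arithmetic, bounds on a product or quotient come from the four corner combinations of the input intervals, and the paper's method is about obtaining tighter bounds than this for the same operations inside self-attention.

```python
# Naive interval arithmetic for the two operations the review highlights.
# Function and variable names are illustrative only.
def interval_mul(xl, xu, yl, yu):
    """Bounds on x*y when x is in [xl, xu] and y is in [yl, yu]."""
    corners = [xl * yl, xl * yu, xu * yl, xu * yu]
    return min(corners), max(corners)

def interval_div(xl, xu, yl, yu):
    """Bounds on x/y; valid only when the y-interval excludes zero."""
    assert yl > 0 or yu < 0, "denominator interval must not contain 0"
    corners = [xl / yl, xl / yu, xu / yl, xu / yu]
    return min(corners), max(corners)

print(interval_mul(-1.0, 2.0, 0.5, 3.0))  # (-3.0, 6.0)
print(interval_div(1.0, 4.0, 2.0, 8.0))   # (0.125, 2.0)
```

Intervals like these grow quickly when propagated through many layers, which is consistent with the review's observation that the certified bounds from the proposed method are far tighter than the IBP baseline.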
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRobustness Verification for Transformers\nRobustness verification that aims to formally certify the prediction behavior of neural networks has become an important tool for understanding the behavior of a given model and for obtaining safety guarantees. However, previous methods are usually limited to relatively simple neural networks. In this paper, we consider the robustness verification problem for Transformers. Transformers have complex self-attention layers that pose many challenges for verification, including cross-nonlinearity and cross-position dependency, which have not been discussed in previous work. We resolve these challenges and develop the first verification algorithm for Transformers. The certified robustness bounds computed by our method are significantly tighter than those by naive Interval Bound Propagation. These bounds also shed light on interpreting Transformers as they consistently reflect the importance of words in sentiment analysis.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nContributions: This paper develops an algorithm for verifying the robustness of transformers with self-attention layers when the inputs for one input word embedding are perturbed. Unlike previous work the present work can deal with cross nonlinearity and cross position dependency and the lower bounds derived in the paper are much tighter than the Interval Boundary Propagation (IBP) method which uses backward propagation. The core contribution is expounded by developing bounds for multiplication (xy) and division (x/y) and using this to compute tight bounds on self-attention layer computations. Further by introducing a forward process then combining it with a backward process, they can substantially reduce computation time as compared to the IBP method (in experiments demonstrated to be over an order of magnitude). 
Experimental results: Very reasonable and well thought out experiments demonstrate that a) the lower bounds produced by the method are very tight (much better than IBP for instance), and the backward&forward approach is an order of magnitude faster than the fully backward approach though they are both equally tight (the fully forward method is much looser by contrast) The experiments on using the bound to detect the word whose perturbation will cause biggest impact (ie the most important word for explaining the decision) is much less convincing. In particular in Table 3, the second example shows that the method identifiews *a*, *the* *our* etc, which at least do not appear to be most salient to this reviewer. [Question to authors: perhaps I misunderstood something here -- if so please explain and help me understand it better]. Evaluation / Suggestions. Overall I liked the paper quite a bit. I may not have fully understood something about table 3 and would welcome some explanation about why the results shown there indicate that their method works better int he second example.", "role": "assistant" } ]
[ "Contributions: This paper develops an algorithm for verifying the robustness of transformers with self-attention layers when the inputs for one input word embedding are perturbed.", "Unlike previous work the present work can deal with cross nonlinearity and cross position dependency and the lower bounds derived in the paper are much tighter than the Interval Boundary Propagation (IBP) method which uses backward propagation.", "The core contribution is expounded by developing bounds for multiplication (xy) and division (x/y) and using this to compute tight bounds on self-attention layer computations.", "Further by introducing a forward process then combining it with a backward process, they can substantially reduce computation time as compared to the IBP method (in experiments demonstrated to be over an order of magnitude).", "Experimental results: Very reasonable and well thought out experiments demonstrate that a) the lower bounds produced by the method are very tight (much better than IBP for instance), and the backward&forward approach is an order of magnitude faster than the fully backward approach though they are both equally tight (the fully forward method is much looser by contrast) The experiments on using the bound to detect the word whose perturbation will cause biggest impact (ie the most important word for explaining the decision) is much less convincing.", "In particular in Table 3, the second example shows that the method identifiews *a*, *the* *our* etc, which at least do not appear to be most salient to this reviewer.", "[Question to authors: perhaps I misunderstood something here -- if so please explain and help me understand it better].", "Evaluation / Suggestions.", "Overall I liked the paper quite a bit.", "I may not have fully understood something about table 3 and would welcome some explanation about why the results shown there indicate that their method works better int he second example." ]
{ "criticism": 2, "example": 2, "importance_and_relevance": 1, "materials_and_methods": 7, "praise": 2, "presentation_and_reporting": 3, "results_and_discussion": 3, "suggestion_and_solution": 3, "total": 10 }
0.2
0.2
0.1
0.7
0.2
0.3
0.3
0.3
10
1
1
1
1
1
1
1
1
2.3
2.157958
0.142042
Kernel of CycleGAN as a principal homogeneous space
Unpaired image-to-image translation has attracted significant interest due to the invention of CycleGAN, a method which utilizes a combination of adversarial and cycle consistency losses to avoid the need for paired data. It is known that the CycleGAN problem might admit multiple solutions, and our goal in this paper is to analyze the space of exact solutions and to give perturbation bounds for approximate solutions. We show theoretically that the exact solution space is invariant with respect to automorphisms of the underlying probability spaces, and, furthermore, that the group of automorphisms acts freely and transitively on the space of exact solutions. We examine the case of zero pure CycleGAN loss first in its generality, and, subsequently, expand our analysis to approximate solutions for extended CycleGAN loss where identity loss term is included. In order to demonstrate that these results are applicable, we show that under mild conditions nontrivial smooth automorphisms exist. Furthermore, we provide empirical evidence that neural networks can learn these automorphisms with unexpected and unwanted results. We conclude that finding optimal solutions to the CycleGAN loss does not necessarily lead to the envisioned result in image-to-image translation tasks and that underlying hidden symmetries can render the result useless.
This paper focuses on the CycleGAN method and shows theoretically when the exact solution space is invariant with respect to automorphisms of the underlying probability spaces for unpaired image-to-image translation. - The paper provides interesting theoretical results on identifying conditions under which CycleGAN admits nontrivial symmetries and has a natural structure of a principal homogeneous space. Proposition 2.1 provides interesting insights into the invariance of the kernel space. - Propositions 2.5 and 2.6 are interesting in that they show that the existence of automorphisms can worsen the performance of CycleGAN; however, it is unclear how, in practice, one could verify the conditions efficiently before applying CycleGAN. - The experimental results are interesting; however, they are very limited. Having a toy experiment is a good sanity check, but it would be more interesting to see the performance on real-world applications, such as medical images or other use-cases brought up in the introduction. Also, more discussion on the results provided in Fig 3, the confusion matrices, would be very helpful. Are there any intuitions behind the large and low values in the table? It could be interesting to see what the effects of other parameters such as alpha are in producing the results. - Overall this paper presents interesting results regarding the theory of CycleGANs; however, the numerical results are very limited and do not justify the motivations discussed in the introduction and the abstract. Moreover, although the paper introduces a novel attempt at theoretically analyzing CycleGAN, the scope of the work seems to be limited, and thus it does not have sufficient significance to be published in ICLR. I strongly suggest that the authors expand and provide more experimental evaluations. ** update: Thanks for your comments! I found the additional experiment useful and better aligned with the purpose of the model. The added discussion clarified the confusion about the automorphism. That is why I decided to change my score.
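As a reading aid (my own sketch, not an excerpt from the paper), the symmetry behind the propositions discussed above can be stated in a few lines: composing an exact solution with a measure-preserving automorphism of the target space yields another exact solution, which is why the solution space carries the group structure claimed in the abstract.

```latex
% Hedged sketch of the automorphism action; the notation here is mine, not the paper's.
Let $T:\mathcal{X}\to\mathcal{Y}$ and $S:\mathcal{Y}\to\mathcal{X}$ be an exact solution:
$S\circ T=\mathrm{id}_{\mathcal{X}}$, $T\circ S=\mathrm{id}_{\mathcal{Y}}$,
$T_{\#}P_{\mathcal{X}}=P_{\mathcal{Y}}$, $S_{\#}P_{\mathcal{Y}}=P_{\mathcal{X}}$.
For any automorphism $\sigma$ of $(\mathcal{Y},P_{\mathcal{Y}})$, set $T'=\sigma\circ T$ and $S'=S\circ\sigma^{-1}$. Then
\begin{align*}
S'\circ T' &= S\circ\sigma^{-1}\circ\sigma\circ T = \mathrm{id}_{\mathcal{X}}, &
T'\circ S' &= \sigma\circ (T\circ S)\circ\sigma^{-1} = \mathrm{id}_{\mathcal{Y}},\\
T'_{\#}P_{\mathcal{X}} &= \sigma_{\#}P_{\mathcal{Y}} = P_{\mathcal{Y}}, &
S'_{\#}P_{\mathcal{Y}} &= S_{\#}(\sigma^{-1}_{\#}P_{\mathcal{Y}}) = P_{\mathcal{X}},
\end{align*}
so $(T',S')$ is again an exact solution, and a nontrivial $\sigma$ produces a nontrivial second solution.
```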
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nKernel of CycleGAN as a principal homogeneous space\nUnpaired image-to-image translation has attracted significant interest due to the invention of CycleGAN, a method which utilizes a combination of adversarial and cycle consistency losses to avoid the need for paired data. It is known that the CycleGAN problem might admit multiple solutions, and our goal in this paper is to analyze the space of exact solutions and to give perturbation bounds for approximate solutions. We show theoretically that the exact solution space is invariant with respect to automorphisms of the underlying probability spaces, and, furthermore, that the group of automorphisms acts freely and transitively on the space of exact solutions. We examine the case of zero pure CycleGAN loss first in its generality, and, subsequently, expand our analysis to approximate solutions for extended CycleGAN loss where identity loss term is included. In order to demonstrate that these results are applicable, we show that under mild conditions nontrivial smooth automorphisms exist. Furthermore, we provide empirical evidence that neural networks can learn these automorphisms with unexpected and unwanted results. We conclude that finding optimal solutions to the CycleGAN loss does not necessarily lead to the envisioned result in image-to-image translation tasks and that underlying hidden symmetries can render the result useless.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper focuses on CycleGAN method to show theoretically when the exact solution space is invariant with respect to authomorphisms of the underlying probability spaces for unpaired image-to-image translation. - The paper provides interesting theoretical results on identifying conditions under which CycleGAN admits nontrivial symmetries and has a natural structure of a principal homogeneous space. Proposition 2.1 provides interesting insights into the invariance of the kernel space. 
- Propositions 2.5 and 2.6 are interesting in that they show that the existence of authomorphisms can worsen the performance of CycleGAN, however, it is unclear that in practice, how could one verify the conditions efficiently before applying CycleGAN. - The experimental results are interesting, however, they are very limited. Having a toy experiment is a good sanity check, but it would be more interesting to see the performance on a real-world applications, such as medical images or other use-cases brought in the introduction. Also, more discussion on the results provided in Fig 3, confusion matrices, would be very helpful. Are there any intuitions behind the large and low values in the table? It could be interesting to see what are the effects of other parameters such as alpha in producing the results. - Overall this paper presents interesting results regarding the theory of CycleGANs, however, the numerical results are very limited, and do not justify the motivations discussed in the introduction and the abstract. Moreover, although the paper introduces novel attempts and theoretically analyzing the CycleGAN, the scope of the work seems to be limited, and thus, it does not have a sufficient significance to be published in ICLR. I strongly suggest the authors to expand and provide more experimental evaluations. ** update: Thanks for your comments! I found the additional experiment useful and better aligned with the purpose of the model. The discussion added clarified the confusion about the automorphism. That is why I decided to change my score.", "role": "assistant" } ]
[ "This paper focuses on CycleGAN method to show theoretically when the exact solution space is invariant with respect to authomorphisms of the underlying probability spaces for unpaired image-to-image translation.", "- The paper provides interesting theoretical results on identifying conditions under which CycleGAN admits nontrivial symmetries and has a natural structure of a principal homogeneous space.", "Proposition 2.1 provides interesting insights into the invariance of the kernel space.", "- Propositions 2.5 and 2.6 are interesting in that they show that the existence of authomorphisms can worsen the performance of CycleGAN, however, it is unclear that in practice, how could one verify the conditions efficiently before applying CycleGAN.", "- The experimental results are interesting, however, they are very limited.", "Having a toy experiment is a good sanity check, but it would be more interesting to see the performance on a real-world applications, such as medical images or other use-cases brought in the introduction.", "Also, more discussion on the results provided in Fig 3, confusion matrices, would be very helpful.", "Are there any intuitions behind the large and low values in the table?", "It could be interesting to see what are the effects of other parameters such as alpha in producing the results.", "- Overall this paper presents interesting results regarding the theory of CycleGANs, however, the numerical results are very limited, and do not justify the motivations discussed in the introduction and the abstract.", "Moreover, although the paper introduces novel attempts and theoretically analyzing the CycleGAN, the scope of the work seems to be limited, and thus, it does not have a sufficient significance to be published in ICLR.", "I strongly suggest the authors to expand and provide more experimental evaluations.", "** update: Thanks for your comments!", "I found the additional experiment useful and better aligned with the purpose of the model.", "The discussion added clarified the confusion about the automorphism.", "That is why I decided to change my score." ]
{ "criticism": 4, "example": 1, "importance_and_relevance": 6, "materials_and_methods": 7, "praise": 6, "presentation_and_reporting": 1, "results_and_discussion": 9, "suggestion_and_solution": 4, "total": 16 }
0.25
0.0625
0.375
0.4375
0.375
0.0625
0.5625
0.25
16
1
1
1
1
1
1
1
1
2.375
2.232958
0.142042
Distributionally Robust Neural Networks
Overparameterized neural networks trained to minimize average loss can be highly accurate on average on an i.i.d. test set, yet consistently fail on atypical groups of the data (e.g., by learning spurious correlations that do not hold at test time). Distributionally robust optimization (DRO) provides an approach for learning models that instead minimize worst-case training loss over a set of pre-defined groups. We find, however, that naively applying DRO to overparameterized neural networks fails: these models can perfectly fit the training data, and any model with vanishing average training loss will also already have vanishing worst-case training loss. Instead, the poor worst-case performance of these models arises from poor generalization on some groups. As a solution, we show that increased regularization---e.g., stronger-than-typical weight decay or early stopping---allows DRO models to achieve substantially higher worst-group accuracies, with 10% to 40% improvements over standard models on a natural language inference task and two image tasks, while maintaining high average accuracies. Our results suggest that regularization is critical for worst-group performance in the overparameterized regime, even if it is not needed for average performance. Finally, we introduce and provide convergence guarantees for a stochastic optimizer for this group DRO setting, underpinning the empirical study above.
To the best of my knowledge, this is the first paper to carefully address and propose an algorithm (with guarantees) for distributionally robust learning in the overparametrized regime, which is typical of modern large deep neural networks. Following other work, the paper formalizes distributionally robust learning as the minimization of a worst-case loss over a set of possible distributions. The main message of the paper is that for distributionally robust learning, regularization plays an important role by avoiding perfect fitting to the training data, at the cost of poor generalization (thus lack of robustness) in some of the possible distributions. Furthermore, the paper proposes a new stochastic optimization algorithm to minimize the loss that corresponds to distributionally robust learning and gives convergence guarantees for the proposed algorithm. I think this is an interesting and solid paper, with a clear presentation style, and well-supported contributions. To the best of my knowledge, this is novel work and, in my opinion, it is relevant work, both in terms of applicability as well as in terms of contribution to the understanding of the generalization behavior of overparameterized deep neural networks.
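To make the worst-case objective described above concrete, here is a minimal sketch (my own illustration with invented names and numbers; it shows only the group worst-case objective, not the paper's stochastic optimizer or regularization): per-example losses are averaged within each pre-defined group, and the largest group average is what a group DRO model is trained to keep small.

```python
# Group worst-case loss: average within each pre-defined group, then take the max.
# Group names and loss values are invented placeholders.
import numpy as np

def worst_group_loss(losses, groups):
    losses, groups = np.asarray(losses, float), np.asarray(groups)
    group_means = {str(g): float(losses[groups == g].mean()) for g in np.unique(groups)}
    worst = max(group_means, key=group_means.get)
    return group_means[worst], worst

losses = [0.2, 0.1, 0.9, 1.1, 0.3, 0.4]
groups = ["majority", "majority", "minority", "minority", "majority", "majority"]
print(worst_group_loss(losses, groups))  # (1.0, 'minority')
```

The average loss over all six examples here is 0.5, while the worst-group loss is 1.0; that gap between average-case and worst-group performance is exactly what the abstract is concerned with.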
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDistributionally Robust Neural Networks\nOverparameterized neural networks trained to minimize average loss can be highly accurate on average on an i.i.d. test set, yet consistently fail on atypical groups of the data (e.g., by learning spurious correlations that do not hold at test time). Distributionally robust optimization (DRO) provides an approach for learning models that instead minimize worst-case training loss over a set of pre-defined groups. We find, however, that naively applying DRO to overparameterized neural networks fails: these models can perfectly fit the training data, and any model with vanishing average training loss will also already have vanishing worst-case training loss. Instead, the poor worst-case performance of these models arises from poor generalization on some groups. As a solution, we show that increased regularization---e.g., stronger-than-typical weight decay or early stopping---allows DRO models to achieve substantially higher worst-group accuracies, with 10% to 40% improvements over standard models on a natural language inference task and two image tasks, while maintaining high average accuracies. Our results suggest that regularization is critical for worst-group performance in the overparameterized regime, even if it is not needed for average performance. Finally, we introduce and provide convergence guarantees for a stochastic optimizer for this group DRO setting, underpinning the empirical study above.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nTo the best of my knowledge, this is the first paper to carefully address and propose an algorithm (with guarantees) for distributionally robust learning in the overparametrized regime, which is typical of modern large deep neural networks. Following other work, the paper formalizes distributionally robust learning as the minimization of a worst-case loss over a set of possible distributions. 
The main message of the paper is that for distributionally robust learning, regularization plays an important role by avoiding perfect fitting to the training data, at the cost of poor generalization (thus lack of robustness) in some of the possible distributions. Furthermore, the paper proposes a new stochastic optimization algorithm to minimize the loss that corresponds to distributionally robust learning and gives convergence guarantees for the proposed algorithm. I think this is an interesting and solid paper, with a clear presentation style, and well-supported contributions. To the best of my knowledge, this is novel work and, in my opinion, it is relevant work, both in terms of applicability as well as in terms of contribution to the understanding of the generalization behavior of overparameterized deep neural networks.", "role": "assistant" } ]
[ "To the best of my knowledge, this is the first paper to carefully address and propose an algorithm (with guarantees) for distributionally robust learning in the overparametrized regime, which is typical of modern large deep neural networks.", "Following other work, the paper formalizes distributionally robust learning as the minimization of a worst-case loss over a set of possible distributions.", "The main message of the paper is that for distributionally robust learning, regularization plays an important role by avoiding perfect fitting to the training data, at the cost of poor generalization (thus lack of robustness) in some of the possible distributions.", "Furthermore, the paper proposes a new stochastic optimization algorithm to minimize the loss that corresponds to distributionally robust learning and gives convergence guarantees for the proposed algorithm.", "I think this is an interesting and solid paper, with a clear presentation style, and well-supported contributions.", "To the best of my knowledge, this is novel work and, in my opinion, it is relevant work, both in terms of applicability as well as in terms of contribution to the understanding of the generalization behavior of overparameterized deep neural networks." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 4, "materials_and_methods": 3, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 3, "suggestion_and_solution": 0, "total": 6 }
0
0
0.666667
0.5
0.333333
0.166667
0.5
0
6
0
0
1
1
1
1
1
0
2.166667
1.393328
0.773339
Probability Calibration for Knowledge Graph Embedding Models
Knowledge graph embedding research has overlooked the problem of probability calibration. We show popular embedding models are indeed uncalibrated. That means probability estimates associated with predicted triples are unreliable. We present a novel method to calibrate a model when ground truth negatives are not available, which is the usual case in knowledge graphs. We propose to use Platt scaling and isotonic regression alongside our method. Experiments on three datasets with ground truth negatives show our contribution leads to well-calibrated models when compared to the gold standard of using negatives. We get significantly better results than the uncalibrated models from all calibration methods. We show isotonic regression offers the best performance overall, though not without trade-offs. We also show that calibrated models reach state-of-the-art accuracy without the need to define relation-specific decision thresholds.
1. Summary The paper studies probability calibration for three different knowledge graph embedding methods, with a focus on TransE evaluated on the task of knowledge graph triple classification. It studies Brier and log loss performance of Platt scaling and isotonic regression probability calibration on WN11 for TransE and claims that better calibration yields better performance as measured by mean reciprocal rank. Calibration plots for other datasets are also included as evidence. Furthermore, evidence is presented that probability calibration can lead to better performance for the task of triple classification. The main contributions of the paper also include the adaptation of sampling techniques introduced by Bordes et al. (2013) for estimating negatives for probability calibration. 2. Decision (See the updated decision in the comment below) Probability calibration is a very relevant issue, particularly in industry and when combining knowledge graph embedding models as external data in other models. Thus I see this work as a valuable contribution to the literature. In particular, I like the analysis from multiple views: calibration plots, calibration metrics, and model performance. However, there is currently not enough evidence in the paper to make recommendations or judgments about when researchers and practitioners may want to use probability calibration. I also believe the datasets and models are not well tied into the literature; for example, in 2018/2019 I can find 3 papers for triple classification and 9 papers for link prediction as triple/entity ranking, and from the data it is not clear how probability calibration affects the latter. In the current state of the work, I recommend rejecting this work. 3. Further supporting arguments As a researcher and practitioner in this area, I know very well that predictions of most knowledge graph embedding models usually live near the decision boundary, so that there is little difference between the probability or score of a true positive and a false positive. Also, talking with people in industry, I heard that word embedding models are currently not that useful practically because they make too many useless predictions. This shows me that probability calibration is an important topic and I see this study as an important contribution to a field which is often mindlessly following evaluation metrics. However, from experience I also know that evaluation of knowledge graph embedding methods is not very reliable; that is, people often get widely varying results and replication is difficult. Thus it is difficult to trust results if they are not well tied into the literature and compared against multiple datasets and models. This work focuses on three models: TransE, DistMult and ComplEx. DistMult and ComplEx have become models that are viewed as quite reliable to compare against. However, their performance is mostly studied on a learning-to-rank objective on datasets such as FB15k-237 and WN18RR. The authors report Brier score for their synthetic calibration method on these datasets, but do not report any modeling results. Inclusion of results on these datasets would greatly improve this work. The authors also currently focus on establishing that probability calibration improves the performance of the models. They claim that low Brier score or log loss are tied to good performance, but Pairwise and Multiclass-NLL losses achieve similar Brier/log loss performance while the MRR is double for Multiclass-NLL compared to the Pairwise loss.
NLL and Multiclass NLL losses have similar MRR but very different Brier/log loss performance. As such I do not think this claim is sufficiently substantiated. I do not believe it is necessary to establish that better probability calibration is correlated with better model performance. I view the careful study of probability calibration and its effects per se as more useful. As mentioned above, I also believe the results on WN11, FB13, and YAGO39k are not sufficient to evaluate the effect of probability calibration. 4. Additional feedback I really like this work. I think adding more results would make this paper great and I would be happy to change my acceptance decision. As mentioned above, I believe including results on FB15k-237 and WN18RR would make the results easier to interpret. Please also add more results to the table (no need to rerun those experiments, take them from other papers). I really like the analysis of Brier Score/Log loss and MRR. I think if you extended this it would give very valuable insights into how probability calibration relates to performance. One additional experiment which I do not deem critical, but which would improve your work further, would be to tie probability calibration into a more practical setting. A setting that is also very interesting to researchers is whether probability calibration affects the results in tasks where you use word embedding models as an external *knowledge source*. I really like Kumar et al., 2019 [1] since their word embedding model integrated into an entity linking model beats a strong BERT baseline. But I think a study of any task/model of your choice that integrates a knowledge embedding model would be a valuable addition to your work. Again, as I mentioned above, I do not believe it is critical to show improved performance on these tasks; a study of the effects of probability calibration is valuable in its own right. You might want to slightly pivot in this direction if you have sufficient evidence to make judgments about the effects of probability calibration. Further small details: In the introduction, you make specific claims and justify them by citing a survey paper (Nickel et al., 2016). It would be easier for the reader to look up these claims in the source rather than in the survey paper. I believe there is a typo in your derivation in equation (6): the denominator of the second term should be just w-N + N or N(w- + 1). [1] Zero-shot Word Sense Disambiguation using Sense Definition Embeddings: https://www.aclweb.org/anthology/P19-1568/
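For context on the calibration techniques and the Brier score discussed throughout this review, here is a hedged sketch on synthetic scores and labels (my own illustration, not the paper's data, code, or proposed negative-sampling method); scikit-learn's LogisticRegression stands in for Platt scaling and IsotonicRegression for the monotone alternative.

```python
# Two standard calibration routes applied to raw scores with synthetic labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
scores = rng.normal(size=500)                          # raw triple scores (illustrative)
labels = (scores + rng.normal(scale=1.0, size=500) > 0).astype(int)

def brier(p, y):
    """Mean squared error between predicted probabilities and 0/1 labels."""
    return float(np.mean((p - y) ** 2))

# Platt scaling: a logistic regression on the raw score.
platt = LogisticRegression().fit(scores.reshape(-1, 1), labels)
p_platt = platt.predict_proba(scores.reshape(-1, 1))[:, 1]

# Isotonic regression: a monotone, piecewise-constant map from score to probability.
iso = IsotonicRegression(out_of_bounds="clip").fit(scores, labels)
p_iso = iso.predict(scores)

p_raw = 1.0 / (1.0 + np.exp(-scores))                  # uncalibrated sigmoid of the score
print("Brier score raw / Platt / isotonic:",
      round(brier(p_raw, labels), 3),
      round(brier(p_platt, labels), 3),
      round(brier(p_iso, labels), 3))
```

In a real evaluation the calibration map would be fit on held-out data rather than scored on the same points it was fit to; the paper's contribution, as the abstract states, is how to obtain something to fit against when ground-truth negatives are unavailable.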
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nProbability Calibration for Knowledge Graph Embedding Models\nKnowledge graph embedding research has overlooked the problem of probability calibration. We show popular embedding models are indeed uncalibrated. That means probability estimates associated to predicted triples are unreliable. We present a novel method to calibrate a model when ground truth negatives are not available, which is the usual case in knowledge graphs. We propose to use Platt scaling and isotonic regression alongside our method. Experiments on three datasets with ground truth negatives show our contribution leads to well calibrated models when compared to the gold standard of using negatives. We get significantly better results than the uncalibrated models from all calibration methods. We show isotonic regression offers the best the performance overall, not without trade-offs. We also show that calibrated models reach state-of-the-art accuracy without the need to define relation-specific decision thresholds.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n1. Summary The paper studies probability calibration for three different knowledge graph embedding methods, with a focus on TransE evaluated on the task of knowledge graph triple classification. It studies Brier and log loss performance of Platt scaling and isotonic regression probability calibration on WN11 for TransE and claims that better calibration yields better performance as measured by mean reciprocal rank. Calibration plots for other datasets are also included as evidence. Furthermore, evidence is presented that probability calibration can lead to better performance for the task of triple classification. The main contributions of the paper also include the adaption of sampling techniques introduced by Bordes et al. (2013) adapted for estimating negatives for probability calibrations. 2. 
Decision (See the updated decision in the comment below) Probability calibration is a very relevant issue, particularly in industry and when combining knowledge graph embedding models as external data in other models. Thus I see this work as a valuable contribution to the literature. In particular, I like the analysis from multiple views: Calibration plots, calibration metrics, and model performance. However, there is currently not enough evidence in the paper to make recommendations or judgments about when researchers and practitioners may want to use probability calibration. I also believe the datasets and models are not well tied into the literature, for example, in 2018/2019 I can find 3 papers for triple classification and 9 papers for link prediction as triple/entity ranking and from the data, it is not clear how probability calibration affects the latter. In the current state of the work, I recommend rejecting this work. 3. Further supporting arguments As a researcher and practitioner in this area, I know very well that predictions of most knowledge graph embedding models usually live near the decision boundary so that there is little difference between probability or score of a true positive and false positive. Also talking with people in industry, I heard that word embedding models are currently not that useful practically because they make too many useless predictions. This shows me that probability calibration is an important topic and I see this study as an important contribution to the field which is often mindlessly following evaluation metrics. However, from experience I also know that evaluation of knowledge graph embedding methods is not very reliable, that is people often get widely varying results and replication is difficult. Thus it is difficult to trust results if they are not well tied into the literature and compared against multiple datasets and models. This work focuses on three models TranseE, DistMult and ComplEx. DistMult and ComplEx have become models that are viewed as quite reliable to compare against. However, their performance is mostly studied on a learning-to-rank objective on datasets such as FB15k-237 and WN18RR. The authors report Brier score for their synthetic calibration method these datasets, but do not report any modeling results. Inclusions of results on these datasets would greatly improve this work. The authors also currently focus on establishing that probability calibration improves the performance of the models. They claim that low Brier score or log loss are tied to good performance, but Pairwise and Multiclass-NLL loses achieve similar Brier/log loss performance while the MRR is double for Multiclass-NLL compared to the Pairwise loss. NLL and Multiclass NLL losses have similar MRR but very different Brier/log loss performance. As such I do not think this claim is sufficiently substantiated. I do not believe it is necessary to establish that better probability calibration is correlated with better model performance. I view the careful study of probability calibration and its effects per se as more useful. As mentioned above, I also believe the results on WN11, FB13, and YAGO39k to not be sufficient to evaluate the effect of probability calibration. 4. Additional feedback I really like this work. I think adding more results would make this paper great and I would be happy to change my acceptance decision. As mentioned above I believe including results on FB15k-237 and WN18RR would make the results easier to interpret. 
Please also add more results to the table (no need to rerun those experiments, take them from other papers). I really like the analysis of Brier Score/Log loss and MRR. I think if you would extend this it would give very valuable insights into how probability calibration relates to performance. One additional experiment which I do not deem critical, but which would improve your work further would be to tie probability calibration into a more practical setting. A setting that is also very interesting to researchers is if probability calibration would affect the results in tasks where you use word embedding models as an external *knowledge source*. I really like Kumar et al., 2019[1] since their word embedding model integrated into an entity linking model beats a strong BERT baseline. But I think a study of any task/model of your choice that integrates a knowledge embedding model would be a valuable addition to your work. Again as I mentioned above, I do not believe it is critical to show improve performance on these tasks, a study of the effects of probability calibration is valuable in its own right. You might want to slightly pivot into this direction if you have sufficient evidence to make judgments about the effects of probability calibration. Further small details: In the introduction, you make specific claims and justify them by citing a survey paper (Nickel et al., 2016). It would be easier for the reader to look up these claims in the source rather than in the survey paper. I believe there is a typo in your derivation in equation (6): the denominator of the second term should be just w-N + N or N(w- + 1). [1] Zero-shot Word Sense Disambiguation using Sense Definition Embeddings: https://www.aclweb.org/anthology/P19-1568/", "role": "assistant" } ]
[ "1.", "Summary The paper studies probability calibration for three different knowledge graph embedding methods, with a focus on TransE evaluated on the task of knowledge graph triple classification.", "It studies Brier and log loss performance of Platt scaling and isotonic regression probability calibration on WN11 for TransE and claims that better calibration yields better performance as measured by mean reciprocal rank.", "Calibration plots for other datasets are also included as evidence.", "Furthermore, evidence is presented that probability calibration can lead to better performance for the task of triple classification.", "The main contributions of the paper also include the adaption of sampling techniques introduced by Bordes et al.", "(2013) adapted for estimating negatives for probability calibrations.", "2.", "Decision (See the updated decision in the comment below) Probability calibration is a very relevant issue, particularly in industry and when combining knowledge graph embedding models as external data in other models.", "Thus I see this work as a valuable contribution to the literature.", "In particular, I like the analysis from multiple views: Calibration plots, calibration metrics, and model performance.", "However, there is currently not enough evidence in the paper to make recommendations or judgments about when researchers and practitioners may want to use probability calibration.", "I also believe the datasets and models are not well tied into the literature, for example, in 2018/2019 I can find 3 papers for triple classification and 9 papers for link prediction as triple/entity ranking and from the data, it is not clear how probability calibration affects the latter.", "In the current state of the work, I recommend rejecting this work.", "3.", "Further supporting arguments As a researcher and practitioner in this area, I know very well that predictions of most knowledge graph embedding models usually live near the decision boundary so that there is little difference between probability or score of a true positive and false positive.", "Also talking with people in industry, I heard that word embedding models are currently not that useful practically because they make too many useless predictions.", "This shows me that probability calibration is an important topic and I see this study as an important contribution to the field which is often mindlessly following evaluation metrics.", "However, from experience I also know that evaluation of knowledge graph embedding methods is not very reliable, that is people often get widely varying results and replication is difficult.", "Thus it is difficult to trust results if they are not well tied into the literature and compared against multiple datasets and models.", "This work focuses on three models TranseE, DistMult and ComplEx.", "DistMult and ComplEx have become models that are viewed as quite reliable to compare against.", "However, their performance is mostly studied on a learning-to-rank objective on datasets such as FB15k-237 and WN18RR.", "The authors report Brier score for their synthetic calibration method these datasets, but do not report any modeling results.", "Inclusions of results on these datasets would greatly improve this work.", "The authors also currently focus on establishing that probability calibration improves the performance of the models.", "They claim that low Brier score or log loss are tied to good performance, but Pairwise and Multiclass-NLL loses achieve similar Brier/log loss performance while 
the MRR is double for Multiclass-NLL compared to the Pairwise loss.", "NLL and Multiclass NLL losses have similar MRR but very different Brier/log loss performance.", "As such I do not think this claim is sufficiently substantiated.", "I do not believe it is necessary to establish that better probability calibration is correlated with better model performance.", "I view the careful study of probability calibration and its effects per se as more useful.", "As mentioned above, I also believe the results on WN11, FB13, and YAGO39k to not be sufficient to evaluate the effect of probability calibration.", "4.", "Additional feedback I really like this work.", "I think adding more results would make this paper great and I would be happy to change my acceptance decision.", "As mentioned above I believe including results on FB15k-237 and WN18RR would make the results easier to interpret.", "Please also add more results to the table (no need to rerun those experiments, take them from other papers).", "I really like the analysis of Brier Score/Log loss and MRR.", "I think if you would extend this it would give very valuable insights into how probability calibration relates to performance.", "One additional experiment which I do not deem critical, but which would improve your work further would be to tie probability calibration into a more practical setting.", "A setting that is also very interesting to researchers is if probability calibration would affect the results in tasks where you use word embedding models as an external *knowledge source*.", "I really like Kumar et al., 2019[1] since their word embedding model integrated into an entity linking model beats a strong BERT baseline.", "But I think a study of any task/model of your choice that integrates a knowledge embedding model would be a valuable addition to your work.", "Again as I mentioned above, I do not believe it is critical to show improve performance on these tasks, a study of the effects of probability calibration is valuable in its own right.", "You might want to slightly pivot into this direction if you have sufficient evidence to make judgments about the effects of probability calibration.", "Further small details: In the introduction, you make specific claims and justify them by citing a survey paper (Nickel et al., 2016).", "It would be easier for the reader to look up these claims in the source rather than in the survey paper.", "I believe there is a typo in your derivation in equation (6): the denominator of the second term should be just w-N + N or N(w- + 1).", "[1] Zero-shot Word Sense Disambiguation using Sense Definition Embeddings: https://www.aclweb.org/anthology/P19-1568/" ]
{ "criticism": 11, "example": 0, "importance_and_relevance": 12, "materials_and_methods": 35, "praise": 10, "presentation_and_reporting": 6, "results_and_discussion": 22, "suggestion_and_solution": 14, "total": 49 }
0.22449
0
0.244898
0.714286
0.204082
0.122449
0.44898
0.285714
49
1
0
1
1
1
1
1
1
2.244898
-18.209131
20.454029
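Note (editorial, not part of the dataset card): the scalar rows that follow each record's criteria_count, namely the eight per-category fractions, the repeated total, the has_* flags, reward_value, reward_value_length_adjusted, and length_penalty, are consistent with a simple derivation from the counts. The sketch below reproduces that arithmetic for the record above; the function name derive_columns is ours, and the length_penalty formula is not recoverable from these rows, so it is passed in as a given value.

```python
# Minimal sketch of how the derived columns appear to relate to criteria_count.
# Field names follow the schema shown at the top of this dump.
CATEGORIES = [
    "criticism", "example", "importance_and_relevance", "materials_and_methods",
    "praise", "presentation_and_reporting", "results_and_discussion",
    "suggestion_and_solution",
]

def derive_columns(criteria_count: dict, length_penalty: float) -> dict:
    total = criteria_count["total"]          # equals len(review_sentences) in the records shown
    out = {}
    for cat in CATEGORIES:
        count = criteria_count[cat]
        out[cat] = count / total             # per-category fraction of review sentences
        out[f"has_{cat}"] = int(count > 0)   # binary presence flag
    out["total"] = total
    out["reward_value"] = sum(out[cat] for cat in CATEGORIES)
    out["reward_value_length_adjusted"] = out["reward_value"] - length_penalty
    out["length_penalty"] = length_penalty
    return out

# Example: the record above (total=49, criticism=11) gives criticism = 11/49 ≈ 0.22449,
# reward_value ≈ 2.244898 and reward_value_length_adjusted ≈ -18.209131, matching the rows shown.
```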
Why Not to Use Zero Imputation? Correcting Sparsity Bias in Training Neural Networks
Handling missing data is one of the most fundamental problems in machine learning. Among many approaches, the simplest and most intuitive way is zero imputation, which treats the value of a missing entry simply as zero. However, many studies have experimentally confirmed that zero imputation results in suboptimal performances in training neural networks. Yet, none of the existing work has explained what brings such performance degradations. In this paper, we introduce the variable sparsity problem (VSP), which describes a phenomenon where the output of a predictive model largely varies with respect to the rate of missingness in the given input, and show that it adversarially affects the model performance. We first theoretically analyze this phenomenon and propose a simple yet effective technique to handle missingness, which we refer to as Sparsity Normalization (SN), that directly targets and resolves the VSP. We further experimentally validate SN on diverse benchmark datasets, to show that debiasing the effect of input-level sparsity improves the performance and stabilizes the training of neural networks.
This paper studies a very interesting phenomenon in machine learning called VSP: the output of the model is highly affected by the level of missing values in its input. The authors demonstrate the existence of this phenomenon empirically, analyze its root cause theoretically, and propose a simple yet effective normalization method to tackle the problem. Several experiments demonstrate the effectiveness of this method. In general I think the paper is decent and elegant. It is motivated by a real-world pain point, gives a rigorous study of the root cause, and the proposed method is very effective. To the best of my knowledge there is no prior work looking deeply into this area, and this paper does bring new insights to the community. As a result I would vote for its acceptance. One issue is that I find the backbone methods in the experiments somewhat out-of-date. For example, AutoRec (2015) and CF-NADE (2016). I admit that I’m not an expert in the field of recommendation but still think that more recent and powerful baseline algorithms should be applied to further demonstrate the true effectiveness of Sparsity Normalization.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nWhy Not to Use Zero Imputation? Correcting Sparsity Bias in Training Neural Networks\nHandling missing data is one of the most fundamental problems in machine learning. Among many approaches, the simplest and most intuitive way is zero imputation, which treats the value of a missing entry simply as zero. However, many studies have experimentally confirmed that zero imputation results in suboptimal performances in training neural networks. Yet, none of the existing work has explained what brings such performance degradations. In this paper, we introduce the variable sparsity problem (VSP), which describes a phenomenon where the output of a predictive model largely varies with respect to the rate of missingness in the given input, and show that it adversarially affects the model performance. We first theoretically analyze this phenomenon and propose a simple yet effective technique to handle missingness, which we refer to as Sparsity Normalization (SN), that directly targets and resolves the VSP. We further experimentally validate SN on diverse benchmark datasets, to show that debiasing the effect of input-level sparsity improves the performance and stabilizes the training of neural networks.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper studies a very interesting phenomena in machine learning called VSP, that is the output of the model is highly affected via the level of missing values in its input. The authors demonstrate the existence of such phenomena empirically, analyze the root cause for it theoretically, and propose a simple yet effective normalization method to tackle the problem. Several experiments demonstrate the effectiveness of this method. In general I think the paper is descent and elegant. It is motivated from real-world pain-point, gives a rigorous study towards the root cause, and the proposed method is very effective. To the best of my knowledge there is no prior work looking deep into this area and this paper does bring new insights to the community. 
As a result I would vote for its acceptance. One issue is that I find the backbone methods in experiments are somehow out-of-date. For example, AutoRec (2015) and CF-NADE (2016). I admit that I’m not an expert in the field of recommendation but still think that more recent, and powerful baseline algorithms should be applied on to further demonstrate the true effectiveness of Sparsity Normalization.", "role": "assistant" } ]
[ "This paper studies a very interesting phenomena in machine learning called VSP, that is the output of the model is highly affected via the level of missing values in its input.", "The authors demonstrate the existence of such phenomena empirically, analyze the root cause for it theoretically, and propose a simple yet effective normalization method to tackle the problem.", "Several experiments demonstrate the effectiveness of this method.", "In general I think the paper is descent and elegant.", "It is motivated from real-world pain-point, gives a rigorous study towards the root cause, and the proposed method is very effective.", "To the best of my knowledge there is no prior work looking deep into this area and this paper does bring new insights to the community.", "As a result I would vote for its acceptance.", "One issue is that I find the backbone methods in experiments are somehow out-of-date.", "For example, AutoRec (2015) and CF-NADE (2016).", "I admit that I’m not an expert in the field of recommendation but still think that more recent, and powerful baseline algorithms should be applied on to further demonstrate the true effectiveness of Sparsity Normalization." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 5, "materials_and_methods": 6, "praise": 4, "presentation_and_reporting": 1, "results_and_discussion": 3, "suggestion_and_solution": 1, "total": 10 }
0.2
0
0.5
0.6
0.4
0.1
0.3
0.1
10
1
0
1
1
1
1
1
1
2.2
2.057958
0.142042
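Note (editorial): the conversations column in each record appears to follow a fixed two-turn template built from the other columns: a user turn that wraps the title and abstract under the same peer-review prompt, and an assistant turn that concatenates the shared thinking_trace with the review text. A minimal sketch under that assumption; the helper name build_conversation and the four-newline separator are inferred from the records shown, not from any official loader.

```python
# Sketch of the apparent conversations template; inferred from the records above.
PROMPT = ("You are a member of the scientific community tasked with peer review. "
          "Review the following paper title and abstract.\n### Title + Abstract\n")

def build_conversation(title: str, abstract: str, thinking_trace: str, review: str) -> list:
    return [
        {"role": "user", "content": f"{PROMPT}{title}\n{abstract}"},
        # The assistant turn concatenates the shared thinking trace and the review.
        {"role": "assistant", "content": f"{thinking_trace}\n\n\n\n{review}"},
    ]
```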
DropEdge: Towards Deep Graph Convolutional Networks on Node Classification
Over-fitting and over-smoothing are two main obstacles of developing deep Graph Convolutional Networks (GCNs) for node classification. In particular, over-fitting weakens the generalization ability on small dataset, while over-smoothing impedes model training by isolating output representations from the input features with the increase in network depth. This paper proposes DropEdge, a novel and flexible technique to alleviate both issues. At its core, DropEdge randomly removes a certain number of edges from the input graph at each training epoch, acting like a data augmenter and also a message passing reducer. Furthermore, we theoretically demonstrate that DropEdge either retards the convergence speed of over-smoothing or relieves the information loss caused by it. More importantly, our DropEdge is a general skill that can be equipped with many other backbone models (e.g. GCN, ResGCN, GraphSAGE, and JKNet) for enhanced performance. Extensive experiments on several benchmarks verify that DropEdge consistently improves the performance on a variety of both shallow and deep GCNs. The effect of DropEdge on preventing over-smoothing is empirically visualized and validated as well. Codes will be made public upon the publication.
The authors propose a simple but effective strategy that aims to alleviate not only overfitting, but also feature degradation (oversmoothing) in deep graph convolutional networks (GCNs). Inspired by dropout in traditional MLPs and convnets, the authors clearly motivate their contribution in terms of alleviating both overfitting and oversmoothing, which are problems established both in previous literature as well as validated empirically by the authors. Ultimately, the authors provide solid empirical evidence that, while a bit heuristic, their method is effective at alleviating at least partially the issues of overfitting and oversmoothing. I vote weak-accept in light of convincing empirical results, some theoretical exploration of the method's properties, but limited novelty.
Pros:
- Simple, intuitive method
- Draws from existing literature relating to dropout-like methods
- Little computational overhead
- Solid experimental justification
- Some theoretical support for the method
Cons:
- Method is somewhat heuristic
- Mitigates, rather than solves, the issue of oversmoothing
- Limited novelty (straightforward extension of dropout to graph edges)
- Unclear why dropping edges is a "valid" augmentation
Follow-up questions/areas for improving score: It would be nice to have a principled way of choosing the dropout proportion; 0.8 is chosen somewhat arbitrarily by the authors (presumably because it generally performed well). There is at least a nice interpretation of choosing 0.5 for the dropout proportion in regular dropout (maximum regularization). As brought up in the comments, adapting the edges to drop out to the graph's properties is an interesting direction to explore. While the authors state that they would like to keep the method simple and general, the method is ultimately devised as an adaptation of dropout to graphs, so exploiting graph-specific properties seems reasonable and a potential avenue to further improving performance. p2: "First, DropEdge can be considered as a data augmentation technique" Why are these augmentations valid; why should the output of the network be invariant to these augmentations? I would like to see some justification for why the proposed random modification of the graph structure is valid; intuitively, it seems like it might make the learning problem impossible in some cases. Deeper analysis of the (more interesting, I think) layer-independent regime would be nice. (As a side note, the name "layer-independent" for this regime is a bit confusing, as the edges dropped out *do* depend on the layer here, whereas in the "layer dependent" regime, edges dropped out do *not* depend on the layer).
Comments:
- Figure 1 could probably be re-organized to better highlight the comparison between GCNs with and without DropEdge; consolidating the content into 2 figures instead of 4 might be more easily parsable. Adding figure-specific captions and defining the x axis would also be nice.
- Use "reduce" in place of "retard"
- p2 "With contending the scalability" improve phrasing
- p2 "By recent," -> "Recently,"
- p2 "difficulty on" -> "difficulty in"
- p2 "deep networks lying" -> "deep networks lies"
- p3 "which is a generation of the conclusion" improve phrasing
- p3 "disconnected between" -> "disconnected from"
- p4 "adjacent matrix" -> "adjacency matrix"
- p4 "severer" -> "more severe"
- p5 "but has no help" -> "but is no help"
- p5 "no touch to the adjacency matrix" -> improve phrasing
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDropEdge: Towards Deep Graph Convolutional Networks on Node Classification\nOver-fitting and over-smoothing are two main obstacles of developing deep Graph Convolutional Networks (GCNs) for node classification. In particular, over-fitting weakens the generalization ability on small dataset, while over-smoothing impedes model training by isolating output representations from the input features with the increase in network depth. This paper proposes DropEdge, a novel and flexible technique to alleviate both issues. At its core, DropEdge randomly removes a certain number of edges from the input graph at each training epoch, acting like a data augmenter and also a message passing reducer. Furthermore, we theoretically demonstrate that DropEdge either retards the convergence speed of over-smoothing or relieves the information loss caused by it. More importantly, our DropEdge is a general skill that can be equipped with many other backbone models (e.g. GCN, ResGCN, GraphSAGE, and JKNet) for enhanced performance. Extensive experiments on several benchmarks verify that DropEdge consistently improves the performance on a variety of both shallow and deep GCNs. The effect of DropEdge on preventing over-smoothing is empirically visualized and validated as well. Codes will be made public upon the publication.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors propose a simple but effective strategy that aims to alleviate not only overfitting, but also feature degradation (oversmoothing) in deep graph convolutional networks (GCNs). Inspired by dropout in traditional MLPs and convnets, the authors clearly motivate their contribution in terms of alleviating both overfitting and oversmoothing, which are problems established both in previous literature as well as validated empirically by the authors. Ultimately, the authors provide solid empirical evidence that, while a bit heuristic, their method is effective at alleviating at least partially the issues of overfitting and oversmoothing. 
I vote weak-accept in light of convincing empirical results, some theoretical exploration of the method*s properties, but limited novelty. Pros: Simple, intuitive method Draws from existing literature relating to dropout-like methods Little computational overhead Solid experimental justification Some theoretical support for the method Cons: Method is somewhat heuristic Mitigates, rather than solves, the issue of oversmoothing Limited novelty (straightforward extension of dropout to graphs edges) Unclear why dropping edges is *valid* augmentation Followup-questions/areas for improving score: It would be nice to have a principled way of choosing the dropout proportion; 0.8 is chosen somewhat arbitrarily by the authors (presumably because it generally performed well). There is at least a nice interpretation of choosing 0.5 for the dropout proportion in regular dropout (maximum regularization). As brought up in the comments, edges to drop out to the graph*s properties is an interesting direction to explore. While the authors state that they would like to keep the method simple and general, the method is ultimately devised as an adaptation of dropout to graphs, so exploiting graph-specific properties seems reasonable and a potential avenue to further improving performance. p2: *First, DropEdge can be considered as a data augmentation technique* Why are these augmentations valid; why should the output of the network be invariant to these augmentations? I would like to see some justification for why the proposed random modification of the graph structure is valid; intuitively, it seems like it might make the learning problem impossible in some cases. Deeper analysis of the (more interesting, I think) layer-independent regime would be nice. (As a side-note, the name *layer-independent* for this regime is a bit confusing, as the edges dropped out *do* depend on the layer here, whereas in the *layer dependent* regime, edges dropped out do *not* depend on the layer). Comments: Figure 1 could probably be re-organized to better highlight the comparison between GCNs with and without DropEdge; consolidating the content into 2 figures instead of 4 might be more easily parsable. Adding figure-specific captions and defining the x axis would also be nice. Use *reduce* in place of *retard* p2 * With contending the scalability* improve phrasing p2 *By recent,* -> *Recently,* p2 *difficulty on* -> *difficulty in* p2 * deep networks lying* -> *deep networks lies* p3 *which is a generation of the conclusion* improve phrasing p3 * disconnected between* -> *disconnected from* p4 *adjacent matrix* -> *adjacency matrix* p4 *severer * -> *more severe* p5 *but has no help* -> *but is no help* p5 *no touch to the adjacency matrix* -> improve phrasing", "role": "assistant" } ]
[ "The authors propose a simple but effective strategy that aims to alleviate not only overfitting, but also feature degradation (oversmoothing) in deep graph convolutional networks (GCNs).", "Inspired by dropout in traditional MLPs and convnets, the authors clearly motivate their contribution in terms of alleviating both overfitting and oversmoothing, which are problems established both in previous literature as well as validated empirically by the authors.", "Ultimately, the authors provide solid empirical evidence that, while a bit heuristic, their method is effective at alleviating at least partially the issues of overfitting and oversmoothing.", "I vote weak-accept in light of convincing empirical results, some theoretical exploration of the method*s properties, but limited novelty.", "Pros: Simple, intuitive method Draws from existing literature relating to dropout-like methods Little computational overhead Solid experimental justification Some theoretical support for the method Cons: Method is somewhat heuristic Mitigates, rather than solves, the issue of oversmoothing Limited novelty (straightforward extension of dropout to graphs edges) Unclear why dropping edges is *valid* augmentation Followup-questions/areas for improving score: It would be nice to have a principled way of choosing the dropout proportion; 0.8 is chosen somewhat arbitrarily by the authors (presumably because it generally performed well).", "There is at least a nice interpretation of choosing 0.5 for the dropout proportion in regular dropout (maximum regularization).", "As brought up in the comments, edges to drop out to the graph*s properties is an interesting direction to explore.", "While the authors state that they would like to keep the method simple and general, the method is ultimately devised as an adaptation of dropout to graphs, so exploiting graph-specific properties seems reasonable and a potential avenue to further improving performance.", "p2: *First, DropEdge can be considered as a data augmentation technique* Why are these augmentations valid; why should the output of the network be invariant to these augmentations?", "I would like to see some justification for why the proposed random modification of the graph structure is valid; intuitively, it seems like it might make the learning problem impossible in some cases.", "Deeper analysis of the (more interesting, I think) layer-independent regime would be nice.", "(As a side-note, the name *layer-independent* for this regime is a bit confusing, as the edges dropped out *do* depend on the layer here, whereas in the *layer dependent* regime, edges dropped out do *not* depend on the layer).", "Comments: Figure 1 could probably be re-organized to better highlight the comparison between GCNs with and without DropEdge; consolidating the content into 2 figures instead of 4 might be more easily parsable.", "Adding figure-specific captions and defining the x axis would also be nice.", "Use *reduce* in place of *retard* p2 * With contending the scalability* improve phrasing p2 *By recent,* -> *Recently,* p2 *difficulty on* -> *difficulty in* p2 * deep networks lying* -> *deep networks lies* p3 *which is a generation of the conclusion* improve phrasing p3 * disconnected between* -> *disconnected from* p4 *adjacent matrix* -> *adjacency matrix* p4 *severer * -> *more severe* p5 *but has no help* -> *but is no help* p5 *no touch to the adjacency matrix* -> improve phrasing" ]
{ "criticism": 2, "example": 2, "importance_and_relevance": 4, "materials_and_methods": 9, "praise": 4, "presentation_and_reporting": 4, "results_and_discussion": 6, "suggestion_and_solution": 6, "total": 15 }
0.133333
0.133333
0.266667
0.6
0.266667
0.266667
0.4
0.4
15
1
1
1
1
1
1
1
1
2.466667
2.403537
0.06313
EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness Against Adversarial Attacks
Ensuring robustness of Deep Neural Networks (DNNs) is crucial to their adoption in safety-critical applications such as self-driving cars, drones, and healthcare. Notably, DNNs are vulnerable to adversarial attacks in which small input perturbations can produce catastrophic misclassifications. In this work, we propose EMPIR, ensembles of quantized DNN models with different numerical precisions, as a new approach to increase robustness against adversarial attacks. EMPIR is based on the observation that quantized neural networks often demonstrate much higher robustness to adversarial attacks than full precision networks, but at the cost of a substantial loss in accuracy on the original (unperturbed) inputs. EMPIR overcomes this limitation to achieve the “best of both worlds”, i.e., the higher unperturbed accuracies of the full precision models combined with the higher robustness of the low precision models, by composing them in an ensemble. Further, as low precision DNN models have significantly lower computational and storage requirements than full precision models, EMPIR models only incur modest compute and memory overheads compared to a single full-precision model (<25% in our evaluations). We evaluate EMPIR across a suite of 3 different DNN tasks (MNIST, CIFAR-10 and ImageNet) and under 4 different adversarial attacks. Our results indicate that EMPIR boosts the average adversarial accuracies by 43.6%, 15.3% and 11.9% for the DNN models trained on the MNIST, CIFAR-10 and ImageNet datasets respectively, when compared to single full-precision models, without sacrificing accuracy on the unperturbed inputs.
This paper suggests using an ensemble of both full-precision and low-bit precision models to defend against adversarial examples. From a methodological point of view, this idea is quite straightforward and not novel, since there are already several works that applied ensemble methods to improve the robustness of NNs, including Strauss et al. 2017 and (the following references are not included in the manuscript) "Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong" (Warren He, James Wei, Xinyun Chen, Nicholas Carlini, Dawn Song), "Ensemble Adversarial Training: Attacks and Defenses" (Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel), and "Improving Adversarial Robustness via Promoting Ensemble Diversity" (Tianyu Pang, Kun Xu, Chao Du, Ning Chen, Jun Zhu), ICML 2019. Though these methods only considered combining full-precision models, the idea is the same in essence, and letting the low-bit networks take part in the ensemble is quite natural and straightforward. So I don't think the methodological contribution of this paper is enough for publication. When checking the empirical results, the compared baselines miss a very commonly used and strong baseline, PGD adversarial training. Also, the performance of this ensemble is not significant. Considering the weakness of the paper in both methodology development and empirical justification, this work does not merit publication from my point of view.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness Against Adversarial Attacks\nEnsuring robustness of Deep Neural Networks (DNNs) is crucial to their adoption in safety-critical applications such as self-driving cars, drones, and healthcare. Notably, DNNs are vulnerable to adversarial attacks in which small input perturbations can produce catastrophic misclassifications. In this work, we propose EMPIR, ensembles of quantized DNN models with different numerical precisions, as a new approach to increase robustness against adversarial attacks. EMPIR is based on the observation that quantized neural networks often demonstrate much higher robustness to adversarial attacks than full precision networks, but at the cost of a substantial loss in accuracy on the original (unperturbed) inputs. EMPIR overcomes this limitation to achieve the “best of both worlds”, i.e., the higher unperturbed accuracies of the full precision models combined with the higher robustness of the low precision models, by composing them in an ensemble. Further, as low precision DNN models have significantly lower computational and storage requirements than full precision models, EMPIR models only incur modest compute and memory overheads compared to a single full-precision model (<25% in our evaluations). We evaluate EMPIR across a suite of 3 different DNN tasks (MNIST, CIFAR-10 and ImageNet) and under 4 different adversarial attacks. Our results indicate that EMPIR boosts the average adversarial accuracies by 43.6%, 15.3% and 11.9% for the DNN models trained on the MNIST, CIFAR-10 and ImageNet datasets respectively, when compared to single full-precision models, without sacrificing accuracy on the unperturbed inputs.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper suggests using ensemble of both full-precision and low-bits precision models to defense adversarial examples. 
From methodological point of view, this idea is quite straightforward and not novel, since there are already several works that applied ensemble methods to improve the robustness of NNs, including the Strauss et.al 2017 and (the following references are not included in the manuscript) *Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong Warren He, James Wei, Xinyun Chen, Nicholas Carlini, Dawn Song* *Ensemble Adversarial Training: Attacks and Defenses Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel* . *Improving Adversarial Robustness via Promoting Ensemble Diversity Tianyu Pang, Kun Xu, Chao Du, Ning Chen, Jun Zhu * ICML 2019 Though these methods only considered combining full-precision models, the idea is the same in essence and let the low-bits networks involve into the ensemble is quite natural and straightforward. So I don*t think the methodology contribution of this paper is enough for publication. When checking the empirical results, the compared baselines miss a very common-used and strong baseline PGD adversarial training. And also the performance of this ensemble is not significant. Considering the weakness of the paper both in methodology development and empirical justification, this work does not merit publication from my point of view.", "role": "assistant" } ]
[ "This paper suggests using ensemble of both full-precision and low-bits precision models to defense adversarial examples.", "From methodological point of view, this idea is quite straightforward and not novel, since there are already several works that applied ensemble methods to improve the robustness of NNs, including the Strauss et.al 2017 and (the following references are not included in the manuscript) *Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong Warren He, James Wei, Xinyun Chen, Nicholas Carlini, Dawn Song* *Ensemble Adversarial Training: Attacks and Defenses Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel* .", "*Improving Adversarial Robustness via Promoting Ensemble Diversity Tianyu Pang, Kun Xu, Chao Du, Ning Chen, Jun Zhu * ICML 2019 Though these methods only considered combining full-precision models, the idea is the same in essence and let the low-bits networks involve into the ensemble is quite natural and straightforward.", "So I don*t think the methodology contribution of this paper is enough for publication.", "When checking the empirical results, the compared baselines miss a very common-used and strong baseline PGD adversarial training.", "And also the performance of this ensemble is not significant.", "Considering the weakness of the paper both in methodology development and empirical justification, this work does not merit publication from my point of view." ]
{ "criticism": 4, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 6, "praise": 1, "presentation_and_reporting": 0, "results_and_discussion": 1, "suggestion_and_solution": 0, "total": 7 }
0.571429
0
0.428571
0.857143
0.142857
0
0.142857
0
7
1
0
1
1
1
0
1
0
2.142857
1.57469
0.568167
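Note (editorial): the review_sentences column looks like a naive punctuation-based split of review (entries such as "1." and "(2013) adapted ..." in the first record suggest no abbreviation handling), and criteria_count["total"] matches the number of entries in every record shown here. The splitter below is purely an assumption about how the column could have been produced.

```python
import re

# Rough sketch of a naive sentence splitter consistent with review_sentences.
def split_review(review: str) -> list:
    # Split after ., ! or ? followed by whitespace; no abbreviation handling,
    # which matches the naive splits visible in the records above.
    parts = re.split(r"(?<=[.!?])\s+", review.strip())
    return [p for p in parts if p]

# For the records shown, criteria_count["total"] equals len(review_sentences);
# a splitter like this approximates that segmentation.
```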
Quantifying Point-Prediction Uncertainty in Neural Networks via Residual Estimation with an I/O Kernel
Neural Networks (NNs) have been extensively used for a wide spectrum of real-world regression tasks, where the goal is to predict a numerical outcome such as revenue, effectiveness, or a quantitative result. In many such tasks, the point prediction is not enough: the uncertainty (i.e. risk or confidence) of that prediction must also be estimated. Standard NNs, which are most often used in such tasks, do not provide uncertainty information. Existing approaches address this issue by combining Bayesian models with NNs, but these models are hard to implement, more expensive to train, and usually do not predict as accurately as standard NNs. In this paper, a new framework (RIO) is developed that makes it possible to estimate uncertainty in any pretrained standard NN. The behavior of the NN is captured by modeling its prediction residuals with a Gaussian Process, whose kernel includes both the NN's input and its output. The framework is justified theoretically and evaluated in twelve real-world datasets, where it is found to (1) provide reliable estimates of uncertainty, (2) reduce the error of the point predictions, and (3) scale well to large datasets. Given that RIO can be applied to any standard NN without modifications to model architecture or training pipeline, it provides an important ingredient for building real-world NN applications.
This paper solves an interesting scientific and applied problem: can we construct an algorithm to predict uncertainties without re-training/modifying existing neural network training algos? The authors propose a novel technique (called RIO) which leverages an existing neural network but uses both the input as well as the output of the neural net as an input to a GP which regresses on the residual error of the neural network. The authors describe the theoretical foundations as well as show empirical results on multiple datasets. My thoughts on the paper:
- The paper is well written and from section 2.1 it is clear how one could reproduce their method.
- The theoretical section 2.2 feels a bit rushed; I think it would be worth sharing the high-level intuition behind some of the theory first before going into the details.
- Section 2.4 could be more explicit about what "large scale" means. In other words, from a practical point of view, the method is only limited by approximate inference for Gaussian processes. Anno 2019 this is ...
- The empirical section is particularly strong and contains a wide variety of experiments with detailed analysis.
As a result, I think this is a good piece of scientific work that could be interesting to the wider community. Although I did not re-run the results, the authors do share full source code for their results.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nQuantifying Point-Prediction Uncertainty in Neural Networks via Residual Estimation with an I/O Kernel\nNeural Networks (NNs) have been extensively used for a wide spectrum of real-world regression tasks, where the goal is to predict a numerical outcome such as revenue, effectiveness, or a quantitative result. In many such tasks, the point prediction is not enough: the uncertainty (i.e. risk or confidence) of that prediction must also be estimated. Standard NNs, which are most often used in such tasks, do not provide uncertainty information. Existing approaches address this issue by combining Bayesian models with NNs, but these models are hard to implement, more expensive to train, and usually do not predict as accurately as standard NNs. In this paper, a new framework (RIO) is developed that makes it possible to estimate uncertainty in any pretrained standard NN. The behavior of the NN is captured by modeling its prediction residuals with a Gaussian Process, whose kernel includes both the NN*s input and its output. The framework is justified theoretically and evaluated in twelve real-world datasets, where it is found to (1) provide reliable estimates of uncertainty, (2) reduce the error of the point predictions, and (3) scale well to large datasets. Given that RIO can be applied to any standard NN without modifications to model architecture or training pipeline, it provides an important ingredient for building real-world NN applications.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper solves an interesting scientific and applied problem: can we construct an algorithm to predict uncertainties without re-training/modifying existing neural network training algos? The authors propose a novel technique (called RIO) which leverages existing neural network but use both the input as well as the output of the neural net as an input to a GP which regresses on the residual error of the neural network. 
The authors describe the theoretical foundations as well as show empirical results on multiple datasets. My thoughts on the paper: - The paper is well written and from section 2.1 it is clear how one could re-produce their method. - The theoretical section 2.2 feels a bit rushed, I think it would be worth sharing the high level intuition behind some of the theory first before going into the details. - Section 2.4 could be more explicit about what *large scale* means. I.o.w. from a practical point of view, the method is only limited by approximate inference for Gaussian processes. Anno 2019 this is ... - The empirical section is particularly strong and contains a wide variety of experiments with detailed analysis. As a result, I think this is a good piece of scientific work that could be interesting to the wider community. Although I did not re-run the results, the authors do share full source code for their results.", "role": "assistant" } ]
[ "This paper solves an interesting scientific and applied problem: can we construct an algorithm to predict uncertainties without re-training/modifying existing neural network training algos?", "The authors propose a novel technique (called RIO) which leverages existing neural network but use both the input as well as the output of the neural net as an input to a GP which regresses on the residual error of the neural network.", "The authors describe the theoretical foundations as well as show empirical results on multiple datasets.", "My thoughts on the paper: - The paper is well written and from section 2.1 it is clear how one could re-produce their method.", "- The theoretical section 2.2 feels a bit rushed, I think it would be worth sharing the high level intuition behind some of the theory first before going into the details.", "- Section 2.4 could be more explicit about what *large scale* means.", "I.o.w.", "from a practical point of view, the method is only limited by approximate inference for Gaussian processes.", "Anno 2019 this is ... - The empirical section is particularly strong and contains a wide variety of experiments with detailed analysis.", "As a result, I think this is a good piece of scientific work that could be interesting to the wider community.", "Although I did not re-run the results, the authors do share full source code for their results." ]
{ "criticism": 2, "example": 1, "importance_and_relevance": 3, "materials_and_methods": 6, "praise": 4, "presentation_and_reporting": 2, "results_and_discussion": 6, "suggestion_and_solution": 3, "total": 11 }
0.181818
0.090909
0.272727
0.545455
0.363636
0.181818
0.545455
0.272727
11
1
1
1
1
1
1
1
1
2.454545
2.391416
0.06313
Training Generative Adversarial Networks from Incomplete Observations using Factorised Discriminators
Generative adversarial networks (GANs) have shown great success in applications such as image generation and inpainting. However, they typically require large datasets, which are often not available, especially in the context of prediction tasks such as image segmentation that require labels. Therefore, methods such as the CycleGAN use more easily available unlabelled data, but do not offer a way to leverage additional labelled data for improved performance. To address this shortcoming, we show how to factorise the joint data distribution into a set of lower-dimensional distributions along with their dependencies. This allows splitting the discriminator in a GAN into multiple "sub-discriminators" that can be independently trained from incomplete observations. Their outputs can be combined to estimate the density ratio between the joint real and the generator distribution, which enables training generators as in the original GAN framework. We apply our method to image generation, image segmentation and audio source separation, and obtain improved performance over a standard GAN when additional incomplete training examples are available. For the Cityscapes segmentation task in particular, our method also improves accuracy by an absolute 13.6% over CycleGAN while using only 25 additional paired examples.
I found this paper very easy and clear to follow - the authors present what I believe to be an elegant approach to training a GAN in the presence of missing data or where many marginal samples might be available but very few complete (e.g. paired) samples. The approach proceeds by identifying that the joint distributions (true and approximate) can be factored so as to yield a number of different density ratios which can then be estimated by specific discriminators; in particular, these include the appropriate marginal density ratios and then corresponding overall correction factors. As a caveat to the review I should point out that while I am familiar with GANs, they are not my main area of expertise so this should be taken into consideration - apologies if there is literature I have missed. Experiments: The authors provide a number of illustrative experiments that demonstrate the efficacy of the approach across a number of tasks. There are many differing GAN models but due to the nature of the problem I don't have a big issue with the majority of the comparisons being against a standard GAN since the tasks are suitably designed. For the paired MNIST experiment I found it hard to assess the qualitative results visually and am always concerned about the ad-hoc nature of Inception Distances - I find it difficult to attribute weight to them quantitatively since they are usually being used to assess things where they might suffer from a common error (e.g. they are both based on NNs). Also, I'm not fully on board with the dependency metric in (5) but then the authors also point out the same concerns. The other experiments I found more convincing. I appreciated having error bars on some of the plots to help assess significance - would it not be possible to put error bars on all plots? I found the additional extensions presented in the appendix to be interesting ideas as well and would be interested to see how the approach works with other GAN objectives as mentioned for future work. I am mostly very positive about this work - my main concern is really common to most GANs - all the analysis relies on the premise that the discriminators can be set up as good estimators for the density ratios. We know that this is not always the case since everything comes from samples, and if the capacities of each of the discriminators are not set appropriately then I would expect problems to occur - has this been explored by the authors? It would be no detriment to the work to include failure examples where the authors purposefully make use of inappropriate architectures for some of the discriminators to check for this. For example, there will be large imbalances in the number of training samples used for the different discriminators - how does this affect stability? Other notes:
- Whilst I understand the point about independent marginals in 2.4, I'm not sure I see the motivation as clearly, since it seems that the model is much more useful when there is dependent information, but maybe there's a use-case I'm not thinking of?
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTraining Generative Adversarial Networks from Incomplete Observations using Factorised Discriminators\nGenerative adversarial networks (GANs) have shown great success in applications such as image generation and inpainting. However, they typically require large datasets, which are often not available, especially in the context of prediction tasks such as image segmentation that require labels. Therefore, methods such as the CycleGAN use more easily available unlabelled data, but do not offer a way to leverage additional labelled data for improved performance. To address this shortcoming, we show how to factorise the joint data distribution into a set of lower-dimensional distributions along with their dependencies. This allows splitting the discriminator in a GAN into multiple *sub-discriminators* that can be independently trained from incomplete observations. Their outputs can be combined to estimate the density ratio between the joint real and the generator distribution, which enables training generators as in the original GAN framework. We apply our method to image generation, image segmentation and audio source separation, and obtain improved performance over a standard GAN when additional incomplete training examples are available. For the Cityscapes segmentation task in particular, our method also improves accuracy by an absolute 13.6% over CycleGAN while using only 25 additional paired examples.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nI found this paper very easy and clear to follow - the authors present, what I believe to be an elegant, approach to training a GAN in the presence of missing data or where many marginal samples might be available but very few complete (e.g. paired) samples. 
The approach proceeds by identifying that the joint distributions (true and approximate) can be factored so as to yield a number of different density ratios which can then be estimated by specific discriminators; in particular, these include the appropriate marginal density ratios and then corresponding overall correction factors. As a caveat to the review I should point out that while I am familiar with GANs, they are not my main area of expertise so this should be taken into consideration - apologies if there is literature I have missed. Experiments: The authors provide a number of illustrative experiments that demonstrate the efficacy of the approach across a number of tasks. There are many differing GAN models but due to the nature of the problem I don*t have a big issue with the majority of the comparisons being against a standard GAN since the tasks are suitably designed. For the paired MNIST experiment I found it hard to assess the qualitative results visually and am always concerned about the ad-hoc nature of Inception Distances - I find it difficult to attribute weight to them quantitatively since they are usually being used to assess things where they might suffer from a common error (e.g. they are both based on NNs). Also, I*m not fully on board with the dependency metric in (5) but then the authors also point out the same concerns. The other experiments I found more convincing. I appreciated having error bars on some of the plots to help assess significance - would it not be possible to put error bars on all plots? I found the additional extensions presented in the appendix to be interesting ideas as well and would be interested to see how the approach works with other GAN objectives as mentioned for future work. I am mostly very positive about this work - my main concern is really common to most GANs - all the analysis relies on the premise that the discriminators can be setup as good estimators for the density ratios. We know that this is not always the case since everything comes from samples and if the capacities of each of the discriminators are not set appropriately then I would expect problems to occur - has this been explored by the authors? It would be no detriment to the work to include failure examples where the authors purposefully make use of inappropriate architectures for some of the discriminators to check for this? For example, there will be large imbalances in the number of training samples used for the different discriminators - how does this affect stability? Other notes: - Whilst I understand the point about independent marginals in 2.4 I*m not sure I see the motivation as clearly since it seems that the model is much more useful when there is dependent information but maybe there*s a use-case I*m not thinking of?", "role": "assistant" } ]
[ "I found this paper very easy and clear to follow - the authors present, what I believe to be an elegant, approach to training a GAN in the presence of missing data or where many marginal samples might be available but very few complete (e.g.", "paired) samples.", "The approach proceeds by identifying that the joint distributions (true and approximate) can be factored so as to yield a number of different density ratios which can then be estimated by specific discriminators; in particular, these include the appropriate marginal density ratios and then corresponding overall correction factors.", "As a caveat to the review I should point out that while I am familiar with GANs, they are not my main area of expertise so this should be taken into consideration - apologies if there is literature I have missed.", "Experiments: The authors provide a number of illustrative experiments that demonstrate the efficacy of the approach across a number of tasks.", "There are many differing GAN models but due to the nature of the problem I don*t have a big issue with the majority of the comparisons being against a standard GAN since the tasks are suitably designed.", "For the paired MNIST experiment I found it hard to assess the qualitative results visually and am always concerned about the ad-hoc nature of Inception Distances - I find it difficult to attribute weight to them quantitatively since they are usually being used to assess things where they might suffer from a common error (e.g.", "they are both based on NNs).", "Also, I*m not fully on board with the dependency metric in (5) but then the authors also point out the same concerns.", "The other experiments I found more convincing.", "I appreciated having error bars on some of the plots to help assess significance - would it not be possible to put error bars on all plots?", "I found the additional extensions presented in the appendix to be interesting ideas as well and would be interested to see how the approach works with other GAN objectives as mentioned for future work.", "I am mostly very positive about this work - my main concern is really common to most GANs - all the analysis relies on the premise that the discriminators can be setup as good estimators for the density ratios.", "We know that this is not always the case since everything comes from samples and if the capacities of each of the discriminators are not set appropriately then I would expect problems to occur - has this been explored by the authors?", "It would be no detriment to the work to include failure examples where the authors purposefully make use of inappropriate architectures for some of the discriminators to check for this?", "For example, there will be large imbalances in the number of training samples used for the different discriminators - how does this affect stability?", "Other notes: - Whilst I understand the point about independent marginals in 2.4 I*m not sure I see the motivation as clearly since it seems that the model is much more useful when there is dependent information but maybe there*s a use-case I*m not thinking of?" ]
{ "criticism": 4, "example": 1, "importance_and_relevance": 4, "materials_and_methods": 15, "praise": 5, "presentation_and_reporting": 0, "results_and_discussion": 6, "suggestion_and_solution": 3, "total": 17 }
0.235294
0.058824
0.235294
0.882353
0.294118
0
0.352941
0.176471
17
1
1
1
1
1
0
1
1
2.235294
1.982775
0.252519
Decentralized Deep Learning with Arbitrary Communication Compression
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks, as well as for efficient scaling to large compute clusters. As current approaches are limited by network bandwidth, we propose the use of communication compression in the decentralized training context. We show that Choco-SGD achieves linear speedup in the number of workers for arbitrary high compression ratios on general non-convex functions, and non-IID training data. We demonstrate the practical performance of the algorithm in two key scenarios: the training of deep learning models (i) over decentralized user devices, connected by a peer-to-peer network and (ii) in a datacenter.
This paper studies the convergence of CHOCO-SGD for nonconvex objectives and shows its linear speedup, while the original paper on CHOCO-SGD only provides an analysis for convex objectives. The momentum version of CHOCO-SGD is also provided, although no theoretical analysis is presented. Extensive empirical results are presented in this paper, and the two use cases highlight some potential usage of the algorithm. However, there are some concerns which could be addressed. First, the authors only provide analysis of CHOCO-SGD, but the comparisons with baselines are based on their momentum versions. Moreover, some highly relevant baselines like DeepSqueeze are not cited and compared. Thus, the advantage of vanilla CHOCO-SGD over other alternatives is not convincing. Second, the core goals of decentralized optimization include minimization of the objective and consensus on the solution. However, no evaluation of the consensus is presented, and this leads to the following point. Third, it seems the authors report the average performance over all nodes using their individual models. If this is the case, the reported performance and comparison are not convincing. Without consensus, different nodes can have individual minimizers. In this case, the obtained average loss can be even smaller than the optimal loss. Under the current measurement, if we run SGD on each worker individually without any communication, we will still get pretty good performance, but this does not achieve the goal of decentralized optimization. Further clarification on this is needed. Overall, I think the technical contribution of this paper is unclear and the evaluation is not convincing.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDecentralized Deep Learning with Arbitrary Communication Compression\nDecentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks, as well as for efficient scaling to large compute clusters. As current approaches are limited by network bandwidth, we propose the use of communication compression in the decentralized training context. We show that Choco-SGD achieves linear speedup in the number of workers for arbitrary high compression ratios on general non-convex functions, and non-IID training data. We demonstrate the practical performance of the algorithm in two key scenarios: the training of deep learning models (i) over decentralized user devices, connected by a peer-to-peer network and (ii) in a datacenter.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper studies the convergence of CHOCO-SGD for nonconvex objectives and shows its linear speedup while the original paper of CHOCO-SGD only provides analysis for convex objectives. The momemtum version of CHOCO-SGD is also provided although no theoretical analysis is presented. Extensive empirical results are presented in this paper and the two use cases highlight some potential usage of the algorithm. However, there some concerns which could be addressed. First, the authors only provide analysis on CHOCO-SGD but the comparison with baselines are based on their momemtum versions. Moreover, some highly relevant baseline like DeepSqueeze are not cited and compared. Thus, the advantage of vanilla CHOCO-SGD over other alternatives is not convincing. Second, the cores of decentralized optimization include minimization of objective and consensus of the solution. However, no evaluation of the consensus is presented and this leads to the following point. Third, it seems the authors report the average performance over all nodes using their individual model. If this is the case, the reported perfromance and comparison are not convincing. 
Without consensus, different nodes can have individual minimizer. In this case, the obtained average loss can be even smaller than the optimal loss. Under current measurement, if we run SGD on each worker individually without any communication, we will still get pretty good performance but this does not achieve the goal of decentralized optimization. Further clarification on this is needed. Overall, I think the technical contribution of this paper is unclear and the evaluation is not convincing.", "role": "assistant" } ]
[ "This paper studies the convergence of CHOCO-SGD for nonconvex objectives and shows its linear speedup while the original paper of CHOCO-SGD only provides analysis for convex objectives.", "The momemtum version of CHOCO-SGD is also provided although no theoretical analysis is presented.", "Extensive empirical results are presented in this paper and the two use cases highlight some potential usage of the algorithm.", "However, there some concerns which could be addressed.", "First, the authors only provide analysis on CHOCO-SGD but the comparison with baselines are based on their momemtum versions.", "Moreover, some highly relevant baseline like DeepSqueeze are not cited and compared.", "Thus, the advantage of vanilla CHOCO-SGD over other alternatives is not convincing.", "Second, the cores of decentralized optimization include minimization of objective and consensus of the solution.", "However, no evaluation of the consensus is presented and this leads to the following point.", "Third, it seems the authors report the average performance over all nodes using their individual model.", "If this is the case, the reported perfromance and comparison are not convincing.", "Without consensus, different nodes can have individual minimizer.", "In this case, the obtained average loss can be even smaller than the optimal loss.", "Under current measurement, if we run SGD on each worker individually without any communication, we will still get pretty good performance but this does not achieve the goal of decentralized optimization.", "Further clarification on this is needed.", "Overall, I think the technical contribution of this paper is unclear and the evaluation is not convincing." ]
{ "criticism": 7, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 14, "praise": 0, "presentation_and_reporting": 3, "results_and_discussion": 6, "suggestion_and_solution": 3, "total": 16 }
0.4375
0
0.0625
0.875
0
0.1875
0.375
0.1875
16
1
0
1
1
0
1
1
1
2.125
1.982958
0.142042
Toward Evaluating Robustness of Deep Reinforcement Learning with Continuous Control
Deep reinforcement learning has achieved great success in many previously difficult reinforcement learning tasks, yet recent studies show that deep RL agents are also unavoidably susceptible to adversarial perturbations, similar to deep neural networks in classification tasks. Prior works mostly focus on model-free adversarial attacks and agents with discrete actions. In this work, we study the problem of continuous control agents in deep RL with adversarial attacks and propose the first two-step algorithm based on learned model dynamics. Extensive experiments on various MuJoCo domains (Cartpole, Fish, Walker, Humanoid) demonstrate that our proposed framework is much more effective and efficient than model-free based attack baselines in degrading agent performance as well as driving agents to unsafe states.
This paper presents an adversarial attack for perturbing the actions or observations of an agent acting near-optimally in an MDP so that the policy performs poorly. I think understanding the sensitivity of a policy to slight perturbations in the actions it takes or the observations that it receives is important for having robust learned policies and controllers. This paper presents an empirical step in the direction of showing that such attacks are possible, but in the context of the other adversarial attacks that are possible, this alone is not surprising and the work would be much stronger with other contributions. I think an exciting direction of new work could be to continue formalizing these vulnerabilities and looking at ways of adding robustness across many other domains.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nToward Evaluating Robustness of Deep Reinforcement Learning with Continuous Control\nDeep reinforcement learning has achieved great success in many previously difficult reinforcement learning tasks, yet recent studies show that deep RL agents are also unavoidably susceptible to adversarial perturbations, similar to deep neural networks in classification tasks. Prior works mostly focus on model-free adversarial attacks and agents with discrete actions. In this work, we study the problem of continuous control agents in deep RL with adversarial attacks and propose the first two-step algorithm based on learned model dynamics. Extensive experiments on various MuJoCo domains (Cartpole, Fish, Walker, Humanoid) demonstrate that our proposed framework is much more effective and efficient than model-free based attacks baselines in degrading agent performance as well as driving agents to unsafe states.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents an adversarial attack for perturbing the actions or observations of an agent acting near-optimally in an MDP so that the policy performs poorly. I think understanding the sensitivity of a policy to slight perturbations in the actions it takes or the observations that it receives is important for having robust learned policies and controllers. This paper presents an empirical step in the direction of showing that such attacks are possible, but in the context of the other adversarial attacks that are possible, this is not surprising alone and would be much stronger with other contributions. I think an exciting direction of new work could be to continue formalizing these vulnerabilities and looking at ways of adding robustness across many other domains.", "role": "assistant" } ]
[ "This paper presents an adversarial attack for perturbing the actions or observations of an agent acting near-optimally in an MDP so that the policy performs poorly.", "I think understanding the sensitivity of a policy to slight perturbations in the actions it takes or the observations that it receives is important for having robust learned policies and controllers.", "This paper presents an empirical step in the direction of showing that such attacks are possible, but in the context of the other adversarial attacks that are possible, this is not surprising alone and would be much stronger with other contributions.", "I think an exciting direction of new work could be to continue formalizing these vulnerabilities and looking at ways of adding robustness across many other domains." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 2, "praise": 1, "presentation_and_reporting": 0, "results_and_discussion": 1, "suggestion_and_solution": 1, "total": 4 }
0.25
0
0.75
0.5
0.25
0
0.25
0.25
4
1
0
1
1
1
0
1
1
2.25
0.971623
1.278377
Learning-Augmented Data Stream Algorithms
The data stream model is a fundamental model for processing massive data sets with limited memory and fast processing time. Recently Hsu et al. (2019) incorporated machine learning techniques into the data stream model in order to learn relevant patterns in the input data. Such techniques were encapsulated by training an oracle to predict item frequencies in the streaming model. In this paper we explore the full power of such an oracle, showing that it can be applied to a wide array of problems in data streams, sometimes resulting in the first optimal bounds for such problems. Namely, we apply the oracle to counting distinct elements on the difference of streams, estimating frequency moments, estimating cascaded aggregates, and estimating moments of geometric data streams. For the distinct elements problem, we obtain the first memory-optimal algorithms. For estimating the p-th frequency moment for 0 < p < 2 we obtain the first algorithms with optimal update time. For estimating the p-th frequency moment for p > 2 we obtain a quadratic saving in memory. We empirically validate our results, demonstrating also our improvements in practice.
The paper talks about calculating various statistics over data streams. This is a very popular topic and is very relevant in big data analysis. A lot of work has been done in this general area and on the problems that are discussed in the paper. The new idea in the paper is better streaming algorithms under the assumption that there is a “heavy hitters” oracle that returns data items that have a lot of representation in the stream. The authors give provably better algorithms for the distinct elements problem, the F_p moment problem (p > 2), and some more problems. These are important problems in streaming data analysis. They improve the space bounds, and interestingly in some cases the bounds are better than what is possible without the oracle assumption. This also shows the power of such an oracle. There are experimental results to demonstrate the efficiency of the algorithms. At a high level the work seems good and interesting for a large audience interested in streaming data analysis. I have not gone over the proofs in detail (much of which is in the appendix). - Even though the oracle results are interesting, to make the approach practical it may make sense to talk about a more realistic, weaker oracle where some of the queries may be incorrect. - It may even make sense to minimise the number of oracle calls, which can be thought of as a resource, and discuss the relationship between the number of oracle calls and other resources such as space.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning-Augmented Data Stream Algorithms\nThe data stream model is a fundamental model for processing massive data sets with limited memory and fast processing time. Recently Hsu et al. (2019) incorporated machine learning techniques into the data stream model in order to learn relevant patterns in the input data. Such techniques were encapsulated by training an oracle to predict item frequencies in the streaming model. In this paper we explore the full power of such an oracle, showing that it can be applied to a wide array of problems in data streams, sometimes resulting in the first optimal bounds for such problems. Namely, we apply the oracle to counting distinct elements on the difference of streams, estimating frequency moments, estimating cascaded aggregates, and estimating moments of geometric data streams. For the distinct elements problem, we obtain the first memory-optimal algorithms. For estimating the -th frequency moment for we obtain the first algorithms with optimal update time. For estimating the -the frequency moment for we obtain a quadratic saving in memory. We empirically validate our results, demonstrating also our improvements in practice.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper talks about calculating various statistics over data streams. This is a very popular topic and is very relevant in big data analysis. A lot of work has been done in this general area and on the problems that are discussed in the paper. The new idea in the paper is better streaming algorithms under the assumption that there is a “heavy hitters” oracle that returns data items that have a lot of representation in the stream. The authors give provably better algorithms for the distinct elements problem, F_p moment problem (p > 2), and some more problems. These are important problems in streaming data analysis. They improve the space bounds and interestingly in some cases the bounds are better than what is possible without the oracle assumption. This also shows the power of such an oracle. 
There are experimental results to demonstrate the efficiency of the algorithms. At a high level the work seems good and interesting for a large audience interested in streaming data analysis. I have not gone over the proofs in detail (much of which is in the appendix). - Even though oracle results are interesting, to make it practical it may make sense to talk about a more realistic, weaker oracle where some of the queries may be incorrect. - It may even make sense to minimise the number of oracle calls which can be thought of as a resource and discuss the relationship between number of oracle calls and other resources such as space.", "role": "assistant" } ]
[ "The paper talks about calculating various statistics over data streams.", "This is a very popular topic and is very relevant in big data analysis.", "A lot of work has been done in this general area and on the problems that are discussed in the paper.", "The new idea in the paper is better streaming algorithms under the assumption that there is a “heavy hitters” oracle that returns data items that have a lot of representation in the stream.", "The authors give provably better algorithms for the distinct elements problem, F_p moment problem (p > 2), and some more problems.", "These are important problems in streaming data analysis.", "They improve the space bounds and interestingly in some cases the bounds are better than what is possible without the oracle assumption.", "This also shows the power of such an oracle.", "There are experimental results to demonstrate the efficiency of the algorithms.", "At a high level the work seems good and interesting for a large audience interested in streaming data analysis.", "I have not gone over the proofs in detail (much of which is in the appendix).", "- Even though oracle results are interesting, to make it practical it may make sense to talk about a more realistic, weaker oracle where some of the queries may be incorrect.", "- It may even make sense to minimise the number of oracle calls which can be thought of as a resource and discuss the relationship between number of oracle calls and other resources such as space." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 7, "materials_and_methods": 7, "praise": 5, "presentation_and_reporting": 1, "results_and_discussion": 4, "suggestion_and_solution": 2, "total": 13 }
0.153846
0
0.538462
0.538462
0.384615
0.076923
0.307692
0.153846
13
1
0
1
1
1
1
1
1
2.153846
2.153846
0
Meta-learning curiosity algorithms
Exploration is a key component of successful reinforcement learning, but optimal approaches are computationally intractable, so researchers have focused on hand-designing mechanisms based on exploration bonuses and intrinsic reward, some inspired by curious behavior in natural systems. In this work, we propose a strategy for encoding curiosity algorithms as programs in a domain-specific language and searching, during a meta-learning phase, for algorithms that enable RL agents to perform well in new domains. Our rich language of programs, which can combine neural networks with other building blocks including nearest-neighbor modules and can choose its own loss functions, enables the expression of highly generalizable programs that perform well in domains as disparate as grid navigation with image input, acrobot, lunar lander, ant and hopper. To make this approach feasible, we develop several pruning techniques, including learning to predict a program's success based on its syntactic properties. We demonstrate the effectiveness of the approach empirically, finding curiosity strategies that are similar to those in published literature, as well as novel strategies that are competitive with them and generalize well.
Instead of hand-designing exploration bonuses and intrinsic reward, the paper proposes to view curiosity algorithms as programs described in a domain-specific language (DSL), then search for programs which allow RL agents to optimize the environment reward combined with the curiosity reward generated by the program. The search produces programs similar to algorithms proposed in the literature and also some strategies which generalize well. In order to make the search tractable, the authors develop a few criteria: 1) evaluate the program on relatively simple and short-horizon domains; 2) predict the performance of the program and rank them; 3) stop the agents if the learning curves do not look good after enough training steps. This is a very interesting idea, and it's partially inspired by the architecture search line of research. It would be great if the authors could provide more information about the "predicting algorithm performance" section. I find it very interesting and exciting to see programs like Figure 3. They are quite interpretable. The results in Table 1 are really exciting; they show that the searched programs can be generalized to other unseen tasks.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMeta-learning curiosity algorithms\nExploration is a key component of successful reinforcement learning, but optimal approaches are computationally intractable, so researchers have focused on hand-designing mechanisms based on exploration bonuses and intrinsic reward, some inspired by curious behavior in natural systems. In this work, we propose a strategy for encoding curiosity algorithms as programs in a domain-specific language and searching, during a meta-learning phase, for algorithms that enable RL agents to perform well in new domains. Our rich language of programs, which can combine neural networks with other building blocks including nearest-neighbor modules and can choose its own loss functions, enables the expression of highly generalizable programs that perform well in domains as disparate as grid navigation with image input, acrobot, lunar lander, ant and hopper. To make this approach feasible, we develop several pruning techniques, including learning to predict a program*s success based on its syntactic properties. We demonstrate the effectiveness of the approach empirically, finding curiosity strategies that are similar to those in published literature, as well as novel strategies that are competitive with them and generalize well.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nInstead of hand-designing exploration bonuses and intrinsic reward, the paper proposes to view the curiosity algorithms as programs described with domain-specific language (DSL), then search programs which allows RL agents to optimize the environment reward combined with the curiosity reward generated by the program. The search produces programs similar to algorithms proposed in literature and also some strategies which generalize well. 
In order to make the search tractable, the authors develop a few criteria: 1) evaluate the program on relatively simple and short-horizon domains; 2) predict the performance of the program and rank them; 3) stop the agents if the learning curves do not look good after long enough of training steps. This is a very interesting idea, and it*s partially inspired by the architecture search line of research. It would be great if the authors could provide more information about *predicting algorithm performance* section. I find it very interesting and exciting to see programs like Figure 3. They are quite interpretable. The results on Table 1 are really exciting, they show that the searched programs could be generalized into other unseen tasks.", "role": "assistant" } ]
[ "Instead of hand-designing exploration bonuses and intrinsic reward, the paper proposes to view the curiosity algorithms as programs described with domain-specific language (DSL), then search programs which allows RL agents to optimize the environment reward combined with the curiosity reward generated by the program.", "The search produces programs similar to algorithms proposed in literature and also some strategies which generalize well.", "In order to make the search tractable, the authors develop a few criteria: 1) evaluate the program on relatively simple and short-horizon domains; 2) predict the performance of the program and rank them; 3) stop the agents if the learning curves do not look good after long enough of training steps.", "This is a very interesting idea, and it*s partially inspired by the architecture search line of research.", "It would be great if the authors could provide more information about *predicting algorithm performance* section.", "I find it very interesting and exciting to see programs like Figure 3.", "They are quite interpretable.", "The results on Table 1 are really exciting, they show that the searched programs could be generalized into other unseen tasks." ]
{ "criticism": 0, "example": 1, "importance_and_relevance": 2, "materials_and_methods": 4, "praise": 4, "presentation_and_reporting": 3, "results_and_discussion": 2, "suggestion_and_solution": 2, "total": 8 }
0
0.125
0.25
0.5
0.5
0.375
0.25
0.25
8
0
1
1
1
1
1
1
1
2.25
1.855439
0.394561
Empirical Bayes Transductive Meta-Learning with Synthetic Gradients
We propose a meta-learning approach that learns from multiple tasks in a transductive setting, by leveraging unlabeled information in the query set to learn a more powerful meta-model. To develop our framework we revisit the empirical Bayes formulation for multi-task learning. The evidence lower bound of the marginal log-likelihood of empirical Bayes decomposes as a sum of local KL divergences between the variational posterior and the true posterior of each task. We derive a novel amortized variational inference that couples all the variational posteriors into a meta-model, which consists of a synthetic gradient network and an initialization network. The combination of local KL divergences and synthetic gradient network allows for backpropagating information from unlabeled data, thereby enabling transduction. Our results on the Mini-ImageNet and CIFAR-FS benchmarks for episodic few-shot classification significantly outperform previous state-of-the-art methods.
The authors argue for the importance of transduction in few-shot learning approaches, and augment the empirical Bayes objective established in previous work (estimating the hyperprior in a frequentist fashion) so as to take advantage of the unlabelled test data. Since the label is, by definition, unknown, a synthetic gradient is instead learned to parameterize the gradient of the variational posterior and tailor it for the transductive setting. The authors provide an analysis of the generalization ability of EB and term their method _synthetic information bottleneck_ (SIB) owing to parallels between its objective and that of the information bottleneck. SIB is tested on two standard few-shot image benchmarks in CIFAR-FS and MiniImageNet, exhibiting impressive performance and outperforming, in some cases by some margin, the baseline methods in the 1- and 5-shot settings alike, as well as on a synthetic dataset. The paper is technically sound and, for the most part, well-written, with the authors' motivations and explanation of the method conveyed quite straightforwardly. The basic premise of using an estimated gradient to fashion an inductive few-shot learning algorithm into a transductive one is a natural and evidently potent one. The paper does, however, at times feel disjointed and, to an extent, lacking in focus. The respective advantages of EB and the repurposing of synthetic gradients to enable the transductive approach are clear to me, yet while they might indeed be obviously complementary, what is not obvious is the necessity of the pairing: it seems there is nothing prohibiting the substitution of the gradient for a learned surrogate just as well under a deterministic meta-initialization framework. As such, despite sporting impressive results on the image datasets, I am not convinced about how truly novel the method is when viewed as a whole. On a similar note, while the theoretical analysis provided in section 4 was not unappreciated, and indeed it was interesting to see such a connection between EB and information theory rigorously made, it does feel a little out of place within the main text, especially since it is not specific to the transductive setting considered, nor even to the meta-learning setting more broadly. Rather, more experiments, per Appendix C, highlighting the importance of transduction and therein the synthetic gradients and their formulation would be welcome. Indeed, it is stated that an additional loss for training the synthetic gradient network to mimic the true gradient is unnecessary; while I agree with this conclusion, I likewise do not think it would hurt to explore use of the more explicit formulation. Considering the authors argue specifically for the importance of transduction in the zero-shot learning regime, I think it would be reasonable to expect experiments substantiating this, and the strength of their method in this regard, on non-synthetic datasets. As far as the toy problem is concerned, I am slightly confused as to the choice of baseline, both in regard to its training procedure and as to why this was deemed more suitable than one purposed for few-shot learning, so that we might go beyond simple verification to getting some initial sense of the performance of SIB. Moreover, it is not clear from the description as to how is implemented here. As it stands, Section 5, for me, offers little in the way of valuable insights.
The experiments section on the whole, results aside, feels somewhat rushed; the synthetic gradients being a potential limiting factor, for instance, feels "tacked on" and seems to warrant more than just a passing comment. Minor errors: - Page 7: the "to" in "let to be a Gaussian" is extraneous. - Page 8: "split", not "splitted". - Further down on the same page, "scale" in "scale each feature dimension" should be singular, and "Drop" is misspelled as "dropp". - Page 9: "We report results **with using** learning rate..." - _Equation 17_ includes the indicator function but is not defined within the context. EDIT: changed score
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEmpirical Bayes Transductive Meta-Learning with Synthetic Gradients\nWe propose a meta-learning approach that learns from multiple tasks in a transductive setting, by leveraging unlabeled information in the query set to learn a more powerful meta-model. To develop our framework we revisit the empirical Bayes formulation for multi-task learning. The evidence lower bound of the marginal log-likelihood of empirical Bayes decomposes as a sum of local KL divergences between the variational posterior and the true posterior of each task. We derive a novel amortized variational inference that couples all the variational posteriors into a meta-model, which consists of a synthetic gradient network and an initialization network. The combination of local KL divergences and synthetic gradient network allows for backpropagating information from unlabeled data, thereby enabling transduction. Our results on the Mini-ImageNet and CIFAR-FS benchmarks for episodic few-shot classification significantly outperform previous state-of-the-art methods.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors argue for the importance of transduction in few-shot learning approaches, and augment the empirical Bayes objective established in previous work (estimating the hyperpior in a frequentist fashion) so as to take advantage of the unlabelled test data. Since the label is, by definition, unknown, a synthetic gradient is instead learned to parameterize the gradient of the variational posterior and tailor it for the transductive setting. The authors provide an analysis of the generalization ability of EB and term their method _synthetic information bottleneck_ (SIB) owing to parallels between its objective of that of the information bottleneck. SIB is tested on two standard few-shot image benchmarks in CIFAR-FS and MiniImageNet, exhibiting impressive performance and outperforming, in some cases by some margin, the baseline methods, in the 1- and 5-shot settings alike, in addition to a synthetic dataset. 
The paper is technically sound and, for the most part, well-written, with the authors* motivations and explanation of the method conceived quite straightforwardly. The basic premise of using an estimated gradient to fashion an inductive few-shot learning algorithm into a transductive one is a natural and evidently potent one. The paper does, however, at times feel to be disjointed and, to an extent, lacking in focus. The respective advantages of EB and the repurposing of synthetic gradients to enable the transductive approach are clear to me, yet while they might indeed be obviously complementary, what is not obvious is the necessity of the pairing: it seems there is nothing prohibiting the substitution of the gradient for a learned surrogated just as well under a deterministic meta-initialization framework. As such, despite sporting impressive results on the image datasets, I am not convinced about how truly novel the method is when viewed as a whole. On a similar note, while the theoretical analysis provided in section 4 was not unappreciated, and indeed it was interesting to see such a connection between EB with information theory rigorously made, it does feel a little out of place within the main text, especially since it is not specific to the transductive setting considered, nor even to the meta-learning setting more broadly. Rather, more experiments, per Appendix C, highlighting the importance of transduction and therein the synthetic gradients and its formulation would be welcome. Indeed, it is stated that an additional loss for training the synthetic gradient network to mimic the true gradient is unnecessary; while I agree with this conclusion, I likewise do not think it would hurt to explore use of the more explicit formulation. Considering the authors argue specifically for the importance of transduction in the zero-shot learning regime, I think it would be reasonable to expect experiments substantiating this, and the strength of their method in this regard, on non-synthetic datasets. As far as the toy problem is concerned, I am slightly confused as to the choice of baseline, both in the regard to its training procedure and as to why this was deemed more suitable than one purposed for few-shot learning, so that we might go beyond simple verification to getting some initial sense for the performance of SIB. Moreover, it is not clear from the description as to how is implemented here. As it stands, Section 5, for me, offers little in the way of valuable insights. The experiments section on the whole, results aside, feels somewhat rushed; the synthetic gradients being a potential limiting factor for instance feels *tacked on* and seems to warrant more than just a passing comment. Minor errors - Page 7: the *to* in *let to be a Gaussian* is extraneous - Page 8: *split* not *splitted*. - Further down on the same page, *scale* in *scale each feature dimension* should be singular and Drop* is misspelled as *dropp*. - Page 9: *We report results **with using** learning rate...* - _Equation 17_ includes the indicator function but is not defined within the context. EDIT: changed score", "role": "assistant" } ]
[ "The authors argue for the importance of transduction in few-shot learning approaches, and augment the empirical Bayes objective established in previous work (estimating the hyperpior in a frequentist fashion) so as to take advantage of the unlabelled test data.", "Since the label is, by definition, unknown, a synthetic gradient is instead learned to parameterize the gradient of the variational posterior and tailor it for the transductive setting.", "The authors provide an analysis of the generalization ability of EB and term their method _synthetic information bottleneck_ (SIB) owing to parallels between its objective of that of the information bottleneck.", "SIB is tested on two standard few-shot image benchmarks in CIFAR-FS and MiniImageNet, exhibiting impressive performance and outperforming, in some cases by some margin, the baseline methods, in the 1- and 5-shot settings alike, in addition to a synthetic dataset.", "The paper is technically sound and, for the most part, well-written, with the authors* motivations and explanation of the method conceived quite straightforwardly.", "The basic premise of using an estimated gradient to fashion an inductive few-shot learning algorithm into a transductive one is a natural and evidently potent one.", "The paper does, however, at times feel to be disjointed and, to an extent, lacking in focus.", "The respective advantages of EB and the repurposing of synthetic gradients to enable the transductive approach are clear to me, yet while they might indeed be obviously complementary, what is not obvious is the necessity of the pairing: it seems there is nothing prohibiting the substitution of the gradient for a learned surrogated just as well under a deterministic meta-initialization framework.", "As such, despite sporting impressive results on the image datasets, I am not convinced about how truly novel the method is when viewed as a whole.", "On a similar note, while the theoretical analysis provided in section 4 was not unappreciated, and indeed it was interesting to see such a connection between EB with information theory rigorously made, it does feel a little out of place within the main text, especially since it is not specific to the transductive setting considered, nor even to the meta-learning setting more broadly.", "Rather, more experiments, per Appendix C, highlighting the importance of transduction and therein the synthetic gradients and its formulation would be welcome.", "Indeed, it is stated that an additional loss for training the synthetic gradient network to mimic the true gradient is unnecessary; while I agree with this conclusion, I likewise do not think it would hurt to explore use of the more explicit formulation.", "Considering the authors argue specifically for the importance of transduction in the zero-shot learning regime, I think it would be reasonable to expect experiments substantiating this, and the strength of their method in this regard, on non-synthetic datasets.", "As far as the toy problem is concerned, I am slightly confused as to the choice of baseline, both in the regard to its training procedure and as to why this was deemed more suitable than one purposed for few-shot learning, so that we might go beyond simple verification to getting some initial sense for the performance of SIB.", "Moreover, it is not clear from the description as to how is implemented here.", "As it stands, Section 5, for me, offers little in the way of valuable insights.", "The experiments section on the whole, results aside, feels 
somewhat rushed; the synthetic gradients being a potential limiting factor for instance feels *tacked on* and seems to warrant more than just a passing comment.", "Minor errors - Page 7: the *to* in *let to be a Gaussian* is extraneous - Page 8: *split* not *splitted*.", "- Further down on the same page, *scale* in *scale each feature dimension* should be singular and Drop* is misspelled as *dropp*.", "- Page 9: *We report results **with using** learning rate...* - _Equation 17_ includes the indicator function but is not defined within the context.", "EDIT: changed score" ]
{ "criticism": 12, "example": 3, "importance_and_relevance": 5, "materials_and_methods": 16, "praise": 4, "presentation_and_reporting": 5, "results_and_discussion": 6, "suggestion_and_solution": 5, "total": 21 }
0.571429
0.142857
0.238095
0.761905
0.190476
0.238095
0.285714
0.238095
21
1
1
1
1
1
1
1
1
2.666667
1.656591
1.010076
Understanding the Limitations of Conditional Generative Models
Class-conditional generative models hold promise to overcome the shortcomings of their discriminative counterparts. They are a natural choice to solve discriminative tasks in a robust manner as they jointly optimize for predictive performance and accurate modeling of the input distribution. In this work, we investigate robust classification with likelihood-based generative models from a theoretical and practical perspective to investigate if they can deliver on their promises. Our analysis focuses on a spectrum of robustness properties: (1) Detection of worst-case outliers in the form of adversarial examples; (2) Detection of average-case outliers in the form of ambiguous inputs and (3) Detection of incorrectly labeled in-distribution inputs. Our theoretical result reveals that it is impossible to guarantee detectability of adversarially-perturbed inputs even for near-optimal generative classifiers. Experimentally, we find that while we are able to train robust models for MNIST, robustness completely breaks down on CIFAR10. We relate this failure to various undesirable model properties that can be traced to the maximum likelihood training objective. Despite being a common choice in the literature, our results indicate that likelihood-based conditional generative models may be surprisingly ineffective for robust classification.
paper summary: The authors claim that likelihood-based generative models are not as robust to noise as the general consensus claims them to be. To prove this, the authors make use of adversarial, ambiguous, and incorrectly labeled in-distribution inputs. The authors address issues regarding robustness in near-perfect conditional generative models as well as assess the robustness of the likelihood objective. Pros of the paper: 1) The authors make well-motivated arguments about how a near-perfect generative model is also susceptible to attacks by providing examples that are adversarial and have high likelihood, yet are incorrectly labeled. 2) They also demonstrate how class-conditional generative models have poor discriminative power. Cons: 1) The experiments section is written very poorly. This section relies heavily on the supplement, making it hard to read due to the constant back and forth between the results and details of the experiments. 2) Experiments seem largely limited. Comparisons on image datasets such as MNIST and CIFAR10 alone are not convincing enough to establish generalizability of the proposed theory. For example, the hypothesis could completely fail on text-based generative models.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nUnderstanding the Limitations of Conditional Generative Models\nClass-conditional generative models hold promise to overcome the shortcomings of their discriminative counterparts. They are a natural choice to solve discriminative tasks in a robust manner as they jointly optimize for predictive performance and accurate modeling of the input distribution. In this work, we investigate robust classification with likelihood-based generative models from a theoretical and practical perspective to investigate if they can deliver on their promises. Our analysis focuses on a spectrum of robustness properties: (1) Detection of worst-case outliers in the form of adversarial examples; (2) Detection of average-case outliers in the form of ambiguous inputs and (3) Detection of incorrectly labeled in-distribution inputs. Our theoretical result reveals that it is impossible to guarantee detectability of adversarially-perturbed inputs even for near-optimal generative classifiers. Experimentally, we find that while we are able to train robust models for MNIST, robustness completely breaks down on CIFAR10. We relate this failure to various undesirable model properties that can be traced to the maximum likelihood training objective. Despite being a common choice in the literature, our results indicate that likelihood-based conditional generative models may are surprisingly ineffective for robust classification.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\npaper summary: The authors claim that likelihood based generative models are not as robust to noise as general consensus claims them to be. To prove this authors make use of adversarial, ambiguous and incorrectly labeled in distribution inputs. Authors address issues regarding robustness in near perfect conditional generative models as well as assess the robustness of the likelihood objective. 
Pros of the paper: 1) Authors make well motivated arguments about how a near perfect generative model is also susceptible to attacks by providing examples that are adversarial, and have high likelihood and yet are incorrectly labeled. 2) They also demonstrate how class conditional generative models have poor discriminative power. Cons: 1) The experiments section is written very poorly. This section relies heavily on the supplement making it hard to read due to the constant back and forth between the results and details of the experiments. 2) Experiments seem largely limited. Comparisons on image data sets such as MNIST and CIFAR10 alone are not convincing enough to establish generalizability of the proposed theory. For example, the hypothesis could completely fail on text based generative models.", "role": "assistant" } ]
[ "paper summary: The authors claim that likelihood based generative models are not as robust to noise as general consensus claims them to be.", "To prove this authors make use of adversarial, ambiguous and incorrectly labeled in distribution inputs.", "Authors address issues regarding robustness in near perfect conditional generative models as well as assess the robustness of the likelihood objective.", "Pros of the paper: 1) Authors make well motivated arguments about how a near perfect generative model is also susceptible to attacks by providing examples that are adversarial, and have high likelihood and yet are incorrectly labeled.", "2) They also demonstrate how class conditional generative models have poor discriminative power.", "Cons: 1) The experiments section is written very poorly.", "This section relies heavily on the supplement making it hard to read due to the constant back and forth between the results and details of the experiments.", "2) Experiments seem largely limited.", "Comparisons on image data sets such as MNIST and CIFAR10 alone are not convincing enough to establish generalizability of the proposed theory.", "For example, the hypothesis could completely fail on text based generative models." ]
{ "criticism": 6, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 10, "praise": 0, "presentation_and_reporting": 5, "results_and_discussion": 4, "suggestion_and_solution": 0, "total": 10 }
0.6
0
0
1
0
0.5
0.4
0
10
1
0
0
1
0
1
1
0
2.5
2.357958
0.142042
RaPP: Novelty Detection with Reconstruction along Projection Pathway
We propose RaPP, a new methodology for novelty detection by utilizing hidden space activation values obtained from a deep autoencoder. Precisely, RaPP compares input and its autoencoder reconstruction not only in the input space but also in the hidden spaces. We show that if we feed a reconstructed input to the same autoencoder again, its activated values in a hidden space are equivalent to the corresponding reconstruction in that hidden space given the original input. In order to aggregate the hidden space activation values, we propose two metrics, which enhance the novelty detection performance. Through extensive experiments using diverse datasets, we validate that RaPP improves novelty detection performances of autoencoder-based approaches. Besides, we show that RaPP outperforms recent novelty detection methods evaluated on popular benchmarks.
This paper proposes a novelty detection method by utilizing latent variables in an auto-encoder. Based on this, this paper proposes two metrics to quantify the novelty of the input. Their main contribution is the NAP metric based on SVD. Their method is empirically demonstrated on several benchmark datasets, and they compare their proposed metrics with other competing methods using AUROC, and the experimental results are encouraging. The metrics proposed in this paper are intuitive and interesting. The experiments shown in Table 2 are very convincing, and it could be better to extend Table 3 to include other datasets (STL, OTTO, etc.)
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRaPP: Novelty Detection with Reconstruction along Projection Pathway\nWe propose RaPP, a new methodology for novelty detection by utilizing hidden space activation values obtained from a deep autoencoder. Precisely, RaPP compares input and its autoencoder reconstruction not only in the input space but also in the hidden spaces. We show that if we feed a reconstructed input to the same autoencoder again, its activated values in a hidden space are equivalent to the corresponding reconstruction in that hidden space given the original input. In order to aggregate the hidden space activation values, we propose two metrics, which enhance the novelty detection performance. Through extensive experiments using diverse datasets, we validate that RaPP improves novelty detection performances of autoencoder-based approaches. Besides, we show that RaPP outperforms recent novelty detection methods evaluated on popular benchmarks.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a novelty detection method by utilizing latent variables in auto-encoder. Based on this, this paper proposes two metrics to quantifying the novelty of the input. Their main contribution is the NAP metric based on SVD. Their method is empirically demonstrated on several benchmark datasets, and they compare their proposed metrics with other competing methods using AUROC and experiments results are encouraging. The metrics proposed in this paper are intuitive and interesting. The experiments shown in Table2 is very convincing, and it could be better to extend Table3 to include other datasets (STL,OTTO, etc. )", "role": "assistant" } ]
[ "This paper proposes a novelty detection method by utilizing latent variables in auto-encoder.", "Based on this, this paper proposes two metrics to quantifying the novelty of the input.", "Their main contribution is the NAP metric based on SVD.", "Their method is empirically demonstrated on several benchmark datasets, and they compare their proposed metrics with other competing methods using AUROC and experiments results are encouraging.", "The metrics proposed in this paper are intuitive and interesting.", "The experiments shown in Table2 is very convincing, and it could be better to extend Table3 to include other datasets (STL,OTTO, etc. )" ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 6, "praise": 3, "presentation_and_reporting": 1, "results_and_discussion": 1, "suggestion_and_solution": 1, "total": 6 }
0
0
0.333333
1
0.5
0.166667
0.166667
0.166667
6
0
0
1
1
1
1
1
1
2.333333
1.559994
0.773339
Permutation Equivariant Models for Compositional Generalization in Language
Humans understand novel sentences by composing meanings and roles of core language components. In contrast, neural network models for natural language modeling fail when such compositional generalization is required. The main contribution of this paper is to hypothesize that language compositionality is a form of group-equivariance. Based on this hypothesis, we propose a set of tools for constructing equivariant sequence-to-sequence models. Throughout a variety of experiments on the SCAN tasks, we analyze the behavior of existing models under the lens of equivariance, and demonstrate that our equivariant architecture is able to achieve the type of compositional generalization required in human language understanding.
This work focuses on learning equivariant representations and functions over input/output words for the purposes of the SCAN task. Basically, the focus is on local equivariances (over vocabulary) such that the effect of replacing an action verb like RUN in the input with the verb JUMP causes a similar change in the output. However, effects requiring global equivariances, like learning the relationship between *twice* and *thrice*, or learning relationships between different kinds of conjunctions, are not handled in this work. For learning equivariant functions over vocabulary, group convolutions are used at each step over vocabulary items in both the sequence encoder and decoder. The results on the SCAN task are impressive for the verb-replacement experiments and improve over other relevant baselines. Also, improvement is shown on another word replacement task (*around right*), which requires learning corresponding substitutions in the output based on the word changes in the input. As expected, for experiments that require global equivariances or no equivariance (simple, length), the difference in performance is not very pronounced over other baselines. While this paper does show that modelling the effects of word substitution can be handled by the locally equivariant functions, it still cannot account for more complex generalization phenomena, which are likely to be much more prevalent, especially in natural language domains other than SCAN. Therefore, I think the applicability of the proposed equivariant architectures is rather limited, if interesting.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPermutation Equivariant Models for Compositional Generalization in Language\nHumans understand novel sentences by composing meanings and roles of core language components. In contrast, neural network models for natural language modeling fail when such compositional generalization is required. The main contribution of this paper is to hypothesize that language compositionality is a form of group-equivariance. Based on this hypothesis, we propose a set of tools for constructing equivariant sequence-to-sequence models. Throughout a variety of experiments on the SCAN tasks, we analyze the behavior of existing models under the lens of equivariance, and demonstrate that our equivariant architecture is able to achieve the type compositional generalization required in human language understanding.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis work focuses on learning equivariant representations and functions over input/output words for the purposes of SCAN task. Basically, the focus is on local equivariances (over vocabulary) such that the effect of replacing and action verb like RUN in the input with the verb JUMP causes a similar change in the output. However, effects requiring global equivariances like learning relationship between *twice* and *thrice*, or learning relationships between different kinds of conjunctions are not handled in this work. For learning equivariant functions over vocabulary, group convolutions are used at each step over vocabulary items in both the sequence encoder and decoder. The results on SCAN task are impressive for verb replacement based experiments and improve over other relevant baselines. Also, improvement is shown on another word replacement task (*around right*), which requires learning corresponding substitutions in output based on the word changes in the input. As expected, for experiments that require global equivariances or no equivariance (simple, length), the difference ion performance is not very pronounced over other baselines. 
While this paper does show that modelling effects of word substitution can be handled by the locally equivariant functions, it still cannot account for more complex generalization phenomena which are likely to be much more prevalent especially for domains dealing with natural language that are other than SCAN. Therefor, I think the applicability of the proposed equivariant architectures is rather limited if interesting.", "role": "assistant" } ]
[ "This work focuses on learning equivariant representations and functions over input/output words for the purposes of SCAN task.", "Basically, the focus is on local equivariances (over vocabulary) such that the effect of replacing and action verb like RUN in the input with the verb JUMP causes a similar change in the output.", "However, effects requiring global equivariances like learning relationship between *twice* and *thrice*, or learning relationships between different kinds of conjunctions are not handled in this work.", "For learning equivariant functions over vocabulary, group convolutions are used at each step over vocabulary items in both the sequence encoder and decoder.", "The results on SCAN task are impressive for verb replacement based experiments and improve over other relevant baselines.", "Also, improvement is shown on another word replacement task (*around right*), which requires learning corresponding substitutions in output based on the word changes in the input.", "As expected, for experiments that require global equivariances or no equivariance (simple, length), the difference ion performance is not very pronounced over other baselines.", "While this paper does show that modelling effects of word substitution can be handled by the locally equivariant functions, it still cannot account for more complex generalization phenomena which are likely to be much more prevalent especially for domains dealing with natural language that are other than SCAN.", "Therefor, I think the applicability of the proposed equivariant architectures is rather limited if interesting." ]
{ "criticism": 4, "example": 1, "importance_and_relevance": 1, "materials_and_methods": 5, "praise": 1, "presentation_and_reporting": 3, "results_and_discussion": 4, "suggestion_and_solution": 1, "total": 9 }
0.444444
0.111111
0.111111
0.555556
0.111111
0.333333
0.444444
0.111111
9
1
1
1
1
1
1
1
1
2.222222
1.969703
0.252519
Phase Transitions for the Information Bottleneck in Representation Learning
In the Information Bottleneck (IB) (Tishby et al., 2000), when tuning the relative strength between compression and prediction terms, how do the two terms behave, and what’s their relationship with the dataset and the learned representation? In this paper, we set out to answer this question by studying multiple phase transitions in the IB objective: IB_β[p(z|x)] = I(X;Z) − βI(Y;Z), where sudden jumps of dI(Y;Z)/dβ and prediction accuracy are observed with increasing β. We introduce a definition for IB phase transitions as a qualitative change of the IB loss landscape, and show that the transitions correspond to the onset of learning new classes. Using second-order calculus of variations, we derive a formula that provides the condition for IB phase transitions, and draw its connection with the Fisher information matrix for parameterized models. We provide two perspectives to understand the formula, revealing that each IB phase transition is finding a component of maximum (nonlinear) correlation between X and Y orthogonal to the learned representation, in close analogy with canonical-correlation analysis (CCA) in linear settings. Based on the theory, we present an algorithm for discovering phase transition points. Finally, we verify that our theory and algorithm accurately predict phase transitions in categorical datasets, predict the onset of learning new classes and class difficulty in MNIST, and predict prominent phase transitions in CIFAR10 experiments.
This paper studies the phase transition problem in the information bottleneck (IB) objective and derives a formula for IB phase transitions. Based on the theory developed in the paper, an algorithm is developed to find phase transition points. The interesting observation is that a phase transition can correspond to learning a new class, and this paper conjectures that in IB for classification the number of phase transitions is at most C-1, where C is the number of classes. This observation deserves to be further explored and may be a key to a deeper understanding of neural networks. The theory developed connects the IB objective, the dataset, and the representation, and thorough proofs are given. The experiments match the theoretical findings.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPhase Transitions for the Information Bottleneck in Representation Learning\nIn the Information Bottleneck (IB) (Tishby et al., 2000), when tuning the relative strength between compression and prediction terms, how do the two terms behave, and what’s their relationship with the dataset and the learned representation? In this paper, we set out to answer this question by studying multiple phase transitions in the IB objective: IB?[p(z|x)] = I(X;Z) ? ?I(Y ;Z), where sudden jumps of dI(Y ;Z)/d? and prediction accuracy are observed with increasing ?. We introduce a definition for IB phase transitions as a qualitative change of the IB loss landscape, and show that the transitions correspond to the onset of learning new classes. Using second-order calculus of variations, we derive a formula that provides the condition for IB phase transitions, and draw its connection with the Fisher information matrix for parameterized models. We provide two perspectives to understand the formula, revealing that each IB phase transition is finding a component of maximum (nonlinear) correlation between X and Y orthogonal to the learned representation, in close analogy with canonical-correlation analysis (CCA) in linear settings. Based on the theory, we present an algorithm for discovering phase transition points. Finally, we verify that our theory and algorithm accurately predict phase transitions in categorical datasets, predict the onset of learning new classes and class difficulty in MNIST, and predict prominent phase transitions in CIFAR10 experiments.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper studies the phase transition problem in the information bottleneck (IB) objective and derives a formula for IB phase transitions. Based on the theory developed in the paper, an algorithm is developed to find phase transition points. 
The interesting observation is phase transition can correspond to learning new class and this paper conjectures in IB for classification, the number of phase transitions is at most C-1, C the number of classes. This observation deserves to be further explored and may be a key to a deeper understanding of neural networks. The theory developed connects the IB objective, dataset and the representation, and thorough proofs are given. The experiments matches the theoretical findings.", "role": "assistant" } ]
[ "This paper studies the phase transition problem in the information bottleneck (IB) objective and derives a formula for IB phase transitions.", "Based on the theory developed in the paper, an algorithm is developed to find phase transition points.", "The interesting observation is phase transition can correspond to learning new class and this paper conjectures in IB for classification, the number of phase transitions is at most C-1, C the number of classes.", "This observation deserves to be further explored and may be a key to a deeper understanding of neural networks.", "The theory developed connects the IB objective, dataset and the representation, and thorough proofs are given.", "The experiments matches the theoretical findings." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 5, "praise": 3, "presentation_and_reporting": 0, "results_and_discussion": 3, "suggestion_and_solution": 1, "total": 6 }
0
0
0.166667
0.833333
0.5
0
0.5
0.166667
6
0
0
1
1
1
0
1
1
2.166667
1.393328
0.773339
Variational Template Machine for Data-to-Text Generation
How to generate descriptions from structured data organized in tables? Existing approaches using neural encoder-decoder models often suffer from lacking diversity. We claim that an open set of templates is crucial for enriching the phrase constructions and realizing varied generations. Learning such templates is prohibitive since it often requires a large paired <table,description>, which is seldom available. This paper explores the problem of automatically learning reusable *templates* from paired and non-paired data. We propose the variational template machine (VTM), a novel method to generate text descriptions from data tables. Our contributions include: a) we carefully devise a specific model architecture and losses to explicitly disentangle text template and semantic content information, in the latent spaces, and b) we utilize both small parallel data and large raw text without aligned tables to enrich the template learning. Experiments on datasets from a variety of different domains show that VTM is able to generate more diversely while keeping a good fluency and quality.
The paper is interesting and proposes a novel approach for addressing a problem that has so far received little attention. The proposed model is sound and appropriate, as it relies on state-of-the-art methodological arguments. The derivations are correct; this concerns both the model definition and the algorithmic derivations of model training and inference. The experimental evaluation is adequate: it compares to many popular approaches on several datasets; the outcomes are convincing. It would be good if the authors could provide an analysis of the computational costs of their methods, as well as of the considered competitors.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nVariational Template Machine for Data-to-Text Generation\nHow to generate descriptions from structured data organized in tables? Existing approaches using neural encoder-decoder models often suffer from lacking diversity. We claim that an open set of templates is crucial for enriching the phrase constructions and realizing varied generations.Learning such templates is prohibitive since it often requires a large paired <table,description>, which is seldom available. This paper explores the problem of automatically learning reusable *templates* from paired and non-paired data. We propose the variational template machine (VTM), a novel method to generate text descriptions from data tables. Our contributions include: a) we carefully devise a specific model architecture and losses to explicitly disentangle text template and semantic content information, in the latent spaces, and b) we utilize both small parallel data and large raw text without aligned tables to enrich the template learning. Experiments on datasets from a variety of different domains show that VTM is able generate more diversely while keeping a good fluency and quality.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper is interesting and proposes a novel approach for addressing a currently not largely considered problem. The proposed model is sound and appropriate, as it relies on state-of-the-art methodological arguments. The derivations are correct; this concerns both the model definition and the algorithmic derivations of model training and inference. The experimental evaluation is adequate: it compares to many popular approaches and on several datasets; the outcomes are convincing. It would be good if the authors could provide an analysis of the computational costs of their methods, as well as of the considered competitors.", "role": "assistant" } ]
[ "The paper is interesting and proposes a novel approach for addressing a currently not largely considered problem.", "The proposed model is sound and appropriate, as it relies on state-of-the-art methodological arguments.", "The derivations are correct; this concerns both the model definition and the algorithmic derivations of model training and inference.", "The experimental evaluation is adequate: it compares to many popular approaches and on several datasets; the outcomes are convincing.", "It would be good if the authors could provide an analysis of the computational costs of their methods, as well as of the considered competitors." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 5, "praise": 3, "presentation_and_reporting": 0, "results_and_discussion": 3, "suggestion_and_solution": 1, "total": 5 }
0
0
0.4
1
0.6
0
0.6
0.2
5
0
0
1
1
1
0
1
1
2.8
1.789924
1.010076
Expected Information Maximization: Using the I-Projection for Mixture Density Estimation
Modelling highly multi-modal data is a challenging problem in machine learning. Most algorithms are based on maximizing the likelihood, which corresponds to the M(oment)-projection of the data distribution to the model distribution. The M-Projection forces the model to average over modes that can not be represented by the model. In contrast, the I(information)-Projection ignores such modes in the data and concentrates on the modes the model can represent. Such behavior is appealing whenever we deal with highly multi-modal data where it is more important to model single modes correctly instead of covering all the modes. Despite this advantage, the I-projection is rarely used in practice due to the lack of algorithms that can efficiently optimize it based on data. In this work, we present a new algorithm called Expected Information Maximization (EIM) for computing the I-projection solely based on samples for general latent variable models, where we focus on Gaussian mixture models and Gaussian mixture of experts. Our approach applies a variational upper bound to the I-projection objective which decomposes the original objective into single objectives for each mixture component as well as for the coefficients, allowing an efficient optimization. Similar to GANs, our approach also employs discriminators but uses a more stable optimization procedure optimizing a tight upper bound. We show that our algorithm is much more effective in computing the I-projection than recent GAN approaches and we illustrate the effectiveness of our approach for modelling multi-modal behavior on two pedestrian and traffic prediction datasets.
This paper proposes EIM, an analog to EM, but to perform the I-projection (i.e. reverse-KL) instead of the usual M-projection for EM. The motivation is that the reverse-KL is mode-seeking, in contrast to the forward-KL, which is mode-covering. The authors argue that in the case that the model is mis-specified, the I-projection is sometimes desired so as to avoid putting mass on very unlikely regions of the space under the target p. The authors propose an iterative procedure that alternates between estimating likelihood ratios and updating the proposal distribution by minimizing an upper bound on the reverse-KL. The derivations seem correct. There are some experiments, mainly in the robotics domain. As the authors point out, likelihood shouldn't be the right metric since you are now minimizing the reverse-KL---I would have liked the authors to spend some more time on the right way to evaluate---and actually use that new metric. Finally, there has been a plethora of work on different objectives and distances between distributions, as well as a zoo of lower/upper bounds on how to evaluate them---it would be interesting to have more connections to prior work. [Pros] - clearly written - clear motivation - correct derivations - interesting algorithm [Cons] - experiments are a little weak (and focus on a single domain) - would have liked to see an explicit algorithm for the optimization procedure - small lack of clarity in the presentation of Section 4.1---notation q_t is not introduced for example - more discussion about the evaluation metric - linking it more to prior work
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nExpected Information Maximization: Using the I-Projection for Mixture Density Estimation\nModelling highly multi-modal data is a challenging problem in machine learning. Most algorithms are based on maximizing the likelihood, which corresponds to the M(oment)-projection of the data distribution to the model distribution. The M-Projection forces the model to average over modes that can not be represented by the model. In contrast, the I(information)-Projection ignores such modes in the data and concentrates on the modes the model can represent. Such behavior is appealing whenever we deal with highly multi-modal data where it is more important to model single modes correctly instead of covering all the modes. Despite this advantage, the I-projection is rarely used in practise due to the lack of algorithms that can efficiently optimize it based on data. In this work, we present a new algorithm called Expected Information Maximization (EIM) for computing the I-projection solely based on samples for general latent variable models, where we focus on Gaussian mixtures models and Gaussian mixture of experts. Our approach applies a variational upper bound to the I-projection objective which decomposes the original objective into single objectives for each mixture component as well as for the coefficients, allowing an efficient optimization. Similar to GANs, our approach also employs discriminators but uses a more stable optimization procedures optimizing a tight upper bound. We show that our algorithm is much more effective in computing the I-projection than recent GAN approaches and we illustrate the effectiveness of our approach for modelling multi-modal behavior on two pedestrian and traffic prediction datasets.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper propose EIM an analog to EM but to perform the I-projection (i.e. reverse-KL) instead of the usual M-projection for EM. The motivation is that the reverse-KL is mode-seeking in contrast to the forward-KL which is mode-covering. 
The authors argue that in the case that the model is mis-specified, I-projection is sometimes desired as to avoid putting mass on very unlikely regions of the space under the target p. The authors propose an iterative procedure that alternates between estimating likelihood ratios and proposal distribution by minimizing an upper bound on the reverse-KL. The derivations seem correct. There are some experiments, majoritarily in the robotics domain. As the author point out, likelihood shouldn*t be the right metric since you are now minimizing the reverse-KL---I would have liked the authors to spend some more time on the right way to evaluate---and actually use that new metric. Finally, there has been plethora of work on different objectives and distance between distributions as well as a zoo of lower/upper bounds on how to evaluate them---it would be interesting to have more connections to prior work. [Pros] - clearly written - clear motivation - correct derivations - interesting algorithm [Cons] - experiments are a little weak (and focus on a single domain) - would have liked to see an explicit algorithm for the optimization procedure - small lack of clarity in the presentation of Section 4.1---notation q_t is not introduced for example - more discussion about the evaluation metric - linking it more to prior work", "role": "assistant" } ]
[ "This paper propose EIM an analog to EM but to perform the I-projection (i.e.", "reverse-KL) instead of the usual M-projection for EM.", "The motivation is that the reverse-KL is mode-seeking in contrast to the forward-KL which is mode-covering.", "The authors argue that in the case that the model is mis-specified, I-projection is sometimes desired as to avoid putting mass on very unlikely regions of the space under the target p. The authors propose an iterative procedure that alternates between estimating likelihood ratios and proposal distribution by minimizing an upper bound on the reverse-KL.", "The derivations seem correct.", "There are some experiments, majoritarily in the robotics domain.", "As the author point out, likelihood shouldn*t be the right metric since you are now minimizing the reverse-KL---I would have liked the authors to spend some more time on the right way to evaluate---and actually use that new metric.", "Finally, there has been plethora of work on different objectives and distance between distributions as well as a zoo of lower/upper bounds on how to evaluate them---it would be interesting to have more connections to prior work.", "[Pros] - clearly written - clear motivation - correct derivations - interesting algorithm [Cons] - experiments are a little weak (and focus on a single domain) - would have liked to see an explicit algorithm for the optimization procedure - small lack of clarity in the presentation of Section 4.1---notation q_t is not introduced for example - more discussion about the evaluation metric - linking it more to prior work" ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 6, "praise": 1, "presentation_and_reporting": 3, "results_and_discussion": 4, "suggestion_and_solution": 4, "total": 9 }
0.111111
0
0.111111
0.666667
0.111111
0.333333
0.444444
0.444444
9
1
0
1
1
1
1
1
1
2.222222
1.969703
0.252519
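Note on the numeric block that closes each record: the fractions, has_* flags, and reward values appear to be derived from the criteria_count field. A minimal sketch of the apparent relationships, reconstructed from the values of this record (the field names and the length-penalty formula are assumptions; only the count-to-fraction, has_* and reward_value relations can be checked directly against the numbers shown here):

```python
# Sketch (assumed schema): how the per-record numbers appear to follow from criteria_count.
counts = {"criticism": 1, "example": 0, "importance_and_relevance": 1,
          "materials_and_methods": 6, "praise": 1, "presentation_and_reporting": 3,
          "results_and_discussion": 4, "suggestion_and_solution": 4}
total = 9  # number of review sentences in review_sentences

fractions = {k: v / total for k, v in counts.items()}         # e.g. 1/9 = 0.111111
has_flags = {f"has_{k}": int(v > 0) for k, v in counts.items()}
reward_value = sum(counts.values()) / total                    # 20/9 = 2.222222

# The length adjustment is only observable here as a difference; how the penalty
# itself is computed (presumably from review length) is not given in this excerpt.
length_penalty = 0.252519
reward_value_length_adjusted = reward_value - length_penalty   # 1.969703
```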
Transferring Optimality Across Data Distributions via Homotopy Methods
Homotopy methods, also known as continuation methods, are a powerful mathematical tool to efficiently solve various problems in numerical analysis, including complex non-convex optimization problems where no or only little prior knowledge regarding the localization of the solutions is available. In this work, we propose a novel homotopy-based numerical method that can be used to transfer knowledge regarding the localization of an optimum across different task distributions in deep learning applications. We validate the proposed methodology with some empirical evaluations in the regression and classification scenarios, where it shows that superior numerical performance can be achieved in popular deep learning benchmarks, i.e. FashionMNIST, CIFAR-10, and draw connections with the widely used fine-tuning heuristic. In addition, we give more insights on the properties of a general homotopy method when used in combination with Stochastic Gradient Descent by conducting a theoretical analysis in a simplified setting.
Based on homotopy, the paper describes a more rigorous approach to transfer learning than the so-called ‘fine-tuning’ heuristic. Progress in the direction of more principled approaches for transfer learning would be tremendously impactful, since one of the core promises of deep learning is the learning of features, which can be used in different downstream tasks. Essentially (if this reviewer understood this correctly), the idea behind this paper works by interpolating between the original task of interest and a potentially easier-to-optimize surrogate task. Overall, this reviewer found the concept simple and elegant, well motivated, and also well introduced. However, since this reviewer does not have a formal background in mathematics, they cannot assess the soundness of the proofs. The paper tests the hypothesis by a simple function approximation regression task, and a classification task to learn to transfer from MNIST to Fashion-MNIST and MNIST to CIFAR, with promising results. One might argue that a more thorough evaluation would have been desirable, since the claims made by the paper are quite general, and it would have been in the authors’ best interest to present more thorough evidence that their concept works on a wider scale of problems, ideally on an NLP task, given the current hype on pre-training with Transformer-based models. Previous work & citations: I would recommend citing Schmidhuber 1987 (Evolutionary principles in self-referential learning) and Hochreiter et al. 2001 (Learning to Learn with gradient descent) in the context of meta learning. It would be nice to cite Klambauer et al. (Self-Normalizing Networks) in the context of speeding up deep neural network training. The VGG paper is currently cited by the first names of the authors, not their last names; I am not sure if this was intended.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTransferring Optimality Across Data Distributions via Homotopy Methods\nHomotopy methods, also known as continuation methods, are a powerful mathematical tool to efficiently solve various problems in numerical analysis, including complex non-convex optimization problems where no or only little prior knowledge regarding the localization of the solutions is available. In this work, we propose a novel homotopy-based numerical method that can be used to transfer knowledge regarding the localization of an optimum across different task distributions in deep learning applications. We validate the proposed methodology with some empirical evaluations in the regression and classification scenarios, where it shows that superior numerical performance can be achieved in popular deep learning benchmarks, i.e. FashionMNIST, CIFAR-10, and draw connections with the widely used fine-tuning heuristic. In addition, we give more insights on the properties of a general homotopy method when used in combination with Stochastic Gradient Descent by conducting a theoretical analysis in a simplified setting.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nBased on homotopy,, the paper describes a more rigorous approach to transfer learning than the so called ‘fine-tuning’ heuristic. Progress in the direction of more principled approaches for transfer learning would be tremendously impactful, since one of the core promises of deep learning is the learning of features, which can be used in different downstream tasks. Essentially, (if this reviewer understood this correctly) the idea behind this paper works by interpolation between the original task of interest and a potentially easier to optimize surrogate task. Overall, this reviewer found the concept simple and elegant, and well motivated, and also well introduced. However, since this reviewer does not have a formal background in mathematics, they cannot assess the soundness of the proofs. 
The paper tests the hypothesis by a simple function approximation regression task, and a classification task to learn to transfer from MNIST to fashion MNIST and MNIST to CIFAR, with promising results. One might argue that a more thorough evaluation would have been desirable, since the claims made by the paper are quite general, and it would have been in the authors’ best interest to present more thorough evidence that their concept works on wider scale of problems, ideally on an NLP task, given the current hype on pre-training with Transformer-based models. Previous work & citations: I would recommend to cite Schmidhuber 1987 (Evolutionary principles in self-referential learning) and Hochreiter et al 2001 (Learning to Learn with gradient descent) in the context of Meta learning. It would be nice to cite Klambauer et al (Self normalizing Networks) in the context of speeding up deep neural network training. The citations of the VGG paper is currently referenced by first names of the authors, not their last names, I am not sure if this was intended.", "role": "assistant" } ]
[ "Based on homotopy,, the paper describes a more rigorous approach to transfer learning than the so called ‘fine-tuning’ heuristic.", "Progress in the direction of more principled approaches for transfer learning would be tremendously impactful, since one of the core promises of deep learning is the learning of features, which can be used in different downstream tasks.", "Essentially, (if this reviewer understood this correctly) the idea behind this paper works by interpolation between the original task of interest and a potentially easier to optimize surrogate task.", "Overall, this reviewer found the concept simple and elegant, and well motivated, and also well introduced.", "However, since this reviewer does not have a formal background in mathematics, they cannot assess the soundness of the proofs.", "The paper tests the hypothesis by a simple function approximation regression task, and a classification task to learn to transfer from MNIST to fashion MNIST and MNIST to CIFAR, with promising results.", "One might argue that a more thorough evaluation would have been desirable, since the claims made by the paper are quite general, and it would have been in the authors’ best interest to present more thorough evidence that their concept works on wider scale of problems, ideally on an NLP task, given the current hype on pre-training with Transformer-based models.", "Previous work & citations: I would recommend to cite Schmidhuber 1987 (Evolutionary principles in self-referential learning) and Hochreiter et al 2001 (Learning to Learn with gradient descent) in the context of Meta learning.", "It would be nice to cite Klambauer et al (Self normalizing Networks) in the context of speeding up deep neural network training.", "The citations of the VGG paper is currently referenced by first names of the authors, not their last names, I am not sure if this was intended." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 7, "materials_and_methods": 5, "praise": 2, "presentation_and_reporting": 3, "results_and_discussion": 1, "suggestion_and_solution": 3, "total": 10 }
0.2
0
0.7
0.5
0.2
0.3
0.1
0.3
10
1
0
1
1
1
1
1
1
2.3
2.157958
0.142042
Transferring Optimality Across Data Distributions via Homotopy Methods
Homotopy methods, also known as continuation methods, are a powerful mathematical tool to efficiently solve various problems in numerical analysis, including complex non-convex optimization problems where no or only little prior knowledge regarding the localization of the solutions is available. In this work, we propose a novel homotopy-based numerical method that can be used to transfer knowledge regarding the localization of an optimum across different task distributions in deep learning applications. We validate the proposed methodology with some empirical evaluations in the regression and classification scenarios, where it shows that superior numerical performance can be achieved in popular deep learning benchmarks, i.e. FashionMNIST, CIFAR-10, and draw connections with the widely used fine-tuning heuristic. In addition, we give more insights on the properties of a general homotopy method when used in combination with Stochastic Gradient Descent by conducting a theoretical analysis in a simplified setting.
Contribution This paper proposes an algorithm for transferring knowledge from easy-to-solve to complex tasks, or from already solved tasks to new ones. It relies on homotopy functions and sequentially solves a sequence of optimization problems where the task distribution is gradually deformed from a source task to the target task. Theoretical guarantees are provided and proven in a strongly convex setting. The main results from the theory show that the distance between the final solution and its optimum is bounded relative to the distance of the initial source solution to its optimum. So a near-optimal solution for the source task will lead to a near-optimal solution for the target task. Regression and classification experiments show competitive results compared to random and warm-start initialization schemes. Clarity Overall, the paper is well written, well motivated and well structured. The technical content is also very clear and excellent. Minor point: it seems that there is a notation error in proposition G.1 and its proof (i instead of i+1). Novelty The novelty in this work seems to be the application of homotopy methods to the transfer learning setting. The mathematical guarantees are also new and may even offer new ways to interpret fine-tuning methods that have been so successful in recent literature. However, given the non-convexity of DNNs, it seems like the analysis in the non-convex setting and its implications should be part of the main text. Experiments: Overall, the experiments are very insightful but limited, since you only show the training loss and the validation performance is not evaluated at all. Other things that would be beneficial in better assessing the quality of your method are: a comparison to curriculum learning methods, a more in-depth analysis of the impact of k and gamma in both regression and classification settings, and solving toy convex optimization problems to bridge the gap between theory and application. Preliminary rating: * Accept *
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTransferring Optimality Across Data Distributions via Homotopy Methods\nHomotopy methods, also known as continuation methods, are a powerful mathematical tool to efficiently solve various problems in numerical analysis, including complex non-convex optimization problems where no or only little prior knowledge regarding the localization of the solutions is available. In this work, we propose a novel homotopy-based numerical method that can be used to transfer knowledge regarding the localization of an optimum across different task distributions in deep learning applications. We validate the proposed methodology with some empirical evaluations in the regression and classification scenarios, where it shows that superior numerical performance can be achieved in popular deep learning benchmarks, i.e. FashionMNIST, CIFAR-10, and draw connections with the widely used fine-tuning heuristic. In addition, we give more insights on the properties of a general homotopy method when used in combination with Stochastic Gradient Descent by conducting a theoretical analysis in a simplified setting.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nContribution This paper proposes algorithm for transferring knowledge from easy -to-solved to complex tasks or from already solve to new tasks. It relies on homotopy functions and sequentially solves a sequence of optimization problems where the task distribution is gradually deformed from a source task to the target task. Theoretical guarantees are provided and proven in a strongly convex setting. The main results from the theory show that the distance between the final solution and its optimal are less or equal to relative to the distance of the initial source solution to its optimum. So a near optimal solution for the source task will lead to near optimal solution for the target task. Regression and Classification experimentations show competitive results compared to random and warm-start initialization schemes. 
Clarity Overall, the paper is well written, well motivated and well structured. The technical content is also very clear and excellent. Minor point: Seems that there is a notation error in proposition G.1 and its proof (i instead of i+1). Novelty The novelty in this work seems to be the application of homotopy methods to the transfer learning settings. The mathematical guarantees are also new and may even offer new ways to interpret fine tuning methods that have been so successful in recent literature. However, given the non-convexity of DNNs, it seems like the analysis in the non-convex settings and its implications should be part of the main text. Experiments: Overall, the experiments are very insightful but limited since you only show the training loss and the validation performance is not evaluated at all. Other things that would could be beneficial in better assessing the quality of your method are: comparison to Curriculum learning methods, more in depth analysis of the impact of k, and gamma in both regression and classification settings, and solving toy convex optimization problems to bridge the gap between theory and application. Preliminary rating: * Accept *", "role": "assistant" } ]
[ "Contribution This paper proposes algorithm for transferring knowledge from easy -to-solved to complex tasks or from already solve to new tasks.", "It relies on homotopy functions and sequentially solves a sequence of optimization problems where the task distribution is gradually deformed from a source task to the target task.", "Theoretical guarantees are provided and proven in a strongly convex setting.", "The main results from the theory show that the distance between the final solution and its optimal are less or equal to relative to the distance of the initial source solution to its optimum.", "So a near optimal solution for the source task will lead to near optimal solution for the target task.", "Regression and Classification experimentations show competitive results compared to random and warm-start initialization schemes.", "Clarity Overall, the paper is well written, well motivated and well structured.", "The technical content is also very clear and excellent.", "Minor point: Seems that there is a notation error in proposition G.1 and its proof (i instead of i+1).", "Novelty The novelty in this work seems to be the application of homotopy methods to the transfer learning settings.", "The mathematical guarantees are also new and may even offer new ways to interpret fine tuning methods that have been so successful in recent literature.", "However, given the non-convexity of DNNs, it seems like the analysis in the non-convex settings and its implications should be part of the main text.", "Experiments: Overall, the experiments are very insightful but limited since you only show the training loss and the validation performance is not evaluated at all.", "Other things that would could be beneficial in better assessing the quality of your method are: comparison to Curriculum learning methods, more in depth analysis of the impact of k, and gamma in both regression and classification settings, and solving toy convex optimization problems to bridge the gap between theory and application.", "Preliminary rating: * Accept *" ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 11, "praise": 5, "presentation_and_reporting": 3, "results_and_discussion": 5, "suggestion_and_solution": 3, "total": 15 }
0.133333
0
0.2
0.733333
0.333333
0.2
0.333333
0.2
15
1
0
1
1
1
1
1
1
2.133333
2.070204
0.06313
Transferring Optimality Across Data Distributions via Homotopy Methods
Homotopy methods, also known as continuation methods, are a powerful mathematical tool to efficiently solve various problems in numerical analysis, including complex non-convex optimization problems where no or only little prior knowledge regarding the localization of the solutions is available. In this work, we propose a novel homotopy-based numerical method that can be used to transfer knowledge regarding the localization of an optimum across different task distributions in deep learning applications. We validate the proposed methodology with some empirical evaluations in the regression and classification scenarios, where it shows that superior numerical performance can be achieved in popular deep learning benchmarks, i.e. FashionMNIST, CIFAR-10, and draw connections with the widely used fine-tuning heuristic. In addition, we give more insights on the properties of a general homotopy method when used in combination with Stochastic Gradient Descent by conducting a theoretical analysis in a simplified setting.
The authors propose a very general homotopy framework for the deep learning setup and explore a few relevant theoretical issues. Though the proposed idea is interesting, the depth and breadth of the authors' presentation are simply lacking. The entire paper lacks focus, and I suggest the authors consider focusing on 1-2 well thought-out ideas. There are many 3-4-line-long sentences that are hard to decipher. Please also consider making the presentation more accessible. Overall, this paper does not meet the bar for ICLR.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTransferring Optimality Across Data Distributions via Homotopy Methods\nHomotopy methods, also known as continuation methods, are a powerful mathematical tool to efficiently solve various problems in numerical analysis, including complex non-convex optimization problems where no or only little prior knowledge regarding the localization of the solutions is available. In this work, we propose a novel homotopy-based numerical method that can be used to transfer knowledge regarding the localization of an optimum across different task distributions in deep learning applications. We validate the proposed methodology with some empirical evaluations in the regression and classification scenarios, where it shows that superior numerical performance can be achieved in popular deep learning benchmarks, i.e. FashionMNIST, CIFAR-10, and draw connections with the widely used fine-tuning heuristic. In addition, we give more insights on the properties of a general homotopy method when used in combination with Stochastic Gradient Descent by conducting a theoretical analysis in a simplified setting.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nAuthors propose a very general framework of Homotopy to the deep learning set up and explores a few relevant theoretical issues. Though the proposed idea is interesting, the depth and breadth of authors* presentation are simply lacking. The entire paper lacks focus and I suggest authors consider focusing on 1-2 well thought-out ideas. There are many 3-4 line long sentences that are hard to decipher. Please also consider making the presentation more accessible. Overall, this paper does not meet the bar for ICLR.", "role": "assistant" } ]
[ "Authors propose a very general framework of Homotopy to the deep learning set up and explores a few relevant theoretical issues.", "Though the proposed idea is interesting, the depth and breadth of authors* presentation are simply lacking.", "The entire paper lacks focus and I suggest authors consider focusing on 1-2 well thought-out ideas.", "There are many 3-4 line long sentences that are hard to decipher.", "Please also consider making the presentation more accessible.", "Overall, this paper does not meet the bar for ICLR." ]
{ "criticism": 4, "example": 1, "importance_and_relevance": 2, "materials_and_methods": 0, "praise": 1, "presentation_and_reporting": 3, "results_and_discussion": 0, "suggestion_and_solution": 2, "total": 6 }
0.666667
0.166667
0.333333
0
0.166667
0.5
0
0.333333
6
1
1
1
0
1
1
0
1
2.166667
1.393328
0.773339
Provable Filter Pruning for Efficient Neural Networks
We present a provable, sampling-based approach for generating compact Convolutional Neural Networks (CNNs) by identifying and removing redundant filters from an over-parameterized network. Our algorithm uses a small batch of input data points to assign a saliency score for each filter and constructs an importance sampling distribution where filters that highly affect the output are sampled with correspondingly high probability. Unlike weight pruning approaches that lead to irregular sparsity patterns -- requiring specialized libraries or hardware to enable computational speedups -- our approach compresses the original network to a slimmer subnetwork, which enables accelerated inference with any off-the-shelf deep learning library and hardware. Existing filter pruning methods are generally data-oblivious, rely on heuristics for evaluating the parameter importance, or require manual and tedious hyper-parameter tuning. In contrast, our method is data-informed, exhibits provable guarantees on the size and performance of the pruned network, and is widely applicable to varying network architectures and data sets. Our analytical bounds bridge the notions of compressibility and importance of network structures, which gives rise to a fully-automated procedure for identifying and preserving the filters in layers that are essential to the network*s performance. Our experimental results across varying pruning scenarios show that our algorithm consistently generates sparser and more efficient models than those generated by existing filter pruning approaches.
Summary: In this paper, the authors propose a provable pruning method and also provide a bound on the final pruning error. Compared with most heuristic pruning methods, pruning with a mathematical guarantee is indeed more convincing. We expect this work can encourage people to devote more effort to solid theoretical study of over-parameterized training. Intuitively speaking, a sensitive neuron has a greater contribution to the final output; reusing the corresponding filter and carefully rescaling its value requires many empirical attempts. To achieve a more reasonable algorithm, the authors prune the redundant channels by keeping the deviation of the summation statistically small, and reuse filters by importance sampling over the given channels. Experiments show that this method can reach a competitive pruning ratio against other pruning algorithms and behaves robustly in the retained-parameters-vs-error experiment. Weaknesses: 1. The experiments are too weak. ImageNet models have a great impact on most CV problems, and released models are flooding the open-source world. The authors should at least provide an ImageNet model to make this work more convincing. Besides, the authors should also consider an experiment on a modern lightweight network; VGG- and ResNet-like models are out of fashion and so big that anyone can obtain a sound result on them. 2. There is no comparison experiment against randomly selecting the top-k filters by norm. Importance sampling requires a probability vector [p1, p2, p3, ... pn]; if those probabilities are nearly uniform, importance sampling will behave like random sampling. In most cases, when pruning a network with many channels, picking the top-1 significant filter or randomly sampling the top-k filters will do almost the same thing. 3. Further theoretical considerations are lacking. The authors only consider single-layer reconstruction, without discussing the overall accumulated error. Unlike deterministic methods, sampling suffers from a variance-propagation problem: the variance in earlier layers affects the sampling probabilities of the next layer. How does this pruning work if the status of the earlier layers changes? I did not find any theoretical guarantee beyond the proof of the single-layer reconstruction bound.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nProvable Filter Pruning for Efficient Neural Networks\nWe present a provable, sampling-based approach for generating compact Convolutional Neural Networks (CNNs) by identifying and removing redundant filters from an over-parameterized network. Our algorithm uses a small batch of input data points to assign a saliency score for each filter and constructs an importance sampling distribution where filters that highly affect the output are sampled with correspondingly high probability. Unlike weight pruning approaches that lead to irregular sparsity patterns -- requiring specialized libraries or hardware to enable computational speedups -- our approach compresses the original network to a slimmer subnetwork, which enables accelerated inference with any off-the-shelf deep learning library and hardware. Existing filter pruning methods are generally data-oblivious, rely on heuristics for evaluating the parameter importance, or require manual and tedious hyper-parameter tuning. In contrast, our method is data-informed, exhibits provable guarantees on the size and performance of the pruned network, and is widely applicable to varying network architectures and data sets. Our analytical bounds bridge the notions of compressibility and importance of network structures, which gives rise to a fully-automated procedure for identifying and preserving the filters in layers that are essential to the network*s performance. Our experimental results across varying pruning scenarios show that our algorithm consistently generates sparser and more efficient models than those generated by existing filter pruning approaches.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nSummary: In this paper, the author propose a provable pruning method, and also provide a bound for the final pruning error. Among most heuristics prune method, pruning with mathematics guarantee is indeed more convincing. 
We expect this work can help people devoting some effort into more solid theoretical study in understanding the over-parameterized training. Intuitively speaking, the sensitive neuron has greater contribution for the final output, reusing the corresponding filter and carefully rescale its value require many empirically attempts. To achieve a more reasonable algorithm, author prune the redundant channel by controlling the deviation of the summation statistically small, and reusing the filter by important sampling the given channel. Experiment show that this method can reach a competitive prune radio against other pruning algorithm, and show robustly in retained parameters vs error experiment. Weakness: 1. experiment is too weak ImageNet model has great impact on most CV problem, and the current release models are flooding in the open source world. Author should at least provide a imagenet model and make this work more convincing. Besides, Author should also consider an experiment in modern lightweight network, vgg and resnet like model are out of fashion and so big that any one can make a sound result on it. 2. lack of a comparing experiment for random select the top-k norm. Important sampling require an input of probability [p1, p2, p3, ... pn], if those probabilities are nearly uniform, important sampling will behave like a random sampling method. In most case, if we want to prune the large channel network, picking the top-1 significant filter or random sampling top-k filter will almost do the same thing. 3. lack of further theory consideration author only consider the single layer reconstruction, without discussing the overall accumulative error. Unlike the other deterministic method, sampling skill suffer variance propagation problem, the pre-layer variance will affect the sampling probability of next layer, how this pruning work if we change status of the pre-layer, I didn*t find any theoretical guarantee and only find a proof of single layer reconstruction bound.", "role": "assistant" } ]
[ "Summary: In this paper, the author propose a provable pruning method, and also provide a bound for the final pruning error.", "Among most heuristics prune method, pruning with mathematics guarantee is indeed more convincing.", "We expect this work can help people devoting some effort into more solid theoretical study in understanding the over-parameterized training.", "Intuitively speaking, the sensitive neuron has greater contribution for the final output, reusing the corresponding filter and carefully rescale its value require many empirically attempts.", "To achieve a more reasonable algorithm, author prune the redundant channel by controlling the deviation of the summation statistically small, and reusing the filter by important sampling the given channel.", "Experiment show that this method can reach a competitive prune radio against other pruning algorithm, and show robustly in retained parameters vs error experiment.", "Weakness: 1. experiment is too weak ImageNet model has great impact on most CV problem, and the current release models are flooding in the open source world.", "Author should at least provide a imagenet model and make this work more convincing.", "Besides, Author should also consider an experiment in modern lightweight network, vgg and resnet like model are out of fashion and so big that any one can make a sound result on it.", "2. lack of a comparing experiment for random select the top-k norm.", "Important sampling require an input of probability [p1, p2, p3, ... pn], if those probabilities are nearly uniform, important sampling will behave like a random sampling method.", "In most case, if we want to prune the large channel network, picking the top-1 significant filter or random sampling top-k filter will almost do the same thing.", "3. lack of further theory consideration author only consider the single layer reconstruction, without discussing the overall accumulative error.", "Unlike the other deterministic method, sampling skill suffer variance propagation problem, the pre-layer variance will affect the sampling probability of next layer, how this pruning work if we change status of the pre-layer, I didn*t find any theoretical guarantee and only find a proof of single layer reconstruction bound." ]
{ "criticism": 4, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 14, "praise": 0, "presentation_and_reporting": 1, "results_and_discussion": 8, "suggestion_and_solution": 4, "total": 14 }
0.285714
0
0.142857
1
0
0.071429
0.571429
0.285714
14
1
0
1
1
0
1
1
1
2.357143
2.34136
0.015782
Lipschitz constant estimation for Neural Networks via sparse polynomial optimization
We introduce LiPopt, a polynomial optimization framework for computing increasingly tighter upper bound on the Lipschitz constant of neural networks. The underlying optimization problems boil down to either linear (LP) or semidefinite (SDP) programming. We show how to use structural properties of the network, such as sparsity, to significantly reduce the complexity of computation. This is specially useful for convolutional as well as pruned neural networks. We conduct experiments on networks with random weights as well as networks trained on MNIST, showing that in the particular case of the -Lipschitz constant, our approach yields superior estimates as compared to other baselines available in the literature.
In this paper, the authors introduce a framework for computing upper bounds on the Lipschitz constant for neural nets. The main contribution of the paper is to leverage the sparsity properties of typical feed-forward neural nets to reduce the computational complexity of the algorithm. Through experiments, the authors show that the proposed algorithm computes tighter Lipschitz bounds compared to baselines. The approach proposed in the paper looks interesting, although the presentation can be made clearer in places. For example, in equation (4), it would be helpful to explicitly state over which parameters the max is taken. There's also a number of small typos that need to be fixed. For example: "We refer to d as the depth, and we we focus on the case where fd has a single real value as output." on page 1. I found the proposed algorithm and the discussions in Sections 2 and 3 interesting, although I am not familiar enough with the literature on polynomial optimization to evaluate whether there is any significantly new idea presented in these sections. I found Section 4 very interesting too, and very important towards making the algorithm actually computationally tractable. I have a couple of concerns with the rest of the paper, however, which I describe below: 1. It is nice that upper bounds for the local Lipschitz constant can be incorporated easily into the formulation. I would have liked to see some experiments on evaluating local Lipschitz constants though, and how they compare with other methods, since this is a very popular setting in which such techniques are used nowadays. 2. The paper overall I think would benefit from a better experimental evaluation. It would be interesting to see how much the sparsity pattern in convnets affects results compared to other baselines. It would also be interesting to see how the bound degrades as the network grows bigger, and in particular as the depth increases. Given the lack of thorough experiments in the paper, I am giving the paper a borderline rating. I am however willing to increase my score based on discussions with the authors and other reviewers. =================================== Edit after rebuttal: The latest draft addresses most of my concerns, and I am happy to recommend accepting this paper now.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLipschitz constant estimation for Neural Networks via sparse polynomial optimization\nWe introduce LiPopt, a polynomial optimization framework for computing increasingly tighter upper bound on the Lipschitz constant of neural networks. The underlying optimization problems boil down to either linear (LP) or semidefinite (SDP) programming. We show how to use structural properties of the network, such as sparsity, to significantly reduce the complexity of computation. This is specially useful for convolutional as well as pruned neural networks. We conduct experiments on networks with random weights as well as networks trained on MNIST, showing that in the particular case of the -Lipschitz constant, our approach yields superior estimates as compared to other baselines available in the literature.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nIn this paper, the authors introduce a framework for computing upper bounds on the Lipschitz constant for neural nets. The main contribution of the paper is to leverage the sparsity properties of typical feed-forward neural nets to reduce the computational complexity of the algorithm. Through experiments, the authors show that the proposed algorithm computes tighter Lipschitz bounds compared to baselines. The approach proposed in the paper looks interesting. Although the presentation can be made clearer in places. For example, in equation (4), it would be helpful to explicitly state over which parameters the max is taken. There*s also a number of small typos that need to be fixed. For example: *We refer to d as the depth, and we we focus on the case where fd has a single real value as output.* on page 1. I found the proposed algorithm and the discussions in Section 2 and 3 interesting, although I am not familiar enough with the literature on polynomial optimization to evaluate whether there is any significantly new idea presented in these sections. 
I found section 4 very interesting too, and very important towards making the algorithm actually computationally tractable. I have a couple of concerns with the rest of the paper however, which `I describe below: 1. It is nice that upper bounds for the local Lipschitz constant can be incorporated easily into the formulation. I would have liked to see some experiments on evaluating local Lipschitz constants though, and how they compare with other methods, since this is a very popular setting in which such techniques are used nowadays. 2. The paper overall I think would benefit from a better experimental evaluation. It would be interesting to see how much the sparsity pattern in convnets affect results compared to other baselines. It would also be interesting to see how the bound degrades as the network grows bigger, and in particular as the depth increases. Given the lack of thorough experiments in the paper, I am giving the paper a borderline rating. I am however willing to increase my score based on discussions with the authors and other reviewers. =================================== Edit after rebuttal: The latest draft addresses most of my concerns, and I am happy to recommend accepting this paper now.", "role": "assistant" } ]
[ "In this paper, the authors introduce a framework for computing upper bounds on the Lipschitz constant for neural nets.", "The main contribution of the paper is to leverage the sparsity properties of typical feed-forward neural nets to reduce the computational complexity of the algorithm.", "Through experiments, the authors show that the proposed algorithm computes tighter Lipschitz bounds compared to baselines.", "The approach proposed in the paper looks interesting.", "Although the presentation can be made clearer in places.", "For example, in equation (4), it would be helpful to explicitly state over which parameters the max is taken.", "There*s also a number of small typos that need to be fixed.", "For example: *We refer to d as the depth, and we we focus on the case where fd has a single real value as output.", "* on page 1.", "I found the proposed algorithm and the discussions in Section 2 and 3 interesting, although I am not familiar enough with the literature on polynomial optimization to evaluate whether there is any significantly new idea presented in these sections.", "I found section 4 very interesting too, and very important towards making the algorithm actually computationally tractable.", "I have a couple of concerns with the rest of the paper however, which `I describe below: 1.", "It is nice that upper bounds for the local Lipschitz constant can be incorporated easily into the formulation.", "I would have liked to see some experiments on evaluating local Lipschitz constants though, and how they compare with other methods, since this is a very popular setting in which such techniques are used nowadays.", "2.", "The paper overall I think would benefit from a better experimental evaluation.", "It would be interesting to see how much the sparsity pattern in convnets affect results compared to other baselines.", "It would also be interesting to see how the bound degrades as the network grows bigger, and in particular as the depth increases.", "Given the lack of thorough experiments in the paper, I am giving the paper a borderline rating.", "I am however willing to increase my score based on discussions with the authors and other reviewers.", "=================================== Edit after rebuttal: The latest draft addresses most of my concerns, and I am happy to recommend accepting this paper now." ]
{ "criticism": 3, "example": 3, "importance_and_relevance": 5, "materials_and_methods": 12, "praise": 6, "presentation_and_reporting": 4, "results_and_discussion": 5, "suggestion_and_solution": 8, "total": 21 }
0.142857
0.142857
0.238095
0.571429
0.285714
0.190476
0.238095
0.380952
21
1
1
1
1
1
1
1
1
2.190476
1.180401
1.010076
Lipschitz constant estimation for Neural Networks via sparse polynomial optimization
We introduce LiPopt, a polynomial optimization framework for computing increasingly tighter upper bound on the Lipschitz constant of neural networks. The underlying optimization problems boil down to either linear (LP) or semidefinite (SDP) programming. We show how to use structural properties of the network, such as sparsity, to significantly reduce the complexity of computation. This is specially useful for convolutional as well as pruned neural networks. We conduct experiments on networks with random weights as well as networks trained on MNIST, showing that in the particular case of the -Lipschitz constant, our approach yields superior estimates as compared to other baselines available in the literature.
The authors study the problem of estimating the Lipschitz constant of a deep neural network with ELO activation function. The authors formulate the problem as a polynomial optimisation problem, which is elegant. Subsequently, they utilise an LP hierarchy based on the Krivine-Vasilescu-Handelman’s Positivstellensatz and suggest exploiting sparsity therein. The computational results are clearly not sufficient to apply this approach to real-world neural networks, but are still respectable. Section 3 (Theorem 2) is not original work, as leaving the theorem without a reference would imply: the authors cite Section 9 of Lasserre's 2015 book later, so they are clearly aware of this, and there are many applications even within verification, e.g., https://link.springer.com/content/pdf/10.1007%2F978-3-319-48989-6_44.pdf https://ieeexplore.ieee.org/document/8493559 The suggestions as to the exploitation of sparsity (Section 4) are not original work either. The authors could cite, e.g., JB Lasserre: Convergent SDP-relaxations in polynomial optimization with sparsity (SIAM Journal on Optimization, 2006), as one of the early proponents of the exploitation of sparsity. In Section 7: -- The claim "We observed clear improvement of the Lipschitz bound obtained, compared to the SDP method" is not supported by the results the authors present. -- The authors do not present the run-time. This needs to be included, considering they imply that the key improvement over the traditional SDP is that this works with smaller variables and should be faster. -- The presentation of the experimental results should be improved, so as to follow the NIPS reproducibility checklist, or at least have error bars at one standard deviation and standard deviation in the table. Other than that, the paper is well written (modulo Section missing in "Section 5" at the top of Section 7), and I would recommend its acceptance.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLipschitz constant estimation for Neural Networks via sparse polynomial optimization\nWe introduce LiPopt, a polynomial optimization framework for computing increasingly tighter upper bound on the Lipschitz constant of neural networks. The underlying optimization problems boil down to either linear (LP) or semidefinite (SDP) programming. We show how to use structural properties of the network, such as sparsity, to significantly reduce the complexity of computation. This is specially useful for convolutional as well as pruned neural networks. We conduct experiments on networks with random weights as well as networks trained on MNIST, showing that in the particular case of the -Lipschitz constant, our approach yields superior estimates as compared to other baselines available in the literature.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors study the problem of estimating the Lipschitz constant of a deep neural network with ELO activation function. The authors formulate the problem as a polynomial optimisation problem, which is elegant. Subsequently, they utilise an LP hierarchy based on the Krivine-Vasilescu-Handelman’s Positivstellensatz and suggest exploiting sparsity therein. The computational results are clearly not sufficient to apply this approach to real-world neural networks, but are still respectable. Section 3 (Theorem 2) is not original work, as leaving the theorem without a reference would imply: the authors cite Section 9 of Lasserre*s 2015 book later, so they are clearly aware of this, and there are many application even within verification, e.g., https://link.springer.com/content/pdf/10.1007%2F978-3-319-48989-6_44.pdf https://ieeexplore.ieee.org/document/8493559 The suggestions as to the exploitation of sparsity (Section 4) are not original work either. The authors could cite, e.g., JB Lasserre: Convergent SDP-relaxations in polynomial optimization with sparsity (SIAM Journal on Optimization, 2006), as one of the early proponents of the exploitation of sparsity. 
In Section 7: -- The claim *We observed clear improvement of the Lipschitz bound obtained, compared to the SDP method* is not supported by the results the authors present. -- The authors do not present the run-time. This needs to be included, considering they imply that the key improvement over the traditional SDP is that this works with smaller variables and should be faster. -- The presentation of the experimental results should be improved, so as to follow the NIPS reproducibility checklist, or at least have error bars at one standard deviation and standard deviation in the table. Other than that, the paper is well written (modulo Section missing in *Section 5* at the top of Section 7), and I would recommend its acceptance.", "role": "assistant" } ]
[ "The authors study the problem of estimating the Lipschitz constant of a deep neural network with ELO activation function.", "The authors formulate the problem as a polynomial optimisation problem, which is elegant.", "Subsequently, they utilise an LP hierarchy based on the Krivine-Vasilescu-Handelman’s Positivstellensatz and suggest exploiting sparsity therein.", "The computational results are clearly not sufficient to apply this approach to real-world neural networks, but are still respectable.", "Section 3 (Theorem 2) is not original work, as leaving the theorem without a reference would imply: the authors cite Section 9 of Lasserre*s 2015 book later, so they are clearly aware of this, and there are many application even within verification, e.g., https://link.springer.com/content/pdf/10.1007%2F978-3-319-48989-6_44.pdf https://ieeexplore.ieee.org/document/8493559 The suggestions as to the exploitation of sparsity (Section 4) are not original work either.", "The authors could cite, e.g., JB Lasserre: Convergent SDP-relaxations in polynomial optimization with sparsity (SIAM Journal on Optimization, 2006), as one of the early proponents of the exploitation of sparsity.", "In Section 7: -- The claim *We observed clear improvement of the Lipschitz bound obtained, compared to the SDP method* is not supported by the results the authors present.", "-- The authors do not present the run-time.", "This needs to be included, considering they imply that the key improvement over the traditional SDP is that this works with smaller variables and should be faster.", "-- The presentation of the experimental results should be improved, so as to follow the NIPS reproducibility checklist, or at least have error bars at one standard deviation and standard deviation in the table.", "Other than that, the paper is well written (modulo Section missing in *Section 5* at the top of Section 7), and I would recommend its acceptance." ]
{ "criticism": 4, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 5, "praise": 3, "presentation_and_reporting": 4, "results_and_discussion": 3, "suggestion_and_solution": 3, "total": 11 }
0.363636
0
0.181818
0.454545
0.272727
0.363636
0.272727
0.272727
11
1
0
1
1
1
1
1
1
2.181818
2.118688
0.06313
Classification-Based Anomaly Detection for General Data
Anomaly detection, finding patterns that substantially deviate from those seen previously, is one of the fundamental problems of artificial intelligence. Recently, classification-based methods were shown to achieve superior results on this task. In this work, we present a unifying view and propose an open-set method to relax current generalization assumptions. Furthermore, we extend the applicability of transformation-based methods to non-image data using random affine transformations. Our method is shown to obtain state-of-the-art accuracy and is applicable to broad data types. The strong performance of our method is extensively validated on multiple datasets from different domains.
This paper proposes a novel approach to classification-based anomaly detection for general data. Classification-based anomaly detection uses auxiliary tasks (transformations) to train a model to extract useful features from the data. This approach is well-known in image data, where auxiliary tasks such as classification of rotated or flipped images have been demonstrated to work effectively. The paper generalizes to the task by using the affine transformation y = Wx+b. A novel distance-based classification is also devised to learn the model in such as way that it generalizes to unseen data. This is achieved by modeling the each auxiliary task subspace by a sphere and by using the distance to the center for the calculation of the loss function. The anomaly score then becomes the product of the probabilities that the transformed samples are in their respective subspaces. The paper provides comparison to SOT methods for both Cifar10 and 4 non-image datasets. The proposed method substantially outperforms SOT on all datasets. A section is devoted to explore the benefits of this approach on adversarial attacks using PGD. It is shown that random transformations (implemented with the affine transformation and a random matrix) do increase the robustness of the models by 50%. Another section is devoted to studying the effect of contamination (anomaly data in the training set). The approach is shown to degrade more gracefully than DAGMM on KDDCUP99. Finally, a section studies the effect of the number of tasks on the performance, showing that after a certain number of task (which is probably problem-dependent), the accuracy stabilizes. PROS: * A general and novel approach to anomaly detection with SOT results. * The method allows for any type of classifier to be used. The authors note that deep models perform well on the large datasets (KDDCUP) while shallower models are sufficient for smaller datasets. * The paper is relatively well written and easy to follow, the math is clearly laid out. CONS: * The lack of a pseudo-code algorithm makes it hard to understand and reproduce the method * Figure 1 (left) has inverted colors (DAGMM should be blue - higher error). * Figure 1 (right) - it is unclear what the scale of the x-axis is since there is only 1 label. Also the tick marks seem spaced logarithmically, which, if i understand correctly, is wrong. * The paragraph *Number of operations* should be renamed *Number of tasks* to be consistent. Also the sentence *From 16 ...* should be clarified, as it seems to contrast accuracy and results, which are the same entity. The concept of *stability of results* is not explained clearly. It would suffice to say: *From 16 tasks and larger, the accuracy remains stable*. * In section 6, the paragraph *Generating many tasks* should be named *Number of tasks*, to be consistent with the corresponding paragraph in section 5.2. Also the first sentence should be: *As illustrated in Figure 1 (right), increasing the number of tasks does result in improved performance but the trend is not linear and beyond a certain threshold, no improvements are made. And again the concept of *stability* is somewhat misleading here. The sentence *...it mainly improves the stability of the results* is wrong. The stability is not improved, it is just that the performance trend is stable. * The study on the number of tasks should be carried on several datasets. Only one dataset is too few to make any claims on the accuracy trends as the number of task is increased. 
* The authors should coin an acronym to name their methods. Overall this paper provides a novel approach to classification-based semi-supervised anomaly detection of general data. The results are very encouraging, beating SOT methods by a good margin on standard benchmarks.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nClassification-Based Anomaly Detection for General Data\nAnomaly detection, finding patterns that substantially deviate from those seen previously, is one of the fundamental problems of artificial intelligence. Recently, classification-based methods were shown to achieve superior results on this task. In this work, we present a unifying view and propose an open-set method to relax current generalization assumptions. Furthermore, we extend the applicability of transformation-based methods to non-image data using random affine transformations. Our method is shown to obtain state-of-the-art accuracy and is applicable to broad data types. The strong performance of our method is extensively validated on multiple datasets from different domains.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a novel approach to classification-based anomaly detection for general data. Classification-based anomaly detection uses auxiliary tasks (transformations) to train a model to extract useful features from the data. This approach is well-known in image data, where auxiliary tasks such as classification of rotated or flipped images have been demonstrated to work effectively. The paper generalizes to the task by using the affine transformation y = Wx+b. A novel distance-based classification is also devised to learn the model in such as way that it generalizes to unseen data. This is achieved by modeling the each auxiliary task subspace by a sphere and by using the distance to the center for the calculation of the loss function. The anomaly score then becomes the product of the probabilities that the transformed samples are in their respective subspaces. The paper provides comparison to SOT methods for both Cifar10 and 4 non-image datasets. The proposed method substantially outperforms SOT on all datasets. A section is devoted to explore the benefits of this approach on adversarial attacks using PGD. 
It is shown that random transformations (implemented with the affine transformation and a random matrix) do increase the robustness of the models by 50%. Another section is devoted to studying the effect of contamination (anomaly data in the training set). The approach is shown to degrade more gracefully than DAGMM on KDDCUP99. Finally, a section studies the effect of the number of tasks on the performance, showing that after a certain number of task (which is probably problem-dependent), the accuracy stabilizes. PROS: * A general and novel approach to anomaly detection with SOT results. * The method allows for any type of classifier to be used. The authors note that deep models perform well on the large datasets (KDDCUP) while shallower models are sufficient for smaller datasets. * The paper is relatively well written and easy to follow, the math is clearly laid out. CONS: * The lack of a pseudo-code algorithm makes it hard to understand and reproduce the method * Figure 1 (left) has inverted colors (DAGMM should be blue - higher error). * Figure 1 (right) - it is unclear what the scale of the x-axis is since there is only 1 label. Also the tick marks seem spaced logarithmically, which, if i understand correctly, is wrong. * The paragraph *Number of operations* should be renamed *Number of tasks* to be consistent. Also the sentence *From 16 ...* should be clarified, as it seems to contrast accuracy and results, which are the same entity. The concept of *stability of results* is not explained clearly. It would suffice to say: *From 16 tasks and larger, the accuracy remains stable*. * In section 6, the paragraph *Generating many tasks* should be named *Number of tasks*, to be consistent with the corresponding paragraph in section 5.2. Also the first sentence should be: *As illustrated in Figure 1 (right), increasing the number of tasks does result in improved performance but the trend is not linear and beyond a certain threshold, no improvements are made. And again the concept of *stability* is somewhat misleading here. The sentence *...it mainly improves the stability of the results* is wrong. The stability is not improved, it is just that the performance trend is stable. * The study on the number of tasks should be carried on several datasets. Only one dataset is too few to make any claims on the accuracy trends as the number of task is increased. * The authors should coin an acronym to name their methods. Overall this paper provides a novel approach to classification-based semi-supervised anomaly detection of general data. The results are very encouraging, beating SOT methods by a good margin on standard benchmarks.", "role": "assistant" } ]
[ "This paper proposes a novel approach to classification-based anomaly detection for general data.", "Classification-based anomaly detection uses auxiliary tasks (transformations) to train a model to extract useful features from the data.", "This approach is well-known in image data, where auxiliary tasks such as classification of rotated or flipped images have been demonstrated to work effectively.", "The paper generalizes to the task by using the affine transformation y = Wx+b.", "A novel distance-based classification is also devised to learn the model in such as way that it generalizes to unseen data.", "This is achieved by modeling the each auxiliary task subspace by a sphere and by using the distance to the center for the calculation of the loss function.", "The anomaly score then becomes the product of the probabilities that the transformed samples are in their respective subspaces.", "The paper provides comparison to SOT methods for both Cifar10 and 4 non-image datasets.", "The proposed method substantially outperforms SOT on all datasets.", "A section is devoted to explore the benefits of this approach on adversarial attacks using PGD.", "It is shown that random transformations (implemented with the affine transformation and a random matrix) do increase the robustness of the models by 50%.", "Another section is devoted to studying the effect of contamination (anomaly data in the training set).", "The approach is shown to degrade more gracefully than DAGMM on KDDCUP99.", "Finally, a section studies the effect of the number of tasks on the performance, showing that after a certain number of task (which is probably problem-dependent), the accuracy stabilizes.", "PROS: * A general and novel approach to anomaly detection with SOT results.", "* The method allows for any type of classifier to be used.", "The authors note that deep models perform well on the large datasets (KDDCUP) while shallower models are sufficient for smaller datasets.", "* The paper is relatively well written and easy to follow, the math is clearly laid out.", "CONS: * The lack of a pseudo-code algorithm makes it hard to understand and reproduce the method * Figure 1 (left) has inverted colors (DAGMM should be blue - higher error).", "* Figure 1 (right) - it is unclear what the scale of the x-axis is since there is only 1 label.", "Also the tick marks seem spaced logarithmically, which, if i understand correctly, is wrong.", "* The paragraph *Number of operations* should be renamed *Number of tasks* to be consistent.", "Also the sentence *From 16 ...* should be clarified, as it seems to contrast accuracy and results, which are the same entity.", "The concept of *stability of results* is not explained clearly.", "It would suffice to say: *From 16 tasks and larger, the accuracy remains stable*.", "* In section 6, the paragraph *Generating many tasks* should be named *Number of tasks*, to be consistent with the corresponding paragraph in section 5.2.", "Also the first sentence should be: *As illustrated in Figure 1 (right), increasing the number of tasks does result in improved performance but the trend is not linear and beyond a certain threshold, no improvements are made.", "And again the concept of *stability* is somewhat misleading here.", "The sentence *...it mainly improves the stability of the results* is wrong.", "The stability is not improved, it is just that the performance trend is stable.", "* The study on the number of tasks should be carried on several datasets.", "Only one dataset is too few to make any 
claims on the accuracy trends as the number of task is increased.", "* The authors should coin an acronym to name their methods.", "Overall this paper provides a novel approach to classification-based semi-supervised anomaly detection of general data.", "The results are very encouraging, beating SOT methods by a good margin on standard benchmarks." ]
{ "criticism": 9, "example": 7, "importance_and_relevance": 4, "materials_and_methods": 23, "praise": 5, "presentation_and_reporting": 14, "results_and_discussion": 10, "suggestion_and_solution": 9, "total": 35 }
0.257143
0.2
0.114286
0.657143
0.142857
0.4
0.285714
0.257143
35
1
1
1
1
1
1
1
1
2.314286
-5.32441
7.638696
Scale-Equivariant Steerable Networks
The effectiveness of Convolutional Neural Networks (CNNs) has been substantially attributed to their built-in property of translation equivariance. However, CNNs do not have embedded mechanisms to handle other types of transformations. In this work, we pay attention to scale changes, which regularly appear in various tasks due to the changing distances between the objects and the camera. First, we introduce the general theory for building scale-equivariant convolutional networks with steerable filters. We develop scale-convolution and generalize other common blocks to be scale-equivariant. We demonstrate the computational efficiency and numerical stability of the proposed method. We compare the proposed models to the previously developed methods for scale equivariance and local scale invariance. We demonstrate state-of-the-art results on MNIST-scale dataset. Finally, we demonstrate that the proposed scale-equivariant convolutions show remarkable gains on STL-10 when used as drop-in replacements for non-equivariant convolutional layers.
The paper describes a method for integrating scale equivariance into convolutional networks using steerable filters. After developing the theory using continuous scale and translation space, a discretized implementation using a fixed set of steerable basis elements is described. Experiments are performed measuring the error from true equivariance, varying number of layers, image scale and scales in scale interactions. The method is evaluated using MNIST-scale and STL-10, with convincing results on MNIST-scale and a bit less convincing but still good results on STL-10. Overall, I think this is a nice paper with generally good explanations and experiments probing the behavior. I would have liked to see more probing into the effects of number and distance between scales. Table 1 and corresponding text say that a significant advantage of the approach is that it can handle arbitrary scale values, but there was no explicit exploration of the effects of using this beyond one set of scales per experiment/dataset. What scale values can be sampled, which work best, and why? Also, while the MNIST-scale experiment seems convincing, I think the STL-10 is a bit less (but still OK): Although the method outperforms other methods and appropriate baseline models, it's a little disappointing that pooling over scales (which, I would think, would convert the equivariance to invariance) is best, and inter-scale interactions increase error. (Perhaps this is not too surprising in retrospect, as images may have limited scale variation from camera position in this dataset, but significant within-class viewpoint variation.) Even so, I still find the method concise and of interest, with the basics evaluated, even if some of its unique advantages may have been better explored. Additional Questions: * Inter-scale interaction could be elaborated a bit more. End of sec 4 says, "use convHH for each scale sequentially and .. sum". I believe this is sequencing over scales in the kernel; explaining a bit better how this is implemented, including the shape of w in this case, would be helpful. * Which scales were chosen for the fixed basis? How large in spatial extent are the kernels in the basis elements, at each scale? * In the implementation, what is the value of V (sampled 2d conv kernel size)?
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nScale-Equivariant Steerable Networks\nThe effectiveness of Convolutional Neural Networks (CNNs) has been substantially attributed to their built-in property of translation equivariance. However, CNNs do not have embedded mechanisms to handle other types of transformations. In this work, we pay attention to scale changes, which regularly appear in various tasks due to the changing distances between the objects and the camera. First, we introduce the general theory for building scale-equivariant convolutional networks with steerable filters. We develop scale-convolution and generalize other common blocks to be scale-equivariant. We demonstrate the computational efficiency and numerical stability of the proposed method. We compare the proposed models to the previously developed methods for scale equivariance and local scale invariance. We demonstrate state-of-the-art results on MNIST-scale dataset. Finally, we demonstrate that the proposed scale-equivariant convolutions show remarkable gains on STL-10 when used as drop-in replacements for non-equivariant convolutional layers.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper describes a method for integrating scale equivariance into convolutional networks using steerable filters. After developing the theory using continuous scale and translation space, a discretized implementation using a fixed set of steerable basis elements is described. Experiments are performed measuring the error from true equivariance, varying number of layers, image scale and scales in scale interactions. The method is evaluated using MNIST-scale and STL-10, with convincing results on MNIST-scale and bit less convincing but still good results on STL-10. Overall, I think this is a nice paper with generally good explanations and experiments probing the behavior. I would have liked to see more probing into the effects of number and distance between scales. 
Table 1 and corresponding text say that a significant advantage of the approach is that it can handle arbitrary scale values, but there was no explicit exploration of the effects of using this beyond one set of scales per experiment/dataset. What scale values can be sampled, which work best, and why? Also, while the MNIST-scale experiment seems convincing, I think the STL-10 is a bit less (but still OK): Although the method outperforms other methods and appropriate baseline models, it*s a little disappointing that pooling over scales (which I would would convert the equivariance to invariance) is best, and inter-scale interactions increase error. (Perhaps this is not too surprising in retrospect, as images may have limited scale variation from camera position in this dataset, but significant within-class viewpoint variation.) Even so, I still find the method concise and of interest, with the basics evaluated, even if some of its unique advantages may have been better explored. Additional Questions: * Inter-scale interaction could be elaborated a bit more. End of sec 4 says, *use convHH for each scale sequentially and .. sum*. I believe this is sequencing over scales in the kernel; explaining a bit better how this is implemented, including the shape of w in this case, would be helpful. * Which scales were chosen for the fixed basis? How large in spatial extent are the kernels in the basis elements, at each scale? * In the implementation, what is the value of V (sampled 2d conv kernel size)?", "role": "assistant" } ]
[ "The paper describes a method for integrating scale equivariance into convolutional networks using steerable filters.", "After developing the theory using continuous scale and translation space, a discretized implementation using a fixed set of steerable basis elements is described.", "Experiments are performed measuring the error from true equivariance, varying number of layers, image scale and scales in scale interactions.", "The method is evaluated using MNIST-scale and STL-10, with convincing results on MNIST-scale and bit less convincing but still good results on STL-10.", "Overall, I think this is a nice paper with generally good explanations and experiments probing the behavior.", "I would have liked to see more probing into the effects of number and distance between scales.", "Table 1 and corresponding text say that a significant advantage of the approach is that it can handle arbitrary scale values, but there was no explicit exploration of the effects of using this beyond one set of scales per experiment/dataset.", "What scale values can be sampled, which work best, and why?", "Also, while the MNIST-scale experiment seems convincing, I think the STL-10 is a bit less (but still OK): Although the method outperforms other methods and appropriate baseline models, it*s a little disappointing that pooling over scales (which I would would convert the equivariance to invariance) is best, and inter-scale interactions increase error.", "(Perhaps this is not too surprising in retrospect, as images may have limited scale variation from camera position in this dataset, but significant within-class viewpoint variation.)", "Even so, I still find the method concise and of interest, with the basics evaluated, even if some of its unique advantages may have been better explored.", "Additional Questions: * Inter-scale interaction could be elaborated a bit more.", "End of sec 4 says, *use convHH for each scale sequentially and .. sum*.", "I believe this is sequencing over scales in the kernel; explaining a bit better how this is implemented, including the shape of w in this case, would be helpful.", "* Which scales were chosen for the fixed basis?", "How large in spatial extent are the kernels in the basis elements, at each scale?", "* In the implementation, what is the value of V (sampled 2d conv kernel size)?" ]
{ "criticism": 4, "example": 1, "importance_and_relevance": 2, "materials_and_methods": 17, "praise": 4, "presentation_and_reporting": 5, "results_and_discussion": 5, "suggestion_and_solution": 5, "total": 17 }
0.235294
0.058824
0.117647
1
0.235294
0.294118
0.294118
0.294118
17
1
1
1
1
1
1
1
1
2.529412
2.276893
0.252519
Differentiable learning of numerical rules in knowledge graphs
Rules over a knowledge graph (KG) capture interpretable patterns in data and can be used for KG cleaning and completion. Inspired by the TensorLog differentiable logic framework, which compiles rule inference into a sequence of differentiable operations, recently a method called Neural LP has been proposed for learning the parameters as well as the structure of rules. However, it is limited with respect to the treatment of numerical features like age, weight or scientific measurements. We address this limitation by extending Neural LP to learn rules with numerical values, e.g., “People younger than 18 typically live with their parents”. We demonstrate how dynamic programming and cumulative sum operations can be exploited to ensure efficiency of such extension. Our novel approach allows us to extract more expressive rules with aggregates, which are of higher quality and yield more accurate predictions compared to rules learned by the state-of-the-art methods, as shown by our experiments on synthetic and real-world datasets.
This paper proposes an interesting extension to the Neural LP framework for learning numerical rules in knowledge graphs. The proposed method can handle predicates involving the comparison of the numerical attribute values. The authors demonstrate its effectiveness on both synthetic knowledge graphs and the parts of existing knowledge graphs which consider numerical values. I recommend the paper to be rejected in its current form for the following 3 reasons: (1) Although the idea of making numerical rules differentiable is interesting, the current proposed method can only deal with one form of numerical predicate, which is numerical comparison. The limitation to such a special case makes the paper somewhat incremental. (2) The paper does not do a great job of convincing the reader that the problem it is trying to solve is an important matter, or the proposed method is indeed effective in some applications. Although the proposed method does a good job in synthetic experiments, outperforming existing methods by a large margin, its performance on the numerical variants of Freebase/DBPedia dataset does not show consistent significant improvement. The authors should try to find a real-world domain which can really demonstrate the effectiveness of the method. (3) The experiment section lacks more detailed analysis which can intuitively explain how well the proposed method performs on the benchmarks. A good place to start with is to visualize (print out) the learned numerical rules and see if they make any sense. The experiment section needs significant improvement, especially when there is space left. The authors can consider improving the paper based on the above drawbacks. I encourage the authors to re-submit the paper once it's improved.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDifferentiable learning of numerical rules in knowledge graphs\nRules over a knowledge graph (KG) capture interpretable patterns in data and can be used for KG cleaning and completion. Inspired by the TensorLog differentiable logic framework, which compiles rule inference into a sequence of differentiable operations, recently a method called Neural LP has been proposed for learning the parameters as well as the structure of rules. However, it is limited with respect to the treatment of numerical features like age, weight or scientific measurements. We address this limitation by extending Neural LP to learn rules with numerical values, e.g., “People younger than 18 typically live with their parents”. We demonstrate how dynamic programming and cumulative sum operations can be exploited to ensure efficiency of such extension. Our novel approach allows us to extract more expressive rules with aggregates, which are of higher quality and yield more accurate predictions compared to rules learned by the state-of-the-art methods, as shown by our experiments on synthetic and real-world datasets.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes an interesting extension to the Neural LP framework for learning numerical rules in knowledge graphs. The proposed method can handle predicates involving the comparison of the numerical attribute values. The authors demonstrate its effectiveness on both synthetic knowledge graphs and the parts of existing knowledge graphs which consider numerical values. I recommend the paper to be rejected in its current form for the following 3 reasons: (1) Although the idea of making numerical rules differentiable is interesting, the current proposed method can only deal with one form of numerical predicate, which is numerical comparison. The limitation to such a special case makes the paper somewhat incremental. 
(2) The paper does not do a great job of convincing the reader that the problem it is trying to solve is an important matter, or the proposed method is indeed effective in some applications. Although the proposed method does a good job in synthetic experiments, outperforming existing methods by a large margin, its performance on the numerical variants of Freebase/DBPedia dataset does not show consistent significant improvement. The authors should try to find a real-world domain which can really demonstrate the effectiveness of the method. (3) The experiment section lacks more detailed analysis which can intuitively explain how well the proposed method performs on the benchmarks. A good place to start with is to visualize(print out) the learned numerical rules and see if they make any sense. The experiment section needs significant improvement, especially when there is space left. The authors can consider improving the paper based on the above drawbacks. I encourage the authors to re-submit the paper once it*s improved.", "role": "assistant" } ]
[ "This paper proposes an interesting extension to the Neural LP framework for learning numerical rules in knowledge graphs.", "The proposed method can handle predicates involving the comparison of the numerical attribute values.", "The authors demonstrate its effectiveness on both synthetic knowledge graphs and the parts of existing knowledge graphs which consider numerical values.", "I recommend the paper to be rejected in its current form for the following 3 reasons: (1) Although the idea of making numerical rules differentiable is interesting, the current proposed method can only deal with one form of numerical predicate, which is numerical comparison.", "The limitation to such a special case makes the paper somewhat incremental.", "(2) The paper does not do a great job of convincing the reader that the problem it is trying to solve is an important matter, or the proposed method is indeed effective in some applications.", "Although the proposed method does a good job in synthetic experiments, outperforming existing methods by a large margin, its performance on the numerical variants of Freebase/DBPedia dataset does not show consistent significant improvement.", "The authors should try to find a real-world domain which can really demonstrate the effectiveness of the method.", "(3) The experiment section lacks more detailed analysis which can intuitively explain how well the proposed method performs on the benchmarks.", "A good place to start with is to visualize(print out) the learned numerical rules and see if they make any sense.", "The experiment section needs significant improvement, especially when there is space left.", "The authors can consider improving the paper based on the above drawbacks.", "I encourage the authors to re-submit the paper once it*s improved." ]
{ "criticism": 4, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 9, "praise": 6, "presentation_and_reporting": 2, "results_and_discussion": 0, "suggestion_and_solution": 6, "total": 13 }
0.307692
0
0.230769
0.692308
0.461538
0.153846
0
0.461538
13
1
0
1
1
1
1
0
1
2.307692
2.307692
0
Learning to Move with Affordance Maps
The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent, from household robotic vacuums to autonomous vehicles. Traditional SLAM-based approaches for exploration and navigation largely focus on leveraging scene geometry, but fail to model dynamic objects (such as other agents) or semantic constraints (such as wet floors or doorways). Learning-based RL agents are an attractive alternative because they can incorporate both semantic and geometric information, but are notoriously sample inefficient, difficult to generalize to novel settings, and are difficult to interpret. In this paper, we combine the best of both worlds with a modular approach that {\em learns} a spatial representation of a scene that is trained to be effective when coupled with traditional geometric planners. Specifically, we design an agent that learns to predict a spatial affordance map that elucidates what parts of a scene are navigable through active self-supervised experience gathering. In contrast to most simulation environments that assume a static world, we evaluate our approach in the VizDoom simulator, using large-scale randomly-generated maps containing a variety of dynamic actors and hazards. We show that learned affordance maps can be used to augment traditional approaches for both exploration and navigation, providing significant improvements in performance.
This paper presents an approach for navigating and exploring in environments with dynamic and environmental hazards that combines geometric and semantic affordance information in a map used for path planning. Overall this paper is fairly well written. Results in a VizDoom testbed show favorable performance compared to both frontier and RL baselines, and the author's approach is more sample-efficient and generalizable than RL-based approaches. I wouldn't consider any particular aspect of this paper to be that novel, but it is a nice combination of leveraging active self-supervised learning to generate spatial affordance information for fusion with a geometric planner. As humans show the best performance on the tasks, it might be worth considering learning a policy from human demonstrations through an imitation learning approach.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning to Move with Affordance Maps\nThe ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent, from household robotic vacuums to autonomous vehicles. Traditional SLAM-based approaches for exploration and navigation largely focus on leveraging scene geometry, but fail to model dynamic objects (such as other agents) or semantic constraints (such as wet floors or doorways). Learning-based RL agents are an attractive alternative because they can incorporate both semantic and geometric information, but are notoriously sample inefficient, difficult to generalize to novel settings, and are difficult to interpret. In this paper, we combine the best of both worlds with a modular approach that {em learns} a spatial representation of a scene that is trained to be effective when coupled with traditional geometric planners. Specifically, we design an agent that learns to predict a spatial affordance map that elucidates what parts of a scene are navigable through active self-supervised experience gathering. In contrast to most simulation environments that assume a static world, we evaluate our approach in the VizDoom simulator, using large-scale randomly-generated maps containing a variety of dynamic actors and hazards. We show that learned affordance maps can be used to augment traditional approaches for both exploration and navigation, providing significant improvements in performance.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents an approach for navigating and exploring in environments with dynamic and environmental hazards that combines geometric and semantic affordance information in a map used for path planning. Overall this paper is fairly well written. Results in a VizDoom testbed show favorable performance compared to both frontier and RL baselines, and the author*s approach is more sample-ef?cient and generalizable than RL-based approaches. 
I wouldn't consider any particular aspect of this paper to be that novel, but it is a nice combination of leveraging active self-supervised learning to generate spatial affordance information for fusion with a geometric planner. As humans show the best performance on the tasks, it might be worth considering learning a policy from human demonstrations through an imitation learning approach.", "role": "assistant" } ]
[ "This paper presents an approach for navigating and exploring in environments with dynamic and environmental hazards that combines geometric and semantic affordance information in a map used for path planning.", "Overall this paper is fairly well written.", "Results in a VizDoom testbed show favorable performance compared to both frontier and RL baselines, and the author*s approach is more sample-ef?cient and generalizable than RL-based approaches.", "I wouldn*t consider any particular aspect of this paper to be that novel, but it is a nice combination of leveraging active self-supervised learning to generate spatial affordance information for fusion with a geometric planner.", "As humans show the best performance on the tasks, it might be worth considering learning a policy from human demonstrations through an imitation learning approach." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 2, "praise": 4, "presentation_and_reporting": 1, "results_and_discussion": 1, "suggestion_and_solution": 1, "total": 5 }
0
0
0.4
0.4
0.8
0.2
0.2
0.2
5
0
0
1
1
1
1
1
1
2.2
1.189924
1.010076
Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks
High-performance Deep Neural Networks (DNNs) are increasingly deployed in many real-world applications e.g., cloud prediction APIs. Recent advances in model functionality stealing attacks via black-box access (i.e., inputs in, predictions out) threaten the business model of such applications, which require a lot of time, money, and effort to develop. Existing defenses take a passive role against stealing attacks, such as by truncating predicted information. We find such passive defenses ineffective against DNN stealing attacks. In this paper, we propose the first defense which actively perturbs predictions targeted at poisoning the training objective of the attacker. We find our defense effective across a wide range of challenging datasets and DNN model stealing attacks, and additionally outperforms existing defenses. Our defense is the first that can withstand highly accurate model stealing attacks for tens of thousands of queries, amplifying the attacker's error rate up to a factor of 85 with minimal impact on the utility for benign users.
This paper aims at defending against model stealing attacks by perturbing the posterior prediction of a protected DNN with a balanced goal of maintaining accuracy and maximizing misleading gradient deviation. The maximizing angular deviation formulation makes sense and seemingly correct. The heuristic solver toward this objective is shown to be relatively effective in the experiments. While the theoretical novelty of the method is limited, the application in adversarial settings may be useful to advance of this research field, especially when it is relatively easy to apply by practitioners.I recommend toward acceptance of this paper even though can be convinced otherwise by better field experts.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPrediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks\nHigh-performance Deep Neural Networks (DNNs) are increasingly deployed in many real-world applications e.g., cloud prediction APIs. Recent advances in model functionality stealing attacks via black-box access (i.e., inputs in, predictions out) threaten the business model of such applications, which require a lot of time, money, and effort to develop. Existing defenses take a passive role against stealing attacks, such as by truncating predicted information. We find such passive defenses ineffective against DNN stealing attacks. In this paper, we propose the first defense which actively perturbs predictions targeted at poisoning the training objective of the attacker. We find our defense effective across a wide range of challenging datasets and DNN model stealing attacks, and additionally outperforms existing defenses. Our defense is the first that can withstand highly accurate model stealing attacks for tens of thousands of queries, amplifying the attacker*s error rate up to a factor of 85 with minimal impact on the utility for benign users.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper aims at defending against model stealing attacks by perturbing the posterior prediction of a protected DNN with a balanced goal of maintaining accuracy and maximizing misleading gradient deviation. The maximizing angular deviation formulation makes sense and seemingly correct. The heuristic solver toward this objective is shown to be relatively effective in the experiments. While the theoretical novelty of the method is limited, the application in adversarial settings may be useful to advance of this research field, especially when it is relatively easy to apply by practitioners.I recommend toward acceptance of this paper even though can be convinced otherwise by better field experts.", "role": "assistant" } ]
[ "This paper aims at defending against model stealing attacks by perturbing the posterior prediction of a protected DNN with a balanced goal of maintaining accuracy and maximizing misleading gradient deviation.", "The maximizing angular deviation formulation makes sense and seemingly correct.", "The heuristic solver toward this objective is shown to be relatively effective in the experiments.", "While the theoretical novelty of the method is limited, the application in adversarial settings may be useful to advance of this research field, especially when it is relatively easy to apply by practitioners.I recommend toward acceptance of this paper even though can be convinced otherwise by better field experts." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 4, "praise": 2, "presentation_and_reporting": 0, "results_and_discussion": 3, "suggestion_and_solution": 1, "total": 4 }
0
0
0.25
1
0.5
0
0.75
0.25
4
0
0
1
1
1
0
1
1
2.75
1.471623
1.278377
Domain Adaptive Multiflow Networks
We tackle unsupervised domain adaptation by accounting for the fact that different domains may need to be processed differently to arrive to a common feature representation effective for recognition. To this end, we introduce a deep learning framework where each domain undergoes a different sequence of operations, allowing some, possibly more complex, domains to go through more computations than others. This contrasts with state-of-the-art domain adaptation techniques that force all domains to be processed with the same series of operations, even when using multi-stream architectures whose parameters are not shared. As evidenced by our experiments, the greater flexibility of our method translates to higher accuracy. Furthermore, it allows us to handle any number of domains simultaneously.
The paper provides an unsupervised domain adaptation approach in the context of deep learning. The motivation is clear, related work sufficient and experimental settings and results convincing. I have only very minor comments: - I would prefer to get the paper additionally linked to a few more transfer learning techniques out of the deep learning domain which is important as well - do you really need to call it (multi) flow network .... - a flow network is a well established concept in algorithmics and refers to a graph problem ... to avoid name clashes ... - in the references you have provided back links to the pages where the references are used - this is handy but also confusing and a bit unusual - I think it was not part of the standard template - please avoid using arxiv references but replace them by reviewed material. In parts I am willing to accept such kind of gray literature provided by well known authors but this should not become a standard habit - I am happy to see that the code will be published - I hope this is really done, because from the material it maybe hard to reconstruct the method
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDomain Adaptive Multiflow Networks\nWe tackle unsupervised domain adaptation by accounting for the fact that different domains may need to be processed differently to arrive to a common feature representation effective for recognition. To this end, we introduce a deep learning framework where each domain undergoes a different sequence of operations, allowing some, possibly more complex, domains to go through more computations than others. This contrasts with state-of-the-art domain adaptation techniques that force all domains to be processed with the same series of operations, even when using multi-stream architectures whose parameters are not shared. As evidenced by our experiments, the greater flexibility of our method translates to higher accuracy. Furthermore, it allows us to handle any number of domains simultaneously.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper provides an unsupervised domain adaptation approach in the context of deep learning. The motivation is clear, related work sufficient and experimental settings and results convincing. I have only very minor comments: - I would prefer to get the paper additionally linked to a few more transfer learning techniques out of the deep learning domain which is important as well - do you really need to call it (multi) flow network .... - a flow network is a well established concept in algorithmics and refers to a graph problem ... to avoid name clashes ... - in the references you have provided back links to the pages where the references are used - this is handy but also confusing and a bit unusual - I think it was not part of the standard template - please avoid using arxiv references but replace them by reviewed material. In parts I am willing to accept such kind of gray literature provided by well known authors but this should not become a standard habit - I am happy to see that the code will be published - I hope this is really done, because from the material it maybe hard to reconstruct the method", "role": "assistant" } ]
[ "The paper provides an unsupervised domain adaptation approach in the context of deep learning.", "The motivation is clear, related work sufficient and experimental settings and results convincing.", "I have only very minor comments: - I would prefer to get the paper additionally linked to a few more transfer learning techniques out of the deep learning domain which is important as well - do you really need to call it (multi) flow network .... - a flow network is a well established concept in algorithmics and refers to a graph problem ... to avoid name clashes ... - in the references you have provided back links to the pages where the references are used - this is handy but also confusing and a bit unusual - I think it was not part of the standard template - please avoid using arxiv references but replace them by reviewed material.", "In parts I am willing to accept such kind of gray literature provided by well known authors but this should not become a standard habit - I am happy to see that the code will be published - I hope this is really done, because from the material it maybe hard to reconstruct the method" ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 3, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 1, "suggestion_and_solution": 2, "total": 4 }
0.25
0
0.25
0.75
0.5
0.25
0.25
0.5
4
1
0
1
1
1
1
1
1
2.75
1.471623
1.278377
Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning
Uncertainty estimation and ensembling methods go hand-in-hand. Uncertainty estimation is one of the main benchmarks for assessment of ensembling performance. At the same time, deep learning ensembles have provided state-of-the-art results in uncertainty estimation. In this work, we focus on in-domain uncertainty for image classification. We explore the standards for its quantification and point out pitfalls of existing metrics. Avoiding these pitfalls, we perform a broad study of different ensembling techniques. To provide more insight in the broad comparison, we introduce the deep ensemble equivalent (DEE) and show that many sophisticated ensembling techniques are equivalent to an ensemble of very few independently trained networks in terms of the test log-likelihood.
Summary: This paper mainly concerns the quality of in-domain uncertainty for image classification. After exploring common standards for uncertainty quantification, the authors point out pitfalls of existing metrics by investigating different ensembling techniques and introduce a novel metric called deep ensemble equivalent (DEE) that essentially measures the number of independent models in an ensemble of DNNs. Based on the DEE score, a detailed evaluation of modern DNN ensembles is performed on CIFAR-10/100 and ImageNet datasets. Strengths: The paper is well written and easy to follow. The relationship to previous works is also well described. Overall, I think this is a good paper, which gives a detailed overview of existing metrics for accessing the quality in in-domain uncertainty estimation. The idea behind the proposed DEE score is nice and simple, clearly showing the quality of different ensembling methods (in Fig. 3). Given the importance of uncertainty analysis to deep learning, I believe this work will have a positive impact on the community. Weaknesses: - On page 6, the authors mention that the prior in Eq. 3 is taken to be a Gaussian N(mu, diag(sigma^2)) for Bayesian neural networks, however, many other choices of a prior distribution are available in the literature. What is the impact of changing prior distributions on the quality of uncertainty estimates in the case of variational inference? - Data augmentation is commonly used for improving model performance. However, I find the results presented in Sect 4.3 are not clear enough, note that for a given ensembling method in Table 1, the negative calibrated log-likelihood may increase or decrease when using different networks (VGG, ResNet, etc.). I think it would be interesting to elaborate a bit more on the influence of model complexity. - On page 15, in Eq. 12, the choice of the variance parameter sigma_p^2=1/(N*wd) seems unclear and should be better explained. Minor comments: The size of some figures appears too small, for example Fig. 4 and Fig. 5, which may hinder readability.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning\nUncertainty estimation and ensembling methods go hand-in-hand. Uncertainty estimation is one of the main benchmarks for assessment of ensembling performance. At the same time, deep learning ensembles have provided state-of-the-art results in uncertainty estimation. In this work, we focus on in-domain uncertainty for image classification. We explore the standards for its quantification and point out pitfalls of existing metrics. Avoiding these pitfalls, we perform a broad study of different ensembling techniques. To provide more insight in the broad comparison, we introduce the deep ensemble equivalent (DEE) and show that many sophisticated ensembling techniques are equivalent to an ensemble of very few independently trained networks in terms of the test log-likelihood.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nSummary: This paper mainly concerns the quality of in-domain uncertainty for image classification. After exploring common standards for uncertainty quantification, the authors point out pitfalls of existing metrics by investigating different ensembling techniques and introduce a novel metric called deep ensemble equivalent (DEE) that essentially measures the number of independent models in an ensemble of DNNs. Based on the DEE score, a detailed evaluation of modern DNN ensembles is performed on CIFAR-10/100 and ImageNet datasets. Strengths: The paper is well written and easy to follow. The relationship to previous works is also well described. Overall, I think this is a good paper, which gives a detailed overview of existing metrics for accessing the quality in in-domain uncertainty estimation. The idea behind the proposed DEE score is nice and simple, clearly showing the quality of different ensembling methods (in Fig. 3). Given the importance of uncertainty analysis to deep learning, I believe this work will have a positive impact on the community. 
Weaknesses: - On page 6, the authors mention that the prior in Eq. 3 is taken to be a Gaussian N(mu, diag(sigma^2)) for Bayesian neural networks, however, many other choices of a prior distribution are available in the literature. What is the impact of changing prior distributions on the quality of uncertainty estimates in the case of variational inference? - Data augmentation is commonly used for improving model performance. However, I find the results presented in Sect 4.3 are not clear enough, note that for a given ensembling method in Table 1, the negative calibrated log-likelihood may increase or decrease when using different networks (VGG, ResNet, etc.). I think it would be interesting to elaborate a bit more on the influence of model complexity. - On page 15, in Eq. 12, the choice of the variance parameter sigma_p^2=1/(N*wd) seems unclear and should be better explained. Minor comments: The size of some figures appears too small, for example Fig. 4 and Fig. 5, which may hinder readability.", "role": "assistant" } ]
[ "Summary: This paper mainly concerns the quality of in-domain uncertainty for image classification.", "After exploring common standards for uncertainty quantification, the authors point out pitfalls of existing metrics by investigating different ensembling techniques and introduce a novel metric called deep ensemble equivalent (DEE) that essentially measures the number of independent models in an ensemble of DNNs.", "Based on the DEE score, a detailed evaluation of modern DNN ensembles is performed on CIFAR-10/100 and ImageNet datasets.", "Strengths: The paper is well written and easy to follow.", "The relationship to previous works is also well described.", "Overall, I think this is a good paper, which gives a detailed overview of existing metrics for accessing the quality in in-domain uncertainty estimation.", "The idea behind the proposed DEE score is nice and simple, clearly showing the quality of different ensembling methods (in Fig.", "3).", "Given the importance of uncertainty analysis to deep learning, I believe this work will have a positive impact on the community.", "Weaknesses: - On page 6, the authors mention that the prior in Eq.", "3 is taken to be a Gaussian N(mu, diag(sigma^2)) for Bayesian neural networks, however, many other choices of a prior distribution are available in the literature.", "What is the impact of changing prior distributions on the quality of uncertainty estimates in the case of variational inference?", "- Data augmentation is commonly used for improving model performance.", "However, I find the results presented in Sect 4.3 are not clear enough, note that for a given ensembling method in Table 1, the negative calibrated log-likelihood may increase or decrease when using different networks (VGG, ResNet, etc.).", "I think it would be interesting to elaborate a bit more on the influence of model complexity.", "- On page 15, in Eq.", "12, the choice of the variance parameter sigma_p^2=1/(N*wd) seems unclear and should be better explained.", "Minor comments: The size of some figures appears too small, for example Fig.", "4 and Fig.", "5, which may hinder readability." ]
{ "criticism": 4, "example": 5, "importance_and_relevance": 2, "materials_and_methods": 12, "praise": 5, "presentation_and_reporting": 8, "results_and_discussion": 4, "suggestion_and_solution": 3, "total": 20 }
0.2
0.25
0.1
0.6
0.25
0.4
0.2
0.15
20
1
1
1
1
1
1
1
1
2.15
1.376661
0.773339
Episodic Reinforcement Learning with Associative Memory
Sample efficiency has been one of the major challenges for deep reinforcement learning. Non-parametric episodic control has been proposed to speed up parametric reinforcement learning by rapidly latching on previously successful policies. However, previous work on episodic reinforcement learning neglects the relationship between states and only stored the experiences as unrelated items. To improve sample efficiency of reinforcement learning, we propose a novel framework, called Episodic Reinforcement Learning with Associative Memory (ERLAM), which associates related experience trajectories to enable reasoning effective strategies. We build a graph on top of states in memory based on state transitions and develop an efficient reverse-trajectory propagation strategy to allow rapid value propagation through the graph. We use the non-parametric associative memory as early guidance for a parametric reinforcement learning model. Results on Atari games show that our framework has significantly higher sample efficiency and outperforms state-of-the-art episodic reinforcement learning models.
The paper proposes a new method for organizing episodic memory in with deep Q-networks. It organizes the memory as a graph in which nodes are internal representations of observed states and edges link state transitions. Additionally, nodes from different episodes are merged into a single node if they represent the same state, allowing for inter-episode value propagation of stored rewards. The authors experimentally evaluate the method against reasonable baselines and show improved performance in the majority of tasks. While the proposed method seemingly leads to better performance, the analysis of the results appears superficial. First, it lacks theoretical rigor to explain why the proposed propagation mechanism works, for instance, a proof of the optimal substructure in Eq. (3) would be helpful. While I see the authors mentioned previous work which do not have optimal substructures and mention this for the case of navigation-like tasks, the matter is not discussed and unclear in other scenarios. This in itself would not be much of an issue if the experiments highlighted advantages and limitations of the proposed method, but that is not the case. For instance, per-task comparisons to EVA (mentioned in the paper) could indicate how useful inter-episode value propagation was in each task. Table 1 mentions EVA, but only on aggregated results, thus not providing insight into this. Furthermore, the experiments selected to be detailed in Figure 4 are, in my opinion, suboptimal choices, as the worst-performing of the selected tasks is still better than the baselines. It would be more insightful to also compare those whose performance is close to the baselines, such as the Boxing environment, and significantly worse than baselines, such as the StarGunner environment (as shown in Figure 3). With those issues in mind, my conclusion is to recommend to reject the paper in the current state. Minor comments: - Thoroughly review the writing and grammar, the text in its current state needs significant improvement in this regard - Equation references missing parentheses - Introduction, 1st paragraph, second sentence is incomplete - Algorithm 2: tuples taken from/appended to sets G and D are not consistent (cf. lines 11 and 13) - Figure 3: The large amount of bars in the plot would benefit from horizontal lines across the plot for each tick on the y axis - Table 1: caption before the table; should also state the meaning of numbers (percentage or normalized score, calculated from what?) - Figure 4: labels (all text except plot titles) are impossible to read in print - Section 3 could be placed before Section 2, laying the mathematical framework, and then following the discussion with related work - There is redundant content in Sections 2 and 3 ---- I am happy with the authors response and changed the score accordingly
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEpisodic Reinforcement Learning with Associative Memory\nSample efficiency has been one of the major challenges for deep reinforcement learning. Non-parametric episodic control has been proposed to speed up parametric reinforcement learning by rapidly latching on previously successful policies. However, previous work on episodic reinforcement learning neglects the relationship between states and only stored the experiences as unrelated items. To improve sample efficiency of reinforcement learning, we propose a novel framework, called Episodic Reinforcement Learning with Associative Memory (ERLAM), which associates related experience trajectories to enable reasoning effective strategies. We build a graph on top of states in memory based on state transitions and develop an efficient reverse-trajectory propagation strategy to allow rapid value propagation through the graph. We use the non-parametric associative memory as early guidance for a parametric reinforcement learning model. Results on Atari games show that our framework has significantly higher sample efficiency and outperforms state-of-the-art episodic reinforcement learning models.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes a new method for organizing episodic memory in with deep Q-networks. It organizes the memory as a graph in which nodes are internal representations of observed states and edges link state transitions. Additionally, nodes from different episodes are merged into a single node if they represent the same state, allowing for inter-episode value propagation of stored rewards. The authors experimentally evaluate the method against reasonable baselines and show improved performance in the majority of tasks. While the proposed method seemingly leads to better performance, the analysis of the results appears superficial. First, it lacks theoretical rigor to explain why the proposed propagation mechanism works, for instance, a proof of the optimal substructure in Eq. (3) would be helpful. 
While I see the authors mentioned previous work which do not have optimal substructures and mention this for the case of navigation-like tasks, the matter is not discussed and unclear in other scenarios. This in itself would not be much of an issue if the experiments highlighted advantages and limitations of the proposed method, but that is not the case. For instance, per-task comparisons to EVA (mentioned in the paper) could indicate how useful inter-episode value propagation was in each task. Table 1 mentions EVA, but only on aggregated results, thus not providing insight into this. Furthermore, the experiments selected to be detailed in Figure 4 are, in my opinion, suboptimal choices, as the worst-performing of the selected tasks is still better than the baselines. It would be more insightful to also compare those whose performance is close to the baselines, such as the Boxing environment, and significantly worse than baselines, such as the StarGunner environment (as shown in Figure 3). With those issues in mind, my conclusion is to recommend to reject the paper in the current state. Minor comments: - Thoroughly review the writing and grammar, the text in its current state needs significant improvement in this regard - Equation references missing parentheses - Introduction, 1st paragraph, second sentence is incomplete - Algorithm 2: tuples taken from/appended to sets G and D are not consistent (cf. lines 11 and 13) - Figure 3: The large amount of bars in the plot would benefit from horizontal lines across the plot for each tick on the y axis - Table 1: caption before the table; should also state the meaning of numbers (percentage or normalized score, calculated from what?) - Figure 4: labels (all text except plot titles) are impossible to read in print - Section 3 could be placed before Section 2, laying the mathematical framework, and then following the discussion with related work - There is redundant content in Sections 2 and 3 ---- I am happy with the authors response and changed the score accordingly", "role": "assistant" } ]
[ "The paper proposes a new method for organizing episodic memory in with deep Q-networks.", "It organizes the memory as a graph in which nodes are internal representations of observed states and edges link state transitions.", "Additionally, nodes from different episodes are merged into a single node if they represent the same state, allowing for inter-episode value propagation of stored rewards.", "The authors experimentally evaluate the method against reasonable baselines and show improved performance in the majority of tasks.", "While the proposed method seemingly leads to better performance, the analysis of the results appears superficial.", "First, it lacks theoretical rigor to explain why the proposed propagation mechanism works, for instance, a proof of the optimal substructure in Eq.", "(3) would be helpful.", "While I see the authors mentioned previous work which do not have optimal substructures and mention this for the case of navigation-like tasks, the matter is not discussed and unclear in other scenarios.", "This in itself would not be much of an issue if the experiments highlighted advantages and limitations of the proposed method, but that is not the case.", "For instance, per-task comparisons to EVA (mentioned in the paper) could indicate how useful inter-episode value propagation was in each task.", "Table 1 mentions EVA, but only on aggregated results, thus not providing insight into this.", "Furthermore, the experiments selected to be detailed in Figure 4 are, in my opinion, suboptimal choices, as the worst-performing of the selected tasks is still better than the baselines.", "It would be more insightful to also compare those whose performance is close to the baselines, such as the Boxing environment, and significantly worse than baselines, such as the StarGunner environment (as shown in Figure 3).", "With those issues in mind, my conclusion is to recommend to reject the paper in the current state.", "Minor comments: - Thoroughly review the writing and grammar, the text in its current state needs significant improvement in this regard - Equation references missing parentheses - Introduction, 1st paragraph, second sentence is incomplete - Algorithm 2: tuples taken from/appended to sets G and D are not consistent (cf.", "lines 11 and 13) - Figure 3: The large amount of bars in the plot would benefit from horizontal lines across the plot for each tick on the y axis - Table 1: caption before the table; should also state the meaning of numbers (percentage or normalized score, calculated from what?)", "- Figure 4: labels (all text except plot titles) are impossible to read in print - Section 3 could be placed before Section 2, laying the mathematical framework, and then following the discussion with related work - There is redundant content in Sections 2 and 3 ---- I am happy with the authors response and changed the score accordingly" ]
{ "criticism": 6, "example": 4, "importance_and_relevance": 1, "materials_and_methods": 9, "praise": 0, "presentation_and_reporting": 4, "results_and_discussion": 6, "suggestion_and_solution": 7, "total": 17 }
0.352941
0.235294
0.058824
0.529412
0
0.235294
0.352941
0.411765
17
1
1
1
1
0
1
1
1
2.176471
1.923952
0.252519
Deep Orientation Uncertainty Learning based on a Bingham Loss
Reasoning about uncertain orientations is one of the core problems in many perception tasks such as object pose estimation or motion estimation. In these scenarios, poor illumination conditions, sensor limitations, or appearance invariance may result in highly uncertain estimates. In this work, we propose a novel learning based representation for orientation uncertainty. Characterizing uncertainty over unit quaternions with the Bingham distribution allows us to formulate a loss that naturally captures the antipodal symmetry of the representation. We discuss the interpretability of the learned distribution parameters and demonstrate the feasibility of our approach on several challenging real-world pose estimation tasks involving uncertain orientations.
The paper proposes a Brigham loss (based on the Brigham distribution) to model the uncertainty of orientations (an important factor for pose estimation and other tasks). This distribution has the necessary characteristics required to represent orientation uncertainty using quaternions (one way to represent object orientation in 3D) such as antipodal symmetry. The authors propose various additions such as using precomputed lookup tables to represent a simplified version of the normalization constant (to make it computationally tractable), and the use of Expected Absolute Angular Deviation (EAAD) to make the uncertainty of the Bingham distribution more interpretable. +Uncertainty quantification of neural networks is an important problem that I believe should gain more attention so I am happy to see papers such as this one. +Various experiments on multiple datasets show the efficacy of the method as well as out performing or showing comparable results to state-of-the-art -In the caption for Table 1 the author’s write: “the high likelihood and lower difference between EAAD and MAAD indicate that the Bingham loss better captures the underlying noise.” How much difference between EAAD and MAAD is considered significant and why? -In section 4.5 they write “While Von Mises performs better on the MAAD, we observe that there is a larger difference between the MAAD and EAAD values for the Von Mises distribution than the Bingham distribution. This indicates that the uncertainty estimates of the Von Mises distribution may be overconfident.” Same question as above. What amount of difference between MAAD and EAAD is considered significant and why?
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDeep Orientation Uncertainty Learning based on a Bingham Loss\nReasoning about uncertain orientations is one of the core problems in many perception tasks such as object pose estimation or motion estimation. In these scenarios, poor illumination conditions, sensor limitations, or appearance invariance may result in highly uncertain estimates. In this work, we propose a novel learning based representation for orientation uncertainty. Characterizing uncertainty over unit quaternions with the Bingham distribution allows us to formulate a loss that naturally captures the antipodal symmetry of the representation. We discuss the interpretability of the learned distribution parameters and demonstrate the feasibility of our approach on several challenging real-world pose estimation tasks involving uncertain orientations.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes a Brigham loss (based on the Brigham distribution) to model the uncertainty of orientations (an important factor for pose estimation and other tasks). This distribution has the necessary characteristics required to represent orientation uncertainty using quaternions (one way to represent object orientation in 3D) such as antipodal symmetry. The authors propose various additions such as using precomputed lookup tables to represent a simplified version of the normalization constant (to make it computationally tractable), and the use of Expected Absolute Angular Deviation (EAAD) to make the uncertainty of the Bingham distribution more interpretable. +Uncertainty quantification of neural networks is an important problem that I believe should gain more attention so I am happy to see papers such as this one. 
+Various experiments on multiple datasets show the efficacy of the method as well as out performing or showing comparable results to state-of-the-art -In the caption for Table 1 the author’s write: “the high likelihood and lower difference between EAAD and MAAD indicate that the Bingham loss better captures the underlying noise.” How much difference between EAAD and MAAD is considered significant and why? -In section 4.5 they write “While Von Mises performs better on the MAAD, we observe that there is a larger difference between the MAAD and EAAD values for the Von Mises distribution than the Bingham distribution. This indicates that the uncertainty estimates of the Von Mises distribution may be overconfident.” Same question as above. What amount of difference between MAAD and EAAD is considered significant and why?", "role": "assistant" } ]
[ "The paper proposes a Brigham loss (based on the Brigham distribution) to model the uncertainty of orientations (an important factor for pose estimation and other tasks).", "This distribution has the necessary characteristics required to represent orientation uncertainty using quaternions (one way to represent object orientation in 3D) such as antipodal symmetry.", "The authors propose various additions such as using precomputed lookup tables to represent a simplified version of the normalization constant (to make it computationally tractable), and the use of Expected Absolute Angular Deviation (EAAD) to make the uncertainty of the Bingham distribution more interpretable.", "+Uncertainty quantification of neural networks is an important problem that I believe should gain more attention so I am happy to see papers such as this one.", "+Various experiments on multiple datasets show the efficacy of the method as well as out performing or showing comparable results to state-of-the-art -In the caption for Table 1 the author’s write: “the high likelihood and lower difference between EAAD and MAAD indicate that the Bingham loss better captures the underlying noise.” How much difference between EAAD and MAAD is considered significant and why?", "-In section 4.5 they write “While Von Mises performs better on the MAAD, we observe that there is a larger difference between the MAAD and EAAD values for the Von Mises distribution than the Bingham distribution.", "This indicates that the uncertainty estimates of the Von Mises distribution may be overconfident.” Same question as above.", "What amount of difference between MAAD and EAAD is considered significant and why?" ]
{ "criticism": 0, "example": 1, "importance_and_relevance": 2, "materials_and_methods": 6, "praise": 1, "presentation_and_reporting": 0, "results_and_discussion": 5, "suggestion_and_solution": 2, "total": 8 }
0
0.125
0.25
0.75
0.125
0
0.625
0.25
8
0
1
1
1
1
0
1
1
2.125
1.730439
0.394561
Information Geometry of Orthogonal Initializations and Training
Recently mean field theory has been successfully used to analyze properties of wide, random neural networks. It gave rise to a prescriptive theory for initializing feed-forward neural networks with orthogonal weights, which ensures that both the forward propagated activations and the backpropagated gradients are near `2 isometries and as a consequence training is orders of magnitude faster. Despite strong empirical performance, the mechanisms by which critical initializations confer an advantage in the optimization of deep neural networks are poorly understood. Here we show a novel connection between the maximum curvature of the optimization landscape (gradient smoothness) as measured by the Fisher information matrix (FIM) and the spectral radius of the input-output Jacobian, which partially explains why more isometric networks can train much faster. Furthermore, given that orthogonal weights are necessary to ensure that gradient norms are approximately preserved at initialization, we experimentally investigate the benefits of maintaining orthogonality throughout training, and we conclude that manifold optimization of weights performs well regardless of the smoothness of the gradients. Moreover, we observe a surprising yet robust behavior of highly isometric initializations— even though such networks have a lower FIM condition number at initialization, and therefore by analogy to convex functions should be easier to optimize, experimentally they prove to be much harder to train with stochastic gradient descent. We propose an explanation for this phenomenon by exploting connections between Fisher geometry and the recently introduced Neural Tangent Kernel.
This paper analyzes the connection between the spectrum of the layer-to-layer or input-output Jacobian matrices and the spectrum of the Fisher information matrix / Neural Tangent Kernel. By bounding the maximum eigenvalue of the Fisher in terms of the maximum squared singular value of the input-output Jacobian, this paper provides a partial explanation for the successful initialization procedures obeying *dynamical isometry*. By additionally investigating optimization on the orthogonal weight manifold, this paper sheds light on the important of maintaining spectral uniformity throughout training. These two analyses help fill in important gaps in the understanding of initialization, dynamical isometry, and the training of deep neural networks. For these reasons I recommend this paper for acceptance. There are two aspects of the paper that could nevertheless use some clarification and improvement. First, unless I missed something, this paper does not provide any bounds on the condition number or the minimum eigenvalue of the Fisher or the NTK. It seems like the main arguments only depend on the maximum eigenvalues. Generally, I think the insights into the maximum eigenvalues are useful and important on their own, but perhaps some additional discussion up front clarifying which results were derived theoretically and which were observed empirically could be useful. Second, it should be noted that the the networks trained in the experiments are likely in a regime that is well outside the NTK regime, in two important ways: the dataset is large compared to the width and the optimal learning rates may be large as well. Overall, I think this is a good paper that adds important insights into the study of initialization, local geometry, and their effects on training speed.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nInformation Geometry of Orthogonal Initializations and Training\nRecently mean field theory has been successfully used to analyze properties of wide, random neural networks. It gave rise to a prescriptive theory for initializing feed-forward neural networks with orthogonal weights, which ensures that both the forward propagated activations and the backpropagated gradients are near `2 isometries and as a consequence training is orders of magnitude faster. Despite strong empirical performance, the mechanisms by which critical initializations confer an advantage in the optimization of deep neural networks are poorly understood. Here we show a novel connection between the maximum curvature of the optimization landscape (gradient smoothness) as measured by the Fisher information matrix (FIM) and the spectral radius of the input-output Jacobian, which partially explains why more isometric networks can train much faster. Furthermore, given that orthogonal weights are necessary to ensure that gradient norms are approximately preserved at initialization, we experimentally investigate the benefits of maintaining orthogonality throughout training, and we conclude that manifold optimization of weights performs well regardless of the smoothness of the gradients. Moreover, we observe a surprising yet robust behavior of highly isometric initializations— even though such networks have a lower FIM condition number at initialization, and therefore by analogy to convex functions should be easier to optimize, experimentally they prove to be much harder to train with stochastic gradient descent. We propose an explanation for this phenomenon by exploting connections between Fisher geometry and the recently introduced Neural Tangent Kernel.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper analyzes the connection between the spectrum of the layer-to-layer or input-output Jacobian matrices and the spectrum of the Fisher information matrix / Neural Tangent Kernel. 
By bounding the maximum eigenvalue of the Fisher in terms of the maximum squared singular value of the input-output Jacobian, this paper provides a partial explanation for the successful initialization procedures obeying *dynamical isometry*. By additionally investigating optimization on the orthogonal weight manifold, this paper sheds light on the important of maintaining spectral uniformity throughout training. These two analyses help fill in important gaps in the understanding of initialization, dynamical isometry, and the training of deep neural networks. For these reasons I recommend this paper for acceptance. There are two aspects of the paper that could nevertheless use some clarification and improvement. First, unless I missed something, this paper does not provide any bounds on the condition number or the minimum eigenvalue of the Fisher or the NTK. It seems like the main arguments only depend on the maximum eigenvalues. Generally, I think the insights into the maximum eigenvalues are useful and important on their own, but perhaps some additional discussion up front clarifying which results were derived theoretically and which were observed empirically could be useful. Second, it should be noted that the the networks trained in the experiments are likely in a regime that is well outside the NTK regime, in two important ways: the dataset is large compared to the width and the optimal learning rates may be large as well. Overall, I think this is a good paper that adds important insights into the study of initialization, local geometry, and their effects on training speed.", "role": "assistant" } ]
[ "This paper analyzes the connection between the spectrum of the layer-to-layer or input-output Jacobian matrices and the spectrum of the Fisher information matrix / Neural Tangent Kernel.", "By bounding the maximum eigenvalue of the Fisher in terms of the maximum squared singular value of the input-output Jacobian, this paper provides a partial explanation for the successful initialization procedures obeying *dynamical isometry*.", "By additionally investigating optimization on the orthogonal weight manifold, this paper sheds light on the important of maintaining spectral uniformity throughout training.", "These two analyses help fill in important gaps in the understanding of initialization, dynamical isometry, and the training of deep neural networks.", "For these reasons I recommend this paper for acceptance.", "There are two aspects of the paper that could nevertheless use some clarification and improvement.", "First, unless I missed something, this paper does not provide any bounds on the condition number or the minimum eigenvalue of the Fisher or the NTK.", "It seems like the main arguments only depend on the maximum eigenvalues.", "Generally, I think the insights into the maximum eigenvalues are useful and important on their own, but perhaps some additional discussion up front clarifying which results were derived theoretically and which were observed empirically could be useful.", "Second, it should be noted that the the networks trained in the experiments are likely in a regime that is well outside the NTK regime, in two important ways: the dataset is large compared to the width and the optimal learning rates may be large as well.", "Overall, I think this is a good paper that adds important insights into the study of initialization, local geometry, and their effects on training speed." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 4, "materials_and_methods": 8, "praise": 4, "presentation_and_reporting": 1, "results_and_discussion": 4, "suggestion_and_solution": 4, "total": 11 }
0.090909
0
0.363636
0.727273
0.363636
0.090909
0.363636
0.363636
11
1
0
1
1
1
1
1
1
2.363636
2.300507
0.06313
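Note on the scalar fields above (and in the records that follow): they are not documented in this dump, but their values are consistent with a simple derivation from the criteria_count block: each of the eight floats equals the corresponding category count divided by total; the has_* flags are 1 whenever the count is positive; reward_value equals the sum of the eight fractions; and length_penalty is reward_value minus reward_value_length_adjusted. The Python sketch below reconstructs that arithmetic. It is an inference from the numbers, not the dataset's actual pipeline; the function name and schema assumptions are hypothetical, and the length penalty's own formula (presumably a function of review length) is not recoverable from a single record.

# Minimal sketch (not from the source) of how the scalar fields after each
# criteria_count block appear to be derived; names and schema are assumptions.

CATEGORIES = [
    "criticism", "example", "importance_and_relevance", "materials_and_methods",
    "praise", "presentation_and_reporting", "results_and_discussion",
    "suggestion_and_solution",
]

def derive_scalar_fields(criteria_count, reward_value_length_adjusted):
    """Reproduce the per-record scalars from a criteria_count dict (assumed schema)."""
    total = criteria_count["total"]
    fractions = {c: criteria_count[c] / total for c in CATEGORIES}
    has_flags = {"has_" + c: int(criteria_count[c] > 0) for c in CATEGORIES}
    reward_value = sum(fractions.values())
    # Only the identity below is recoverable from a record; how the penalty is
    # computed in the first place is not shown in the dump.
    length_penalty = reward_value - reward_value_length_adjusted
    return {**fractions, "total": total, **has_flags,
            "reward_value": round(reward_value, 6),
            "reward_value_length_adjusted": reward_value_length_adjusted,
            "length_penalty": round(length_penalty, 6)}

# For the record above: 1/11 = 0.090909, 8/11 = 0.727273, and the reward is
# 26/11 = 2.363636, matching the stored values.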
Robust Subspace Recovery Layer for Unsupervised Anomaly Detection
We propose a neural network for unsupervised anomaly detection with a novel robust subspace recovery layer (RSR layer). This layer seeks to extract the underlying subspace from a latent representation of the given data and removes outliers that lie away from this subspace. It is used within an autoencoder. The encoder maps the data into a latent space, from which the RSR layer extracts the subspace. The decoder then smoothly maps back the underlying subspace to a "manifold" close to the original inliers. Inliers and outliers are distinguished according to the distances between the original and mapped positions (small for inliers and large for outliers). Extensive numerical experiments with both image and document datasets demonstrate state-of-the-art precision and recall.
This paper proposes to use the robust subspace recovery layer (RSR) in the autoencoder model for unsupervised anomaly detection. This paper is well written overall. Presentation is clear and it is easy to follow. The proposed approach is a simple combination of existing approaches. Although its theoretical analysis with respect to the performance of anomaly detection is limited, experiments show that the proposed method is effective and superior to the existing anomaly detection methods. I have the following comments: - Parameter sensitivity should be examined. The proposed method has a number of parameters, including lambda_1, lambda_2, and the parameters of the neural networks. Since parameter tuning is fundamentally difficult in the unsupervised setting, the sensitivity of the proposed method with respect to changes of such parameters should be examined. - Since efficiency is also an important issue for anomaly detection methods, a runtime comparison would be interesting. - It would also be interesting to see whether the proposed method is effective for non-structured data, where a dataset is given as just a set of (real-valued) feature vectors, with a comparison to standard anomaly detection methods such as LOF and iForest.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRobust Subspace Recovery Layer for Unsupervised Anomaly Detection\nWe propose a neural network for unsupervised anomaly detection with a novel robust subspace recovery layer (RSR layer). This layer seeks to extract the underlying subspace from a latent representation of the given data and removes outliers that lie away from this subspace. It is used within an autoencoder. The encoder maps the data into a latent space, from which the RSR layer extracts the subspace. The decoder then smoothly maps back the underlying subspace to a ``manifold* close to the original inliers. Inliers and outliers are distinguished according to the distances between the original and mapped positions (small for inliers and large for outliers). Extensive numerical experiments with both image and document datasets demonstrate state-of-the-art precision and recall.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes to use the robust subspace recovery layer (RSR) in the autoencoder model for unsupervised anomaly detection. This paper is well written overall. Presentation is clear and it is easy to follow. The proposed approach is a simple combination of existing approaches. Although its theoretical analysis with respect to the performance of anomaly detection is limited, experiments show that the proposed method is effective and superior to the existing anomaly detection methods. I have the following comments: - Parameter sensitivity should be examined. The proposed method has the number of parameters including lambda_1, lambda_2, and parameters in neural networks. Since parameter tuning is fundamentally difficult in the unsupervised setting, the sensitivity of the proposed method with respect to changes of such parameters should be examined. - Since the efficiency is also an important issue for anomaly detection methods, runtime comparison would be interesting. 
- It would be also interesting whether the proposed method is also effective for non-structured data, where a dataset is given as just a set of (real-valued) feature vectors, with its comparison to the standard anomaly detection methods such as LOF and iForest.", "role": "assistant" } ]
[ "This paper proposes to use the robust subspace recovery layer (RSR) in the autoencoder model for unsupervised anomaly detection.", "This paper is well written overall.", "Presentation is clear and it is easy to follow.", "The proposed approach is a simple combination of existing approaches.", "Although its theoretical analysis with respect to the performance of anomaly detection is limited, experiments show that the proposed method is effective and superior to the existing anomaly detection methods.", "I have the following comments: - Parameter sensitivity should be examined.", "The proposed method has the number of parameters including lambda_1, lambda_2, and parameters in neural networks.", "Since parameter tuning is fundamentally difficult in the unsupervised setting, the sensitivity of the proposed method with respect to changes of such parameters should be examined.", "- Since the efficiency is also an important issue for anomaly detection methods, runtime comparison would be interesting.", "- It would be also interesting whether the proposed method is also effective for non-structured data, where a dataset is given as just a set of (real-valued) feature vectors, with its comparison to the standard anomaly detection methods such as LOF and iForest." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 8, "praise": 4, "presentation_and_reporting": 2, "results_and_discussion": 3, "suggestion_and_solution": 4, "total": 10 }
0.1
0
0.1
0.8
0.4
0.2
0.3
0.4
10
1
0
1
1
1
1
1
1
2.3
2.157958
0.142042
Conservative Uncertainty Estimation By Fitting Prior Networks
Obtaining high-quality uncertainty estimates is essential for many applications of deep neural networks. In this paper, we theoretically justify a scheme for estimating uncertainties, based on sampling from a prior distribution. Crucially, the uncertainty estimates are shown to be conservative in the sense that they never underestimate a posterior uncertainty obtained by a hypothetical Bayesian algorithm. We also show concentration, implying that the uncertainty estimates converge to zero as we get more data. Uncertainty estimates obtained from random priors can be adapted to any deep network architecture and trained using standard supervised learning pipelines. We provide experimental evaluation of random priors on calibration and out-of-distribution detection on typical computer vision tasks, demonstrating that they outperform deep ensembles in practice.
The paper shows that the MSE of a deep network trained to match a fixed random network is a conservative estimate of uncertainty in expectation over many such network pairs. An experiment compares this previously proposed method to other approaches for uncertainty estimation on CIFAR 10. Strengths: - Obtaining uncertainty estimates for predictions of deep neural networks is an important and open research question. - Proposition 1 is an interesting result, although the paper does not seem to discuss its significance and implications enough. Weaknesses: - Proposition 1 is described as *our uncertainty estimates are never too small*. However, as the name of the proposition suggests, it seems to only hold in expectation over models trained, which is quite a different statement. - Proposition 2 seems to simply state that a small network can be distilled into a large network. Maybe I missed part of the reasoning here? Otherwise, it should probably not be highlighted as a major contribution. - The experimental evaluation is very limited, training only on CIFAR 10. While the experiments add little value to the community, this may be acceptable for a mostly theoretical paper. Clarity: - The paper was difficult to read and unclear in explanations. It could help to define notation upfront instead of introducing shorthands on the way. - In Figure 3, the Y axis limits should be fixed across seen and unseen histograms for the same method. The current presentation is a bit misleading here, as the presented method seems to have moved the most mass under this chart scaling. Comments: - As found in prior work cited in the submission, the method tends to perform well with just one network pair. This raises the question of whether the contribution of the paper that holds in expectation over many pairs and the empirical success of the approach are connected. - The marginal posterior variance sigma^2(x_star) appears in various forms with hat, tilde, and different subscripts. It may be worth assigning different letters to these to avoid confusion.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nConservative Uncertainty Estimation By Fitting Prior Networks\nObtaining high-quality uncertainty estimates is essential for many applications of deep neural networks. In this paper, we theoretically justify a scheme for estimating uncertainties, based on sampling from a prior distribution. Crucially, the uncertainty estimates are shown to be conservative in the sense that they never underestimate a posterior uncertainty obtained by a hypothetical Bayesian algorithm. We also show concentration, implying that the uncertainty estimates converge to zero as we get more data. Uncertainty estimates obtained from random priors can be adapted to any deep network architecture and trained using standard supervised learning pipelines. We provide experimental evaluation of random priors on calibration and out-of-distribution detection on typical computer vision tasks, demonstrating that they outperform deep ensembles in practice.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper shows that the MSE of a deep network trained to match fixed random network is a conservative estimate of uncertainty in expectation over many such network pairs. An experiment compares this previously proposed method to other approaches for uncertainty estimation on CIFAR 10. Strengths: - Obtaining uncertainty estimates for predictions of deep neural networks is an important and open research question. - Proposition 1 is an interesting result, although the paper does not seem to discuss its significance and implications enough. Weaknesses: - Proposition 1 is described as *our uncertainty estimates are never too small*. However, as the name of the proposition suggests, it seems to only hold in expectation over models trained, which is a quite different statement. - Proposition 2 seems to simply state that a small network can be distilled into a large network. Maybe I missed part of the reasoning here? Otherwise, it should probably not be highlighted as a major contribution. 
- The experimental evaluation is very limited, training only on CIFAR 10. While the experiments add little value to the community, this may be acceptable for a mostly theoretical paper. Clarity: - The paper was difficult to read and unclear in explanations. It could help to define notation upfront instead of introducing shorthands on the way. - In Figure 3, the Y axis limits should be fixed across seen and unseen histograms for the same method. The current presentation is a bit misleading here, as the presented method seems to have moved the most mass under this chart scaling. Comments: - As found in prior work cited in the submission, the method tends to perform well with just one network pair. This raises the question whether the contribution of the paper that holds in expectation over many pairs and the empirical success of the approach are connected. - The marginal posterior variance sigma^2(x_star) appears in various forms with hat, tilde, and different subscripts. It maybe worth assigning different letters to these to avoid confusion.", "role": "assistant" } ]
[ "The paper shows that the MSE of a deep network trained to match fixed random network is a conservative estimate of uncertainty in expectation over many such network pairs.", "An experiment compares this previously proposed method to other approaches for uncertainty estimation on CIFAR 10.", "Strengths: - Obtaining uncertainty estimates for predictions of deep neural networks is an important and open research question.", "- Proposition 1 is an interesting result, although the paper does not seem to discuss its significance and implications enough.", "Weaknesses: - Proposition 1 is described as *our uncertainty estimates are never too small*.", "However, as the name of the proposition suggests, it seems to only hold in expectation over models trained, which is a quite different statement.", "- Proposition 2 seems to simply state that a small network can be distilled into a large network.", "Maybe I missed part of the reasoning here?", "Otherwise, it should probably not be highlighted as a major contribution.", "- The experimental evaluation is very limited, training only on CIFAR 10.", "While the experiments add little value to the community, this may be acceptable for a mostly theoretical paper.", "Clarity: - The paper was difficult to read and unclear in explanations.", "It could help to define notation upfront instead of introducing shorthands on the way.", "- In Figure 3, the Y axis limits should be fixed across seen and unseen histograms for the same method.", "The current presentation is a bit misleading here, as the presented method seems to have moved the most mass under this chart scaling.", "Comments: - As found in prior work cited in the submission, the method tends to perform well with just one network pair.", "This raises the question whether the contribution of the paper that holds in expectation over many pairs and the empirical success of the approach are connected.", "- The marginal posterior variance sigma^2(x_star) appears in various forms with hat, tilde, and different subscripts.", "It maybe worth assigning different letters to these to avoid confusion." ]
{ "criticism": 5, "example": 1, "importance_and_relevance": 5, "materials_and_methods": 11, "praise": 2, "presentation_and_reporting": 6, "results_and_discussion": 7, "suggestion_and_solution": 5, "total": 19 }
0.263158
0.052632
0.263158
0.578947
0.105263
0.315789
0.368421
0.263158
19
1
1
1
1
1
1
1
1
2.210526
1.642359
0.568167
Enabling Deep Spiking Neural Networks with Hybrid Conversion and Spike Timing Dependent Backpropagation
Spiking Neural Networks (SNNs) operate with asynchronous discrete events (or spikes) which can potentially lead to higher energy-efficiency in neuromorphic hardware implementations. Many works have shown that an SNN for inference can be formed by copying the weights from a trained Artificial Neural Network (ANN) and setting the firing threshold for each layer as the maximum input received in that layer. These types of converted SNNs require a large number of time-steps to achieve competitive accuracy which diminishes the energy savings. The number of time-steps can be reduced by training SNNs with spike-based backpropagation from scratch, but that is computationally expensive and slow. To address these challenges, we present a computationally-efficient training technique for deep SNNs. We propose a hybrid training methodology: 1) take a converted SNN and use its weights and thresholds as an initialization step for spike-based backpropagation, and 2) perform incremental spike-timing dependent backpropagation (STDB) on this carefully initialized network to obtain an SNN that converges within few epochs and requires fewer time-steps for input processing. STDB is performed with a novel surrogate gradient function defined using the neuron's spike time. The weight update is proportional to the difference in spike timing between the current time-step and the most recent time-step the neuron generated an output spike. The SNNs trained with our hybrid conversion-and-STDB training perform with 10X-25X fewer time-steps and achieve similar accuracy compared to purely converted SNNs. The proposed training methodology converges in less than 20 epochs of spike-based backpropagation for most standard image classification datasets, thereby greatly reducing the training complexity compared to training SNNs from scratch. We perform experiments on CIFAR-10, CIFAR-100 and ImageNet datasets for both VGG and ResNet architectures. We achieve top-1 accuracy of 65.19% for the ImageNet dataset on an SNN with 250 time-steps, which is 10X faster compared to converted SNNs with similar accuracy.
This paper proposes methods to initialize and train spiking NNs (SNNs) as an alternative to ANNs, not driven primarily by improved loss or generalization, but by energy efficiency improvements derived from timing-event based sparse operation rather than asynchronous sweeps. The backpropagation method introduced is sensible, as are the experiments on known datasets to show its effectiveness. The paper is well written (apart from the minuscule Figure 3 containing the main result). I recommend acceptance, with caveats: the energy performance is actually not directly calculated, but speculatively estimated; it depends on the computational architecture chosen to implement the respective networks. I point out that ANNs need to be trained first to properly initialize an SNN, so the relative training effort claimed is less impressive, but energy performance does count in actual operational practice - training is (or should be) a small fraction of that.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEnabling Deep Spiking Neural Networks with Hybrid Conversion and Spike Timing Dependent Backpropagation\nSpiking Neural Networks (SNNs) operate with asynchronous discrete events (or spikes) which can potentially lead to higher energy-efficiency in neuromorphic hardware implementations. Many works have shown that an SNN for inference can be formed by copying the weights from a trained Artificial Neural Network (ANN) and setting the firing threshold for each layer as the maximum input received in that layer. These type of converted SNNs require a large number of time-steps to achieve competitive accuracy which diminishes the energy savings. The number of time-steps can be reduced by training SNNs with spike-based backpropagation from scratch, but that is computationally expensive and slow. To address these challenges, we present a computationally-efficient training technique for deep SNNs. We propose a hybrid training methodology: 1) take a converted SNN and use its weights and thresholds as an initialization step for spike-based backpropagation, and 2) perform incremental spike-timing dependent backpropagation (STDB) on this carefully initialized network to obtain an SNN that converges within few epochs and requires fewer time-steps for input processing. STDB is performed with a novel surrogate gradient function defined using neuron*s spike time. The weight update is proportional to the difference in spike timing between the current time-step and the most recent time-step the neuron generated an output spike. The SNNs trained with our hybrid conversion-and-STDB training perform at 10X-25X fewer number of time-steps and achieve similar accuracy compared to purely converted SNNs. The proposed training methodology converges in less than 20 epochs of spike-based backpropagation for most standard image classification datasets, thereby greatly reducing the training complexity compared to training SNNs from scratch. We perform experiments on CIFAR-10, CIFAR-100 and ImageNet datasets for both VGG and ResNet architectures. We achieve top-1 accuracy of 65.19% for ImageNet dataset on SNN with 250 time-steps, which is 10X faster compared to converted SNNs with similar accuracy.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes methods to initialize and train spiking NNs (SNNs) as an alternative to ANNs, not driven primarily by improved loss or generalization, but by energy efficiency improvements derived from timing-event based sparse operation rather than asynchronous sweeps. The backpropagation method introduced is sensible, as are the experiments on known datasets to show its effectiveness. The paper is well written (apart from the miniscule Figure 3 containing the main result). I recommend acceptance, with caveats: the energy performance is actually not directly calculated, but speculatively estimated, it depends on the computational architecture chosen to implement the respective networks. I point out that ANNs need to be trained first to properly initialize an SNN, so the relative training effort claimed is less impressive, but energy performance does count in actual operational practice - training is (or should) be a small fraction of that.", "role": "assistant" } ]
[ "This paper proposes methods to initialize and train spiking NNs (SNNs) as an alternative to ANNs, not driven primarily by improved loss or generalization, but by energy efficiency improvements derived from timing-event based sparse operation rather than asynchronous sweeps.", "The backpropagation method introduced is sensible, as are the experiments on known datasets to show its effectiveness.", "The paper is well written (apart from the miniscule Figure 3 containing the main result).", "I recommend acceptance, with caveats: the energy performance is actually not directly calculated, but speculatively estimated, it depends on the computational architecture chosen to implement the respective networks.", "I point out that ANNs need to be trained first to properly initialize an SNN, so the relative training effort claimed is less impressive, but energy performance does count in actual operational practice - training is (or should) be a small fraction of that." ]
{ "criticism": 0, "example": 1, "importance_and_relevance": 0, "materials_and_methods": 4, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 1, "suggestion_and_solution": 2, "total": 5 }
0
0.2
0
0.8
0.4
0.2
0.2
0.4
5
0
1
0
1
1
1
1
1
2.2
1.189924
1.010076
Enabling Deep Spiking Neural Networks with Hybrid Conversion and Spike Timing Dependent Backpropagation
Spiking Neural Networks (SNNs) operate with asynchronous discrete events (or spikes) which can potentially lead to higher energy-efficiency in neuromorphic hardware implementations. Many works have shown that an SNN for inference can be formed by copying the weights from a trained Artificial Neural Network (ANN) and setting the firing threshold for each layer as the maximum input received in that layer. These types of converted SNNs require a large number of time-steps to achieve competitive accuracy which diminishes the energy savings. The number of time-steps can be reduced by training SNNs with spike-based backpropagation from scratch, but that is computationally expensive and slow. To address these challenges, we present a computationally-efficient training technique for deep SNNs. We propose a hybrid training methodology: 1) take a converted SNN and use its weights and thresholds as an initialization step for spike-based backpropagation, and 2) perform incremental spike-timing dependent backpropagation (STDB) on this carefully initialized network to obtain an SNN that converges within few epochs and requires fewer time-steps for input processing. STDB is performed with a novel surrogate gradient function defined using the neuron's spike time. The weight update is proportional to the difference in spike timing between the current time-step and the most recent time-step the neuron generated an output spike. The SNNs trained with our hybrid conversion-and-STDB training perform with 10X-25X fewer time-steps and achieve similar accuracy compared to purely converted SNNs. The proposed training methodology converges in less than 20 epochs of spike-based backpropagation for most standard image classification datasets, thereby greatly reducing the training complexity compared to training SNNs from scratch. We perform experiments on CIFAR-10, CIFAR-100 and ImageNet datasets for both VGG and ResNet architectures. We achieve top-1 accuracy of 65.19% for the ImageNet dataset on an SNN with 250 time-steps, which is 10X faster compared to converted SNNs with similar accuracy.
This paper presents a fine-tuning method for models converted from standard encoding and SGD training to spiking NNs. The key point of the paper is that directly training SNNs with spike-back-prop is slow and inefficient, while directly running inference with converted models is also inefficient due to the large integration window required to get a good estimate of the spiking neuron potential. The authors claim, and to a good extent show, that their proposed method is the best of both worlds: train the models efficiently with standard encoding / SGD, which we know works and scales well, then convert and fine-tune with spike-backprop to get models that perform well under a shorter integration window, and thus are more efficient at inference time. The intuition is that models can achieve shorter integration windows while keeping good results because, under the assumptions made by the proposed algorithm, the fine-tuning is effectively unrolling neuron dynamics that can be trained with BPTT, in a way similar to LSTM/recurrent models. In that case, since model dynamics are taken into account during fine-tuning, it results in better performance even under shorter time-windows. This is an interesting concept, since the training doesn't only consider a mean-field estimate of the spike activation, but looks at spiking neuron dynamics at a higher granularity. The paper is well written, clear and easy to understand. Results are comparatively competitive and code is made available.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEnabling Deep Spiking Neural Networks with Hybrid Conversion and Spike Timing Dependent Backpropagation\nSpiking Neural Networks (SNNs) operate with asynchronous discrete events (or spikes) which can potentially lead to higher energy-efficiency in neuromorphic hardware implementations. Many works have shown that an SNN for inference can be formed by copying the weights from a trained Artificial Neural Network (ANN) and setting the firing threshold for each layer as the maximum input received in that layer. These type of converted SNNs require a large number of time-steps to achieve competitive accuracy which diminishes the energy savings. The number of time-steps can be reduced by training SNNs with spike-based backpropagation from scratch, but that is computationally expensive and slow. To address these challenges, we present a computationally-efficient training technique for deep SNNs. We propose a hybrid training methodology: 1) take a converted SNN and use its weights and thresholds as an initialization step for spike-based backpropagation, and 2) perform incremental spike-timing dependent backpropagation (STDB) on this carefully initialized network to obtain an SNN that converges within few epochs and requires fewer time-steps for input processing. STDB is performed with a novel surrogate gradient function defined using neuron*s spike time. The weight update is proportional to the difference in spike timing between the current time-step and the most recent time-step the neuron generated an output spike. The SNNs trained with our hybrid conversion-and-STDB training perform at 10X-25X fewer number of time-steps and achieve similar accuracy compared to purely converted SNNs. The proposed training methodology converges in less than 20 epochs of spike-based backpropagation for most standard image classification datasets, thereby greatly reducing the training complexity compared to training SNNs from scratch. We perform experiments on CIFAR-10, CIFAR-100 and ImageNet datasets for both VGG and ResNet architectures. We achieve top-1 accuracy of 65.19% for ImageNet dataset on SNN with 250 time-steps, which is 10X faster compared to converted SNNs with similar accuracy.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents a fine-tuning method of models converted from standard encoding and SGD training to Spike/NN*s. The key point of the paper is that directly training S/NN*s with spike-back-prop is slow and inefficient, while directly inferencing with converted models is also inefficient due to the large integration window required to get a good estimate of the spiking neuron potential. The authors claim, and to a good extent show that, their proposed method is best of both worlds: train the models efficiently with standard encoding / SGD, this is something we know works and scale well, then convert and fine-tune with spike-backprop to get models that perform well under a shorter integration window, and thus are more efficient at inference time. The intuition is that models can achieve shorter integration windows while keeping good results because, under the assumptions made by the proposed algorithm, the fine-tuning is effectively unrolling neuron dynamics that can be trained with BPPT, in a way similar to LSTM/Recurrent models. In that case, since model dynamics are taken into account during fine-tuning, it results in better performance even under shorter time-windows. This is an interesting concept, since the training doesn*t only consider a mean-field estimate of the spike-activation, but it looks at spiking neuron dynamics with an higher granularity. The paper is well written, clear and easy to understand. Results are comparatively competitive and code is made available.", "role": "assistant" } ]
[ "This paper presents a fine-tuning method of models converted from standard encoding and SGD training to Spike/NN*s. The key point of the paper is that directly training S/NN*s with spike-back-prop is slow and inefficient, while directly inferencing with converted models is also inefficient due to the large integration window required to get a good estimate of the spiking neuron potential.", "The authors claim, and to a good extent show that, their proposed method is best of both worlds: train the models efficiently with standard encoding / SGD, this is something we know works and scale well, then convert and fine-tune with spike-backprop to get models that perform well under a shorter integration window, and thus are more efficient at inference time.", "The intuition is that models can achieve shorter integration windows while keeping good results because, under the assumptions made by the proposed algorithm, the fine-tuning is effectively unrolling neuron dynamics that can be trained with BPPT, in a way similar to LSTM/Recurrent models.", "In that case, since model dynamics are taken into account during fine-tuning, it results in better performance even under shorter time-windows.", "This is an interesting concept, since the training doesn*t only consider a mean-field estimate of the spike-activation, but it looks at spiking neuron dynamics with an higher granularity.", "The paper is well written, clear and easy to understand.", "Results are comparatively competitive and code is made available." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 6, "praise": 6, "presentation_and_reporting": 2, "results_and_discussion": 2, "suggestion_and_solution": 0, "total": 7 }
0
0
0.142857
0.857143
0.857143
0.285714
0.285714
0
7
0
0
1
1
1
1
1
0
2.428571
1.860404
0.568167
Massively Multilingual Sparse Word Representations
In this paper, we introduce Mamus for constructing multilingual sparse word representations. Our algorithm operates by determining a shared set of semantic units which get reutilized across languages, providing it a competitive edge both in terms of speed and evaluation performance. We demonstrate that our proposed algorithm behaves competitively to strong baselines through a series of rigorous experiments performed towards downstream applications spanning over dependency parsing, document classification and natural language inference. Additionally, our experiments relying on the QVEC-CCA evaluation score suggest that the proposed sparse word representations convey an increased interpretability as opposed to alternative approaches. Finally, we are releasing our multilingual sparse word representations for the typologically diverse set of 27 languages that we conducted our various experiments on.
This paper describes a method to build sparse multilingual word vectors that is designed to scale easily to many languages. The key idea is to pick one language to be the source language, and then to build word embeddings and then a sparse dictionary + sparse coefficients for that source language monolingually. Other target languages then first align their embedding to the source using a seed list of translations and standard techniques, and then determine their sparse coefficients based on the fixed source sparse dictionary. This latter process is a convex optimization, which improves efficiency and stability. The method is tested with an established correlation-based intrinsic metric (QVEC-CCA) as well as by using the multilingual embeddings to project systems for cross-lingual document classification, dependency parsing and natural language inference. The core idea of this paper is substantially simpler than the method it compares to (BiSparse), which jointly optimizes dictionaries, coefficients and cross-lingual coefficient alignment. So, the question that immediately comes to mind for me is, how much accuracy am I giving up for improved scalability to multiple languages? This could be easily tested for the two language case, where BiSparse could be directly compared without modification to their proposed method. Instead, BiSparse is modified to scale to the multilingual setting (becoming MultiSparse), but since it shares the constraint that the source dictionary and coefficients are fixed, it has already lost a lot of the power of joint optimization. I think the paper would be stronger with a two-language experiment where we would expect the proposed method to lose to BiSparse, but we could begin to understand what has been given up for scalability. I also wonder how relevant bilingual word embeddings are in a world of multilingual BERT and similar approaches. It would be interesting to know how cross-lingual embedding-in-context methods would do on the extrinsic evaluations in this paper, though I also acknowledge that this could be considered beyond the scope of the paper. Otherwise, this is a fine paper. It is well written and easy to follow. The experiments look sane, and the inclusion of both intrinsic and extrinsic tasks makes them fairly convincing. I have only a few remaining nitpicks: (1) As someone relatively unfamiliar with multilingual (as opposed to bilingual) word embedding research, it wasn’t clear to me how the experiments described here tested the multilinguality (as opposed to bilinguality) of the embeddings. It would be nice to provide an explanation for why (beyond the obvious efficiency gains) one couldn’t just do the necessary language pairs with bilingual methods for these experiments. And if one could perform the tests with bilingual methods, they should be included as baselines. (2) The discussion of MultiSparse hyper-parameter tuning appears in the Monolingual Experiments section, leading me to wonder what target languages were used for this tuning process. (3) In the second-last paragraph of 3.2.2, there is a sentence fragment that ends in “nonetheless their multiCluster and multiCCA embedding spaces contain no embeddings for” (4) The last paragraph before the Conclusion also feels like a fragment, or like two sentences have been spliced together: “We have detailed the differences to Upadhyay et al. 
(2018) extends the previous work by incorporating dependency relations into sparse coding.” There are also several places where periods seem to have been left out (such as immediately before the above sentence).
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMassively Multilingual Sparse Word Representations\nIn this paper, we introduce Mamus for constructing multilingual sparse word representations. Our algorithm operates by determining a shared set of semantic units which get reutilized across languages, providing it a competitive edge both in terms of speed and evaluation performance. We demonstrate that our proposed algorithm behaves competitively to strong baselines through a series of rigorous experiments performed towards downstream applications spanning over dependency parsing, document classification and natural language inference. Additionally, our experiments relying on the QVEC-CCA evaluation score suggests that the proposed sparse word representations convey an increased interpretability as opposed to alternative approaches. Finally, we are releasing our multilingual sparse word representations for the 27 typologically diverse set of languages that we conducted our various experiments on.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper describes a method to build sparse multilingual word vectors that is designed to scale easily to many languages. The key idea is to pick one language to be the source language, and then to build word embeddings and then a sparse dictionary + sparse coefficients for that source language monolingually. Other target languages then first align their embedding to the source using a seed list of translations and standard techniques, and then determine their sparse coefficients based on the fixed source sparse dictionary. This latter process is a convex optimization, which improves efficiency and stability. The method is tested with an established correlation-based intrinsic metric (QVEC-CCA) as well as by using the multilingual embeddings to project systems for cross-lingual document classification, dependency parsing and natural language inference. 
The core idea of this paper is substantially simpler than the method it compares to (BiSparse), which jointly optimizes dictionaries, coefficients and cross-lingual coefficient alignment. So, the question that immediately comes to mind for me is, how much accuracy am I giving up for improved scalability to multiple languages? This could be easily tested for the two language case, where BiSparse could be directly compared without modification to their proposed method. Instead, BiSparse is modified to fit scale to the multilingual setting (becoming MultiSparse), but since it shares the constraint that the source dictionary and coefficients are fixed, it has already lost a lot of the power of joint optimization. I think the paper would be stronger with a two-language experiment where we would expect the proposed method to lose to BiSparse, but we could begin to understand what has been given up for scalability. I also wonder how relevant bilingual word embeddings are in a world of multilingual BERT and similar approaches. It would be interesting to know how cross-lingual embedding-in-context methods would do on the extrinsic evaluations in this paper, though I also acknowledge that this could be considered beyond the scope of the paper. Otherwise, this is a fine paper. It is well written and easy to follow. The experiments look sane, and the inclusion of both intrinsic and extrinsic tasks makes them fairly convincing. I have only a few remaining nitpicks: (1) As someone relatively unfamiliar with multilingual (as opposed to bilingual) word embedding research, it wasn’t clear to me how the experiments described here tested the multilinguality (as opposed to bilinguality) of the embeddings. It would be nice to provide an explanation for why (beyond the obvious efficiency gains) one couldn’t just do the necessary language pairs with bilingual methods for these experiments. And if one could perform the tests with bilingual methods, they should be included as baselines. (2) The discussion of MultiSpare hyper-parameter tuning appears in the Monolingual Experiments section, leading me to wonder what target languages were used for this tuning process. (3) In the second-last paragraph of 3.2.2, there is a sentence fragment that ends in “nonetheless their multiCluster and multiCCA embedding spaces contain no embeddings for” (4) The last paragraph before the Conclusion also feels like a fragment, or like two sentences have been spliced together: “We have detailed the differences to Upadhyay et al. (2018) extends the previous work by incorporating dependency relations into sparse coding.” There are also several places where periods seem to have been left out (such as immediately before the above sentence).", "role": "assistant" } ]
[ "This paper describes a method to build sparse multilingual word vectors that is designed to scale easily to many languages.", "The key idea is to pick one language to be the source language, and then to build word embeddings and then a sparse dictionary + sparse coefficients for that source language monolingually.", "Other target languages then first align their embedding to the source using a seed list of translations and standard techniques, and then determine their sparse coefficients based on the fixed source sparse dictionary.", "This latter process is a convex optimization, which improves efficiency and stability.", "The method is tested with an established correlation-based intrinsic metric (QVEC-CCA) as well as by using the multilingual embeddings to project systems for cross-lingual document classification, dependency parsing and natural language inference.", "The core idea of this paper is substantially simpler than the method it compares to (BiSparse), which jointly optimizes dictionaries, coefficients and cross-lingual coefficient alignment.", "So, the question that immediately comes to mind for me is, how much accuracy am I giving up for improved scalability to multiple languages?", "This could be easily tested for the two language case, where BiSparse could be directly compared without modification to their proposed method.", "Instead, BiSparse is modified to fit scale to the multilingual setting (becoming MultiSparse), but since it shares the constraint that the source dictionary and coefficients are fixed, it has already lost a lot of the power of joint optimization.", "I think the paper would be stronger with a two-language experiment where we would expect the proposed method to lose to BiSparse, but we could begin to understand what has been given up for scalability.", "I also wonder how relevant bilingual word embeddings are in a world of multilingual BERT and similar approaches.", "It would be interesting to know how cross-lingual embedding-in-context methods would do on the extrinsic evaluations in this paper, though I also acknowledge that this could be considered beyond the scope of the paper.", "Otherwise, this is a fine paper.", "It is well written and easy to follow.", "The experiments look sane, and the inclusion of both intrinsic and extrinsic tasks makes them fairly convincing.", "I have only a few remaining nitpicks: (1) As someone relatively unfamiliar with multilingual (as opposed to bilingual) word embedding research, it wasn’t clear to me how the experiments described here tested the multilinguality (as opposed to bilinguality) of the embeddings.", "It would be nice to provide an explanation for why (beyond the obvious efficiency gains) one couldn’t just do the necessary language pairs with bilingual methods for these experiments.", "And if one could perform the tests with bilingual methods, they should be included as baselines.", "(2) The discussion of MultiSpare hyper-parameter tuning appears in the Monolingual Experiments section, leading me to wonder what target languages were used for this tuning process.", "(3) In the second-last paragraph of 3.2.2, there is a sentence fragment that ends in “nonetheless their multiCluster and multiCCA embedding spaces contain no embeddings for” (4) The last paragraph before the Conclusion also feels like a fragment, or like two sentences have been spliced together: “We have detailed the differences to Upadhyay et al.", "(2018) extends the previous work by incorporating dependency relations into sparse coding.” 
There are also several places where periods seem to have been left out (such as immediately before the above sentence)." ]
{ "criticism": 2, "example": 1, "importance_and_relevance": 1, "materials_and_methods": 14, "praise": 3, "presentation_and_reporting": 14, "results_and_discussion": 5, "suggestion_and_solution": 6, "total": 21 }
0.095238
0.047619
0.047619
0.666667
0.142857
0.666667
0.238095
0.285714
21
1
1
1
1
1
1
1
1
2.190476
1.180401
1.010076
Pay Attention to Features, Transfer Learn faster CNNs
Deep convolutional neural networks are now widely deployed in vision applications, but the size of training data can bottleneck their performance. Transfer learning offers the chance for CNNs to learn with limited data samples by transferring knowledge from weights pretrained on large datasets. On the other hand, blindly transferring all learned features from the source dataset brings unnecessary computation to CNNs on the target task. In this paper, we propose attentive feature distillation and selection (AFDS) that not only adjusts the strength of regularization introduced by transfer learning but also dynamically determines which are the important features to transfer. When deploying AFDS on ResNet-101, we achieve state-of-the-art computation reduction at the same accuracy budget, outperforming all existing transfer learning methods. On a 10x MACs reduction budget, transfer learned from ImageNet to Stanford Dogs 120, AFDS achieves an accuracy that is 12.51% higher than its best competitor.
In general, I think it is a good paper and I like the contribution of the author. I think they explain in detail the methodology. The results compare the new methodologies with different databases which increase the credibility of the results. However, there is a couple of additional question that is important to manage: 1) The paper presents three different contributions. However, it is so clear how this work helps for *By changing the fraction of channel neurons to skip for each convolution, AFDS can further accelerate the transfer learned models while minimizing the impact on task accuracy* I think a better explanation of this part it would be necessary. 2) The comparison of the results are very focused on AFDS, Did you compare the results with different transfer learning approach? 3) During the training procedure. We need a better explanation of why *we found that in residual networks with greater depths, AFS could become notably challenging to train to high accuracies*. Also, the results of the empirical test it would be useful to understand the challenges to train the network. 4) I think it would be useful to have the code available for the final version.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPay Attention to Features, Transfer Learn faster CNNs\nDeep convolutional neural networks are now widely deployed in vision applications, but the size of training data can bottleneck their performance. Transfer learning offers the chance for CNNs to learn with limited data samples by transferring knowledge from weights pretrained on large datasets. On the other hand, blindly transferring all learned features from the source dataset brings unnecessary computation to CNNs on the target task. In this paper, we propose attentive feature distillation and selection (AFDS) that not only adjusts the strength of regularization introduced by transfer learning but also dynamically determines which are the important features to transfer. When deploying AFDS on ResNet-101, we achieve state-of-the-art computation reduction at the same accuracy budget, outperforming all existing transfer learning methods. On a 10x MACs reduction budget, transfer learned from ImageNet to Stanford Dogs 120, AFDS achieves an accuracy that is 12.51% higher than its best competitor.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nIn general, I think it is a good paper and I like the contribution of the author. I think they explain in detail the methodology. The results compare the new methodologies with different databases which increase the credibility of the results. However, there is a couple of additional question that is important to manage: 1) The paper presents three different contributions. However, it is so clear how this work helps for *By changing the fraction of channel neurons to skip for each convolution, AFDS can further accelerate the transfer learned models while minimizing the impact on task accuracy* I think a better explanation of this part it would be necessary. 2) The comparison of the results are very focused on AFDS, Did you compare the results with different transfer learning approach? 3) During the training procedure. 
We need a better explanation of why *we found that in residual networks with greater depths, AFS could become notably challenging to train to high accuracies*. Also, the results of the empirical test it would be useful to understand the challenges to train the network. 4) I think it would be useful to have the code available for the final version.", "role": "assistant" } ]
[ "In general, I think it is a good paper and I like the contribution of the author.", "I think they explain in detail the methodology.", "The results compare the new methodologies with different databases which increase the credibility of the results.", "However, there is a couple of additional question that is important to manage: 1) The paper presents three different contributions.", "However, it is so clear how this work helps for *By changing the fraction of channel neurons to skip for each convolution, AFDS can further accelerate the transfer learned models while minimizing the impact on task accuracy* I think a better explanation of this part it would be necessary.", "2) The comparison of the results are very focused on AFDS, Did you compare the results with different transfer learning approach?", "3) During the training procedure.", "We need a better explanation of why *we found that in residual networks with greater depths, AFS could become notably challenging to train to high accuracies*.", "Also, the results of the empirical test it would be useful to understand the challenges to train the network.", "4) I think it would be useful to have the code available for the final version." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 8, "praise": 1, "presentation_and_reporting": 2, "results_and_discussion": 5, "suggestion_and_solution": 4, "total": 10 }
0
0
0.3
0.8
0.1
0.2
0.5
0.4
10
0
0
1
1
1
1
1
1
2.3
2.157958
0.142042
WHITE NOISE ANALYSIS OF NEURAL NETWORKS
A white noise analysis of modern deep neural networks is presented to characterize their biases at the whole network level or the single neuron level. Our analysis is based on two popular and related methods in psychophysics and neurophysiology namely classification images and spike triggered analysis. These methods have been widely used to understand the underlying mechanisms of sensory systems in humans and monkeys. We leverage them to investigate the inherent biases of deep neural networks and to obtain a first-order approximation of their functionality. We emphasize on convolutional neural networks (CNNs) since they are currently the state of the art methods in computer vision and are a decent model of human visual processing. In addition, we study multi-layer perceptrons, logistic regression, and recurrent neural networks. Experiments over four classic datasets, MNIST, Fashion-MNIST, CIFAR-10, and ImageNet, show that the computed bias maps resemble the target classes and when used for classification lead to an over two-fold performance than the chance level. Further, we show that classification images can be used to attack a black-box classifier and to detect adversarial patch attacks. Finally, we utilize spike triggered averaging to derive the filters of CNNs and explore how the behavior of network changes when neurons in different layers are modulated. Our effort illustrates a successful example of borrowing from neuroscience to study ANNs and highlights the importance of cross-fertilization and synergy across machine learning, deep learning, and computational neuroscience.
This is an interesting paper that uses two techniques from neuroscience (eletrophysiology) to interpret CNNs. The *classification images* technique allows the authors to build an input specific bias map for CNNs, from different images that are randomly generated using different methods. STA is also used to visualize filters. The authors find interesting similarities between CNNs and animal/human brain such as the shift of the psychometric curve. The paper is well written and the experiments are thorough. This paper has comprehensive use of neuroscience method, nicely bringing the two fields together. Even though the authors mention other approaches, perhaps one weakness is the limited discussion of other visualization tools: what is the difference between other more frequent approaches such as activation maximization and the approach the authors present. What could we understand with these tools that cannot be understood through activation maximization (i.e. using backprop to find optimal stimuli)? One problem with classification images is that the pattern is limited to low level patterns unless the random images have structure in them. The authors solve this problem using more complex random sample (e.g. with Gabor features). But a more in depth discussion is necessarily about the additional understanding that is procured from such an approach in comparison to activation maximization. It*s an important discussion because even neuroscientists are now interested (and finally able) to properly perform activation maximization to test their hypotheses more strongly (see Bashivan et al 2019 science for a great example). Even though I think the lack of this discussion is currently a limitation, I think this paper is a good opportunity to have this discussion and bring together the specifics of these two different approaches.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nWHITE NOISE ANALYSIS OF NEURAL NETWORKS\nA white noise analysis of modern deep neural networks is presented to characterize their biases at the whole network level or the single neuron level. Our analysis is based on two popular and related methods in psychophysics and neurophysiology namely classification images and spike triggered analysis. These methods have been widely used to understand the underlying mechanisms of sensory systems in humans and monkeys. We leverage them to investigate the inherent biases of deep neural networks and to obtain a first-order approximation of their functionality. We emphasize on convolutional neural networks (CNNs) since they are currently the state of the art methods in computer vision and are a decent model of human visual processing. In addition, we study multi-layer perceptrons, logistic regression, and recurrent neural networks. Experiments over four classic datasets, MNIST, Fashion-MNIST, CIFAR-10, and ImageNet, show that the computed bias maps resemble the target classes and when used for classification lead to an over two-fold performance than the chance level. Further, we show that classification images can be used to attack a black-box classifier and to detect adversarial patch attacks. Finally, we utilize spike triggered averaging to derive the filters of CNNs and explore how the behavior of network changes when neurons in different layers are modulated. Our effort illustrates a successful example of borrowing from neuroscience to study ANNs and highlights the importance of cross-fertilization and synergy across machine learning, deep learning, and computational neuroscience.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis is an interesting paper that uses two techniques from neuroscience (eletrophysiology) to interpret CNNs. The *classification images* technique allows the authors to build an input specific bias map for CNNs, from different images that are randomly generated using different methods. STA is also used to visualize filters. 
The authors find interesting similarities between CNNs and animal/human brain such as the shift of the psychometric curve. The paper is well written and the experiments are thorough. This paper has comprehensive use of neuroscience method, nicely bringing the two fields together. Even though the authors mention other approaches, perhaps one weakness is the limited discussion of other visualization tools: what is the difference between other more frequent approaches such as activation maximization and the approach the authors present. What could we understand with these tools that cannot be understood through activation maximization (i.e. using backprop to find optimal stimuli)? One problem with classification images is that the pattern is limited to low level patterns unless the random images have structure in them. The authors solve this problem using more complex random sample (e.g. with Gabor features). But a more in depth discussion is necessarily about the additional understanding that is procured from such an approach in comparison to activation maximization. It*s an important discussion because even neuroscientists are now interested (and finally able) to properly perform activation maximization to test their hypotheses more strongly (see Bashivan et al 2019 science for a great example). Even though I think the lack of this discussion is currently a limitation, I think this paper is a good opportunity to have this discussion and bring together the specifics of these two different approaches.", "role": "assistant" } ]
[ "This is an interesting paper that uses two techniques from neuroscience (eletrophysiology) to interpret CNNs.", "The *classification images* technique allows the authors to build an input specific bias map for CNNs, from different images that are randomly generated using different methods.", "STA is also used to visualize filters.", "The authors find interesting similarities between CNNs and animal/human brain such as the shift of the psychometric curve.", "The paper is well written and the experiments are thorough.", "This paper has comprehensive use of neuroscience method, nicely bringing the two fields together.", "Even though the authors mention other approaches, perhaps one weakness is the limited discussion of other visualization tools: what is the difference between other more frequent approaches such as activation maximization and the approach the authors present.", "What could we understand with these tools that cannot be understood through activation maximization (i.e.", "using backprop to find optimal stimuli)?", "One problem with classification images is that the pattern is limited to low level patterns unless the random images have structure in them.", "The authors solve this problem using more complex random sample (e.g.", "with Gabor features).", "But a more in depth discussion is necessarily about the additional understanding that is procured from such an approach in comparison to activation maximization.", "It*s an important discussion because even neuroscientists are now interested (and finally able) to properly perform activation maximization to test their hypotheses more strongly (see Bashivan et al 2019 science for a great example).", "Even though I think the lack of this discussion is currently a limitation, I think this paper is a good opportunity to have this discussion and bring together the specifics of these two different approaches." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 6, "materials_and_methods": 13, "praise": 5, "presentation_and_reporting": 1, "results_and_discussion": 6, "suggestion_and_solution": 1, "total": 15 }
0.133333
0
0.4
0.866667
0.333333
0.066667
0.4
0.066667
15
1
0
1
1
1
1
1
1
2.266667
2.203537
0.06313
Tranquil Clouds: Neural Networks for Learning Temporally Coherent Features in Point Clouds
Point clouds, as a form of Lagrangian representation, allow for powerful and flexible applications in a large number of computational disciplines. We propose a novel deep-learning method to learn stable and temporally coherent feature spaces for points clouds that change over time. We identify a set of inherent problems with these approaches: without knowledge of the time dimension, the inferred solutions can exhibit strong flickering, and easy solutions to suppress this flickering can result in undesirable local minima that manifest themselves as halo structures. We propose a novel temporal loss function that takes into account higher time derivatives of the point positions, and encourages mingling, i.e., to prevent the aforementioned halos. We combine these techniques in a super-resolution method with a truncation approach to flexibly adapt the size of the generated positions. We show that our method works for large, deforming point sets from different sources to demonstrate the flexibility of our approach.
Summary: This paper proposed a deep network for point cloud sequence super-resolution/upsampling. Building on the basis of pointNet and PU-net, the main contribution of the paper is identifying the problem of temporal incoherence in the process of upsampling a point cloud shape representation as well as a training loss to encourage temporal coherence. In the cases showed in the paper, the proposed method seems effective comparing to previous work which is not done on sequence data. My main concern about the work is that the experimental evaluation is limited. Strengths: Interesting problem and novel idea. The proposed method is technically sound. From the provided results, the newly introduced training loss seems effective: the result sequences are visually more plausible and smooth. Weaknesses: Qualitative results are limited and in most cases seemingly simple. In the paper as well as the companion video, there are very few examples provided. The scale of the evaluation demonstration is not convincing enough for the readers that this work could be generalized to more complicated testing scenarios. Quantitative results are also limited. Since the method is handling the coherence of shape deformation over time, it would be much more convincing and helpful to introduce a dense-correspondence evaluation as a benchmark. For example, one can create ground-truth correspondence from parametric morphable models and evaluate the coherence of the sequence by comparing the generated results with the ground truth.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTranquil Clouds: Neural Networks for Learning Temporally Coherent Features in Point Clouds\nPoint clouds, as a form of Lagrangian representation, allow for powerful and flexible applications in a large number of computational disciplines. We propose a novel deep-learning method to learn stable and temporally coherent feature spaces for points clouds that change over time. We identify a set of inherent problems with these approaches: without knowledge of the time dimension, the inferred solutions can exhibit strong flickering, and easy solutions to suppress this flickering can result in undesirable local minima that manifest themselves as halo structures. We propose a novel temporal loss function that takes into account higher time derivatives of the point positions, and encourages mingling, i.e., to prevent the aforementioned halos. We combine these techniques in a super-resolution method with a truncation approach to flexibly adapt the size of the generated positions. We show that our method works for large, deforming point sets from different sources to demonstrate the flexibility of our approach.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nSummary: This paper proposed a deep network for point cloud sequence super-resolution/upsampling. Building on the basis of pointNet and PU-net, the main contribution of the paper is identifying the problem of temporal incoherence in the process of upsampling a point cloud shape representation as well as a training loss to encourage temporal coherence. In the cases showed in the paper, the proposed method seems effective comparing to previous work which is not done on sequence data. My main concern about the work is that the experimental evaluation is limited. Strengths: Interesting problem and novel idea. The proposed method is technically sound. From the provided results, the newly introduced training loss seems effective: the result sequences are visually more plausible and smooth. 
Weaknesses: Qualitative results are limited and in most cases seemingly simple. In the paper as well as the companion video, there are very few examples provided. The scale of the evaluation demonstration is not convincing enough for the readers that this work could be generalized to more complicated testing scenarios. Quantitative results are also limited. Since the method is handling the coherence of shape deformation over time, it would be much more convincing and helpful to introduce a dense-correspondence evaluation as a benchmark. For example, one can create ground-truth correspondence from parametric morphable models and evaluate the coherence of the sequence by comparing the generated results with the ground truth.", "role": "assistant" } ]
[ "Summary: This paper proposed a deep network for point cloud sequence super-resolution/upsampling.", "Building on the basis of pointNet and PU-net, the main contribution of the paper is identifying the problem of temporal incoherence in the process of upsampling a point cloud shape representation as well as a training loss to encourage temporal coherence.", "In the cases showed in the paper, the proposed method seems effective comparing to previous work which is not done on sequence data.", "My main concern about the work is that the experimental evaluation is limited.", "Strengths: Interesting problem and novel idea.", "The proposed method is technically sound.", "From the provided results, the newly introduced training loss seems effective: the result sequences are visually more plausible and smooth.", "Weaknesses: Qualitative results are limited and in most cases seemingly simple.", "In the paper as well as the companion video, there are very few examples provided.", "The scale of the evaluation demonstration is not convincing enough for the readers that this work could be generalized to more complicated testing scenarios.", "Quantitative results are also limited.", "Since the method is handling the coherence of shape deformation over time, it would be much more convincing and helpful to introduce a dense-correspondence evaluation as a benchmark.", "For example, one can create ground-truth correspondence from parametric morphable models and evaluate the coherence of the sequence by comparing the generated results with the ground truth." ]
{ "criticism": 5, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 9, "praise": 4, "presentation_and_reporting": 1, "results_and_discussion": 5, "suggestion_and_solution": 3, "total": 13 }
0.384615
0
0.153846
0.692308
0.307692
0.076923
0.384615
0.230769
13
1
0
1
1
1
1
1
1
2.230769
2.230769
0
Estimating counterfactual treatment outcomes over time through adversarially balanced representations
Identifying when to give treatments to patients and how to select among multiple treatments over time are important medical problems with a few existing solutions. In this paper, we introduce the Counterfactual Recurrent Network (CRN), a novel sequence-to-sequence model that leverages the increasingly available patient observational data to estimate treatment effects over time and answer such medical questions. To handle the bias from time-varying confounders, covariates affecting the treatment assignment policy in the observational data, CRN uses domain adversarial training to build balancing representations of the patient history. At each timestep, CRN constructs a treatment invariant representation which removes the association between patient history and treatment assignments and thus can be reliably used for making counterfactual predictions. On a simulated model of tumour growth, with varying degree of time-dependent confounding, we show how our model achieves lower error in estimating counterfactuals and in choosing the correct treatment and timing of treatment than current state-of-the-art methods.
This work addresses the problem of causal inference in time-dependent treatment regimes. To address the problem, the authors propose an extension of the balancing representation for causal inference framework that seeks render the current treatment independent from a representation of the history of treatment and confounders. This is sensibly actualized within an RNN. The authors provide empirical results that demonstrate the proposed method performing very well in comparison to prior art. This paper is well written, proposes a sensible solution to a difficult problem, and performs well empirically. It would be nice to see theory that connects the imbalance to the expected error of the causal estimand, and a notion of hyperparameter tuning (the authors say they search over hyperparameters but do not specify how the parameters are ultimately selected). However, I don't feel that either of those points outweigh the benefit of the paper.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEstimating counterfactual treatment outcomes over time through adversarially balanced representations\nIdentifying when to give treatments to patients and how to select among multiple treatments over time are important medical problems with a few existing solutions. In this paper, we introduce the Counterfactual Recurrent Network (CRN), a novel sequence-to-sequence model that leverages the increasingly available patient observational data to estimate treatment effects over time and answer such medical questions. To handle the bias from time-varying confounders, covariates affecting the treatment assignment policy in the observational data, CRN uses domain adversarial training to build balancing representations of the patient history. At each timestep, CRN constructs a treatment invariant representation which removes the association between patient history and treatment assignments and thus can be reliably used for making counterfactual predictions. On a simulated model of tumour growth, with varying degree of time-dependent confounding, we show how our model achieves lower error in estimating counterfactuals and in choosing the correct treatment and timing of treatment than current state-of-the-art methods.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis work addresses the problem of causal inference in time-dependent treatment regimes. To address the problem, the authors propose an extension of the balancing representation for causal inference framework that seeks render the current treatment independent from a representation of the history of treatment and confounders. This is sensibly actualized within an RNN. The authors provide empirical results that demonstrate the proposed method performing very well in comparison to prior art. This paper is well written, proposes a sensible solution to a difficult problem, and performs well empirically. 
It would be nice to see theory that connects the imbalance to the expected error of the causal estimand, and a notion of hyperparameter tuning (the authors say they search over hyperparameters but do not specify how the parameters are ultimately selected). However, I don't feel that either of those points outweigh the benefit of the paper.", "role": "assistant" } ]
[ "This work addresses the problem of causal inference in time-dependent treatment regimes.", "To address the problem, the authors propose an extension of the balancing representation for causal inference framework that seeks render the current treatment independent from a representation of the history of treatment and confounders.", "This is sensibly actualized within an RNN.", "The authors provide empirical results that demonstrate the proposed method performing very well in comparison to prior art.", "This paper is well written, proposes a sensible solution to a difficult problem, and performs well empirically.", "It would be nice to see theory that connects the imbalance to the expected error of the causal estimand, and a notion of hyperparameter tuning (the authors say they search over hyperparameters but do not specify how the parameters are ultimately selected.", "However, I don*t feel that either of those points outweigh the benefit of the paper." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 3, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 3, "suggestion_and_solution": 2, "total": 7 }
0.285714
0
0.285714
0.428571
0.285714
0.142857
0.428571
0.285714
7
1
0
1
1
1
1
1
1
2.142857
1.57469
0.568167
Estimating counterfactual treatment outcomes over time through adversarially balanced representations
Identifying when to give treatments to patients and how to select among multiple treatments over time are important medical problems with a few existing solutions. In this paper, we introduce the Counterfactual Recurrent Network (CRN), a novel sequence-to-sequence model that leverages the increasingly available patient observational data to estimate treatment effects over time and answer such medical questions. To handle the bias from time-varying confounders, covariates affecting the treatment assignment policy in the observational data, CRN uses domain adversarial training to build balancing representations of the patient history. At each timestep, CRN constructs a treatment invariant representation which removes the association between patient history and treatment assignments and thus can be reliably used for making counterfactual predictions. On a simulated model of tumour growth, with varying degree of time-dependent confounding, we show how our model achieves lower error in estimating counterfactuals and in choosing the correct treatment and timing of treatment than current state-of-the-art methods.
The paper adapts domain adversarial training to construct treatment invariant representation for adjusting time-varying confounding in counterfactual treatment outcome predictions over time. The proposed method extends the recent static settings of balancing representation in counterfactual inference to longitudinal settings, and it also overcomes the problem of high-variance of IPTW weighting in MSN based methods such as the current state-of-art method RMSN. The paper is very well written. Method is novel to me. Experiments are sufficient. However since I am not very familiar with this area, so there can be things I miss in the evaluation of this paper. There is one problem with the illustrations in Figure 1: predictions on the potential outcomes before the first treatment was given should have stayed on the same path, and then depart when different treatment was initiated (at different time)
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEstimating counterfactual treatment outcomes over time through adversarially balanced representations\nIdentifying when to give treatments to patients and how to select among multiple treatments over time are important medical problems with a few existing solutions. In this paper, we introduce the Counterfactual Recurrent Network (CRN), a novel sequence-to-sequence model that leverages the increasingly available patient observational data to estimate treatment effects over time and answer such medical questions. To handle the bias from time-varying confounders, covariates affecting the treatment assignment policy in the observational data, CRN uses domain adversarial training to build balancing representations of the patient history. At each timestep, CRN constructs a treatment invariant representation which removes the association between patient history and treatment assignments and thus can be reliably used for making counterfactual predictions. On a simulated model of tumour growth, with varying degree of time-dependent confounding, we show how our model achieves lower error in estimating counterfactuals and in choosing the correct treatment and timing of treatment than current state-of-the-art methods.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper adapts domain adversarial training to construct treatment invariant representation for adjusting time-varying confounding in counterfactual treatment outcome predictions over time. The proposed method extends the recent static settings of balancing representation in counterfactual inference to longitudinal settings, and it also overcomes the problem of high-variance of IPTW weighting in MSN based methods such as the current state-of-art method RMSN. The paper is very well written. Method is novel to me. Experiments are sufficient. However since I am not very familiar with this area, so there can be things I miss in the evaluation of this paper. 
There is one problem with the illustrations in Figure 1: predictions on the potential outcomes before the first treatment was given should have stayed on the same path, and then depart when different treatment was initiated (at different time)", "role": "assistant" } ]
[ "The paper adapts domain adversarial training to construct treatment invariant representation for adjusting time-varying confounding in counterfactual treatment outcome predictions over time.", "The proposed method extends the recent static settings of balancing representation in counterfactual inference to longitudinal settings, and it also overcomes the problem of high-variance of IPTW weighting in MSN based methods such as the current state-of-art method RMSN.", "The paper is very well written.", "Method is novel to me.", "Experiments are sufficient.", "However since I am not very familiar with this area, so there can be things I miss in the evaluation of this paper.", "There is one problem with the illustrations in Figure 1: predictions on the potential outcomes before the first treatment was given should have stayed on the same path, and then depart when different treatment was initiated (at different time)" ]
{ "criticism": 2, "example": 1, "importance_and_relevance": 0, "materials_and_methods": 6, "praise": 3, "presentation_and_reporting": 1, "results_and_discussion": 3, "suggestion_and_solution": 1, "total": 7 }
0.285714
0.142857
0
0.857143
0.428571
0.142857
0.428571
0.142857
7
1
1
0
1
1
1
1
1
2.428571
1.860404
0.568167
Doubly Robust Bias Reduction in Infinite Horizon Off-Policy Estimation
Infinite horizon off-policy policy evaluation is a highly challenging task due to the excessively large variance of typical importance sampling (IS) estimators. Recently, Liu et al. (2018) proposed an approach that significantly reduces the variance of infinite-horizon off-policy evaluation by estimating the stationary density ratio, but at the cost of introducing potentially high risks due to the error in density ratio estimation. In this paper, we develop a bias-reduced augmentation of their method, which can take advantage of a learned value function to obtain higher accuracy. Our method is doubly robust in that the bias vanishes when either the density ratio or value function estimation is perfect. In general, when either of them is accurate, the bias can also be reduced. Both theoretical and empirical results show that our method yields significant advantages over previous methods.
Comments : This paper provides an approach for reducing bias in long horizon off-policy evaluation (OPE) problems, extending recent work from Liu et al., 2018 that estimates the ratio of the stationary state distributions in off-policy evaluation for reducing variance. The paper provides a doubly robust method for reducing bias, since it requires separately estimating a value function. The key idea of the paper is to provide low variance, low bias OPE since their approach relies on accurately estimating the state distribution ratio and also the estimation of the value function. The paper provides a follow up on Liu et al., 2018 and other recent works in off-policy learning that tries to estimate the state distribution ratio directly, but can introduce high inaccuracy if the obtained ratio estimates are inaccurate. In line of that, this approach seems useful as it tries to reduce the error from inaccurate ratio estimates by incorporating prior works with an additional value function estimation. Furthermore, the paper introduces a new perspective to doubly robust estimation that tries to reduce bias, instead of variance in previously known OPE literature (Jiang et al., 2016; Thomas et al., 2016). It is interesting to see how doubly robust can be related to primal-dual relations and the connections between these approaches, which is a novel contribution to the best of my knowledge. - The fundamental connection comes from observing OPE withdoubly robust estimator that estimates hat{V} and equation 6 that incorporates the ratio of the state density. This is clearly written in equations 7 and 8, that are two ways of evaluating the value function with off-policy samples known in the literature. - It uses the property of the Bellman recursive expression for the estimated value function of doubly robust OPE estimator and uses it in the OPE with state density ratio, leading to equation 9, and obtaining the bridge estimator that now relies on both estimates of value function hat{V} and the state density ratio. Although initially this does not seem useful, but the authors clarify this and how a bias reducting method can be achieved as in equation 11. - It is a nice and elegant trick, exploiting the connections between OPE estimators leading to a bias reduction method that seems quite interesting. - The most interesting part of the paper comes from section 4 that exploits the connection between doubly robust approaches with Lagrangian duality, and that their approach is equivalent to a primal dual formulation of policy evaluation. This stands out in itself as a novel contribution of the paper. Equations 14 and 15 best writes down the connections, as to how policy evaluation can be formualted as a Lagrangian function for optimization. - Although the authors point out how equation 15 is equivalent to equation 11 - this does not seem straight forward at first unless carefully looked at. I would encourate the authors to perhaps add more clarity that exploits this connection, to make the paper more self-contained. - In terms of experiments, the paper compares the doubly robust approach with previous works that estimates the density ratio, along with other baseline comparisons such as weighted DR. In both the two evaluated problems, their approach seems useful in terms of obtaining better accuracy (lower MSE). - However, I am not sure to what extent this approach can be scaled to more complicated tasks for OPE? 
Are there any example domains where the proposed method fails, or is difficult to scale up to more complicated tasks? - The current set of experimental results seems adequate, given the theoretical contribution and that most OPE papers evaluate on such domains. But overall, it might be useful to analyse the significance of this approach when scaling to more complicated domains. Overall, I think the paper has a neat and elegant theoretical contribution, exploiting the connection of OPE with primal-dual frameworks, which seems quite novel to me. The experimental results are properly presented too, showing the significance of the approach compared to previously known baselines. I think papers exploiting connections with other literature are useful for the community in general, and this paper has a significantly novel theoretical contribution. Hence, I would recommend acceptance of this paper.
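To make the doubly robust idea discussed above concrete, here is a minimal sketch of a generic doubly robust off-policy value estimate that combines a stationary density-ratio estimate w(s) with a learned value function V(s). This is an illustrative, hypothetical implementation of the general recipe in the discounted setting, not the paper's exact estimator; the function names and the transition-list format are assumptions.

```python
import numpy as np

def dr_ope_estimate(transitions, w, V, pi, mu, gamma, init_states):
    """Illustrative doubly robust off-policy value estimate.

    transitions: iterable of (s, a, r, s_next) collected under behavior policy mu.
    w(s): estimated stationary density ratio d_pi(s) / d_mu(s).
    V(s): learned value function for the target policy pi.
    pi(a, s), mu(a, s): action probabilities under the target / behavior policies.
    Returns an estimate of the normalized discounted return (1 - gamma) * E[sum_t gamma^t r_t].
    """
    correction = 0.0
    for s, a, r, s_next in transitions:
        rho = pi(a, s) / mu(a, s)              # per-step importance weight
        td = r + gamma * V(s_next) - V(s)      # TD residual of the learned V
        correction += w(s) * rho * td          # density-ratio-weighted residual
    correction /= len(transitions)
    # Plug-in part of the estimate coming from the learned value function.
    baseline = (1.0 - gamma) * np.mean([V(s0) for s0 in init_states])
    return baseline + correction
```

If V is exact, every weighted TD residual has zero conditional mean, so errors in w do not bias the estimate; if w and rho are exact, the weighted residuals correct whatever error remains in V. That is the "bias vanishes when either estimate is perfect" property described in the abstract, shown here only as a sketch under the stated assumptions.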
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDoubly Robust Bias Reduction in Infinite Horizon Off-Policy Estimation\nInfinite horizon off-policy policy evaluation is a highly challenging task due to the excessively large variance of typical importance sampling (IS) estimators. Recently, Liu et al. (2018) proposed an approach that significantly reduces the variance of infinite-horizon off-policy evaluation by estimating the stationary density ratio, but at the cost of introducing potentially high risks due to the error in density ratio estimation. In this paper, we develop a bias-reduced augmentation of their method, which can take advantage of a learned value function to obtain higher accuracy. Our method is doubly robust in that the bias vanishes when either the density ratio or value function estimation is perfect. In general, when either of them is accurate, the bias can also be reduced. Both theoretical and empirical results show that our method yields significant advantages over previous methods.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nComments : This paper provides an approach for reducing bias in long horizon off-policy evaluation (OPE) problems, extending recent work from Liu et al., 2018 that estimates the ratio of the stationary state distributions in off-policy evaluation for reducing variance. The paper provides a doubly robust method for reducing bias, since it requires separately estimating a value function. The key idea of the paper is to provide low variance, low bias OPE since their approach relies on accurately estimating the state distribution ratio and also the estimation of the value function. The paper provides a follow up on Liu et al., 2018 and other recent works in off-policy learning that tries to estimate the state distribution ratio directly, but can introduce high inaccuracy if the obtained ratio estimates are inaccurate. In line of that, this approach seems useful as it tries to reduce the error from inaccurate ratio estimates by incorporating prior works with an additional value function estimation. 
Furthermore, the paper introduces a new perspective to doubly robust estimation that tries to reduce bias, instead of variance in previously known OPE literature (Jiang et al., 2016; Thomas et al., 2016). It is interesting to see how doubly robust can be related to primal-dual relations and the connections between these approaches, which is a novel contribution to the best of my knowledge. - The fundamental connection comes from observing OPE withdoubly robust estimator that estimates hat{V} and equation 6 that incorporates the ratio of the state density. This is clearly written in equations 7 and 8, that are two ways of evaluating the value function with off-policy samples known in the literature. - It uses the property of the Bellman recursive expression for the estimated value function of doubly robust OPE estimator and uses it in the OPE with state density ratio, leading to equation 9, and obtaining the bridge estimator that now relies on both estimates of value function hat{V} and the state density ratio. Although initially this does not seem useful, but the authors clarify this and how a bias reducting method can be achieved as in equation 11. - It is a nice and elegant trick, exploiting the connections between OPE estimators leading to a bias reduction method that seems quite interesting. - The most interesting part of the paper comes from section 4 that exploits the connection between doubly robust approaches with Lagrangian duality, and that their approach is equivalent to a primal dual formulation of policy evaluation. This stands out in itself as a novel contribution of the paper. Equations 14 and 15 best writes down the connections, as to how policy evaluation can be formualted as a Lagrangian function for optimization. - Although the authors point out how equation 15 is equivalent to equation 11 - this does not seem straight forward at first unless carefully looked at. I would encourate the authors to perhaps add more clarity that exploits this connection, to make the paper more self-contained. - In terms of experiments, the paper compares the doubly robust approach with previous works that estimates the density ratio, along with other baseline comparisons such as weighted DR. In both the two evaluated problems, their approach seems useful in terms of obtaining better accuracy (lower MSE). - However, I am not sure to what extent this approach can be scaled to more complicated tasks for OPE? Are there any example domains where the proposed method fails, or is difficult to scale up to more complicated tasks? - The current set of experiment results seems adequate, given the theoretical contribution and that most OPE papers evaluate on such domains. But overall, it might be useful to analyse the significance of this approach when scaling to more complicated domains. _x000b_ Overall, I think the paper has a neat and elegant theoretical contribution, exploiting the connection of OPE with primal-dual frameworks that seems quite novel to me. Experimental set of results are properly presented too showing significance of the approach compared to previously known baselines. I think such papers exploting connections with other literature is useful for the community in general, and this paper has significantly novel theoretical contribution. Hence, I would recommend for acceptance of this paper.", "role": "assistant" } ]
[ "Comments : This paper provides an approach for reducing bias in long horizon off-policy evaluation (OPE) problems, extending recent work from Liu et al., 2018 that estimates the ratio of the stationary state distributions in off-policy evaluation for reducing variance.", "The paper provides a doubly robust method for reducing bias, since it requires separately estimating a value function.", "The key idea of the paper is to provide low variance, low bias OPE since their approach relies on accurately estimating the state distribution ratio and also the estimation of the value function.", "The paper provides a follow up on Liu et al., 2018 and other recent works in off-policy learning that tries to estimate the state distribution ratio directly, but can introduce high inaccuracy if the obtained ratio estimates are inaccurate.", "In line of that, this approach seems useful as it tries to reduce the error from inaccurate ratio estimates by incorporating prior works with an additional value function estimation.", "Furthermore, the paper introduces a new perspective to doubly robust estimation that tries to reduce bias, instead of variance in previously known OPE literature (Jiang et al., 2016; Thomas et al., 2016).", "It is interesting to see how doubly robust can be related to primal-dual relations and the connections between these approaches, which is a novel contribution to the best of my knowledge.", "- The fundamental connection comes from observing OPE withdoubly robust estimator that estimates hat{V} and equation 6 that incorporates the ratio of the state density.", "This is clearly written in equations 7 and 8, that are two ways of evaluating the value function with off-policy samples known in the literature.", "- It uses the property of the Bellman recursive expression for the estimated value function of doubly robust OPE estimator and uses it in the OPE with state density ratio, leading to equation 9, and obtaining the bridge estimator that now relies on both estimates of value function hat{V} and the state density ratio.", "Although initially this does not seem useful, but the authors clarify this and how a bias reducting method can be achieved as in equation 11.", "- It is a nice and elegant trick, exploiting the connections between OPE estimators leading to a bias reduction method that seems quite interesting.", "- The most interesting part of the paper comes from section 4 that exploits the connection between doubly robust approaches with Lagrangian duality, and that their approach is equivalent to a primal dual formulation of policy evaluation.", "This stands out in itself as a novel contribution of the paper.", "Equations 14 and 15 best writes down the connections, as to how policy evaluation can be formualted as a Lagrangian function for optimization.", "- Although the authors point out how equation 15 is equivalent to equation 11 - this does not seem straight forward at first unless carefully looked at.", "I would encourate the authors to perhaps add more clarity that exploits this connection, to make the paper more self-contained.", "- In terms of experiments, the paper compares the doubly robust approach with previous works that estimates the density ratio, along with other baseline comparisons such as weighted DR.", "In both the two evaluated problems, their approach seems useful in terms of obtaining better accuracy (lower MSE).", "- However, I am not sure to what extent this approach can be scaled to more complicated tasks for OPE?", "Are there any example domains where 
the proposed method fails, or is difficult to scale up to more complicated tasks?", "- The current set of experiment results seems adequate, given the theoretical contribution and that most OPE papers evaluate on such domains.", "But overall, it might be useful to analyse the significance of this approach when scaling to more complicated domains.", "_x000b_ Overall, I think the paper has a neat and elegant theoretical contribution, exploiting the connection of OPE with primal-dual frameworks that seems quite novel to me.", "Experimental set of results are properly presented too showing significance of the approach compared to previously known baselines.", "I think such papers exploting connections with other literature is useful for the community in general, and this paper has significantly novel theoretical contribution.", "Hence, I would recommend for acceptance of this paper." ]
{ "criticism": 3, "example": 1, "importance_and_relevance": 11, "materials_and_methods": 21, "praise": 8, "presentation_and_reporting": 3, "results_and_discussion": 11, "suggestion_and_solution": 4, "total": 27 }
0.111111
0.037037
0.407407
0.777778
0.296296
0.111111
0.407407
0.148148
27
1
1
1
1
1
1
1
1
2.296296
-0.79706
3.093356
Learning Compositional Koopman Operators for Model-Based Control
Finding an embedding space for a linear approximation of a nonlinear dynamical system enables efficient system identification and control synthesis. The Koopman operator theory lays the foundation for identifying the nonlinear-to-linear coordinate transformations with data-driven methods. Recently, researchers have proposed to use deep neural networks as a more expressive class of basis functions for calculating the Koopman operators. These approaches, however, assume a fixed dimensional state space; they are therefore not applicable to scenarios with a variable number of objects. In this paper, we propose to learn compositional Koopman operators, using graph neural networks to encode the state into object-centric embeddings and using a block-wise linear transition matrix to regularize the shared structure across objects. The learned dynamics can quickly adapt to new environments of unknown physical parameters and produce control signals to achieve a specified goal. Our experiments on manipulating ropes and controlling soft robots show that the proposed method has better efficiency and generalization ability than existing baselines.
This paper proposes to learn compositional Koopman operators, using graph neural networks to encode the state into object-centric embeddings and a block-wise linear transition matrix to regularize the shared structure across objects. The combination of deep Koopman operators with graph neural nets is very novel and interesting. The experiments are also well designed, and the results shown in the paper indicate the effectiveness and efficiency of the algorithm in both simulation and control. In conclusion, I think this work can inspire more work into modeling larger and more complex systems by integrating the power of Koopman theory and the expressiveness of neural networks.
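As a reading aid for the block-wise linear structure mentioned above, the sketch below shows a toy object-centric Koopman-style model: per-object states are encoded into embeddings and advanced by a single shared linear operator, so the same learned dynamics apply regardless of how many objects are present. This is a hypothetical simplification (a plain MLP stands in for the paper's graph neural network encoder); the class and parameter names are assumptions.

```python
import torch
import torch.nn as nn

class CompositionalKoopmanSketch(nn.Module):
    """Toy version: object-centric embeddings evolved by one shared linear operator."""

    def __init__(self, state_dim, embed_dim, ctrl_dim):
        super().__init__()
        # Stand-in for the graph-network encoder/decoder used in the paper.
        self.encoder = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                     nn.Linear(64, embed_dim))
        self.decoder = nn.Linear(embed_dim, state_dim)
        # Shared linear dynamics in latent space: z_{t+1} = A z_t + B u_t per object.
        self.A = nn.Parameter(torch.eye(embed_dim))
        self.B = nn.Parameter(torch.zeros(embed_dim, ctrl_dim))

    def forward(self, states, controls):
        # states: [n_objects, state_dim], controls: [n_objects, ctrl_dim]
        z = self.encoder(states)                      # object-centric embeddings
        z_next = z @ self.A.T + controls @ self.B.T   # linear transition in latent space
        return self.decoder(z_next)                   # predicted next states
```

Because the transition is linear and shared across objects, the same A and B apply to any number of objects, which is the property that lets this kind of model generalize to variable-sized systems.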
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning Compositional Koopman Operators for Model-Based Control\nFinding an embedding space for a linear approximation of a nonlinear dynamical system enables efficient system identification and control synthesis. The Koopman operator theory lays the foundation for identifying the nonlinear-to-linear coordinate transformations with data-driven methods. Recently, researchers have proposed to use deep neural networks as a more expressive class of basis functions for calculating the Koopman operators. These approaches, however, assume a fixed dimensional state space; they are therefore not applicable to scenarios with a variable number of objects. In this paper, we propose to learn compositional Koopman operators, using graph neural networks to encode the state into object-centric embeddings and using a block-wise linear transition matrix to regularize the shared structure across objects. The learned dynamics can quickly adapt to new environments of unknown physical parameters and produce control signals to achieve a specified goal. Our experiments on manipulating ropes and controlling soft robots show that the proposed method has better efficiency and generalization ability than existing baselines.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes to learn compositional Koopman operators using graph neural networks to encode the state into object-centric embeddings and using a block-wise linear transition matrix to regularize the shared structure across objects. The combination of deep Koopman operator with graph neural nets is very novel and interesting. The experiments are also well designed and the results shown in the paper also indicate the effectiveness and efficiency of the algorithm in both simulation and control. In conclusion, I think this work can inspire more wok into modeling larger and more complex systems by integrating the power of the Koopman theory and the expressiveness of neural networks.", "role": "assistant" } ]
[ "This paper proposes to learn compositional Koopman operators using graph neural networks to encode the state into object-centric embeddings and using a block-wise linear transition matrix to regularize the shared structure across objects.", "The combination of deep Koopman operator with graph neural nets is very novel and interesting.", "The experiments are also well designed and the results shown in the paper also indicate the effectiveness and efficiency of the algorithm in both simulation and control.", "In conclusion, I think this work can inspire more wok into modeling larger and more complex systems by integrating the power of the Koopman theory and the expressiveness of neural networks." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 3, "praise": 2, "presentation_and_reporting": 0, "results_and_discussion": 2, "suggestion_and_solution": 0, "total": 4 }
0
0
0.5
0.75
0.5
0
0.5
0
4
0
0
1
1
1
0
1
0
2.25
0.971623
1.278377
Asymptotics of Wide Networks from Feynman Diagrams
Understanding the asymptotic behavior of wide networks is of considerable interest. In this work, we present a general method for analyzing this large width behavior. The method is an adaptation of Feynman diagrams, a standard tool for computing multivariate Gaussian integrals. We apply our method to study training dynamics, improving existing bounds and deriving new results on wide network evolution during stochastic gradient descent. Going beyond the strict large width limit, we present closed-form expressions for higher-order terms governing wide network training, and test these predictions empirically.
This paper proposes a new tool based on Feynman diagrams to analyze wide networks (e.g., feed-forward networks with one or more large hidden layers or CNNs with a large number of filters). The main contributions of the paper are: - a new method (using Feynman diagrams) to bound the asymptotic behavior of correlation functions (ensemble averages of the network function and its derivatives); the method is presented as a conjecture. - tighter bounds for gradient flow of wide networks - an extended analysis of SGD for wide networks - a formalism for deriving finite-width corrections. The study of (infinitely) wide networks has been active over the last few years. A better understanding of wide networks could, amongst other things, shed light on recent empirical results related to over-parametrized networks. As such, improving our theoretical understanding of wide networks, and especially the properties of finite-width networks, which is what this paper explores, seems significant and potentially very impactful.
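For readers unfamiliar with the term, a "correlation function" here is simply an ensemble average over random network parameters. The small sketch below estimates the simplest one, the two-point function C(x1, x2) = E_theta[f(x1) f(x2)], for a single-hidden-layer ReLU network with 1/sqrt(width) output scaling. It is only a Monte Carlo illustration of the kind of quantity the paper's diagrammatic rules bound, not the paper's method; the architecture and scaling convention are assumptions.

```python
import numpy as np

def two_point_fn(width, x1, x2, n_samples=2000, seed=0):
    """Monte Carlo estimate of C(x1, x2) = E_theta[f(x1) f(x2)] over random inits."""
    rng = np.random.default_rng(seed)
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    total = 0.0
    for _ in range(n_samples):
        W = rng.standard_normal((width, x1.size))    # hidden-layer weights
        v = rng.standard_normal(width)               # output weights
        f = lambda x: v @ np.maximum(W @ x, 0.0) / np.sqrt(width)
        total += f(x1) * f(x2)
    return total / n_samples

# All widths share the same mean here (the Gaussian-process kernel value); the paper's
# interest is in how such ensemble averages, and their higher-order analogues during
# training, behave as the width grows, including finite-width corrections.
for width in (16, 128, 1024):
    print(width, two_point_fn(width, [1.0, 0.0], [0.6, 0.8]))
```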
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAsymptotics of Wide Networks from Feynman Diagrams\nUnderstanding the asymptotic behavior of wide networks is of considerable interest. In this work, we present a general method for analyzing this large width behavior. The method is an adaptation of Feynman diagrams, a standard tool for computing multivariate Gaussian integrals. We apply our method to study training dynamics, improving existing bounds and deriving new results on wide network evolution during stochastic gradient descent. Going beyond the strict large width limit, we present closed-form expressions for higher-order terms governing wide network training, and test these predictions empirically.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a new tool based on Feynman diagrams to analyze wide networks (e.g., feed-forward networks with one or more large hidden layers or CNNs with a large number of filters). The main contributions of the paper are: - a new method (using Feynman diagrams) to bound the asymptotic behavior of correlation functions (ensemble averages of the network functions and its derivatives). The method is presented as a conjecture. - tighter bounds for gradient flow of wide networks - an extended analysis of SGD for wide networks - a formalism for deriving finite-width corrections The study of (infinitely) wide networks has been active over the last few years. A better understanding of wide networks could, amongst other things, shed light on recent empirical results related to over-parametrized networks. As such, improving our theoretical understanding of wide networks and especially properties of finite-width networks, which is what this paper explores, seems significant and potentially very impactful.", "role": "assistant" } ]
[ "This paper proposes a new tool based on Feynman diagrams to analyze wide networks (e.g., feed-forward networks with one or more large hidden layers or CNNs with a large number of filters).", "The main contributions of the paper are: - a new method (using Feynman diagrams) to bound the asymptotic behavior of correlation functions (ensemble averages of the network functions and its derivatives).", "The method is presented as a conjecture.", "- tighter bounds for gradient flow of wide networks - an extended analysis of SGD for wide networks - a formalism for deriving finite-width corrections The study of (infinitely) wide networks has been active over the last few years.", "A better understanding of wide networks could, amongst other things, shed light on recent empirical results related to over-parametrized networks.", "As such, improving our theoretical understanding of wide networks and especially properties of finite-width networks, which is what this paper explores, seems significant and potentially very impactful." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 5, "materials_and_methods": 4, "praise": 1, "presentation_and_reporting": 0, "results_and_discussion": 2, "suggestion_and_solution": 1, "total": 6 }
0
0
0.833333
0.666667
0.166667
0
0.333333
0.166667
6
0
0
1
1
1
0
1
1
2.166667
1.393328
0.773339
Simplified Action Decoder for Deep Multi-Agent Reinforcement Learning
In recent years we have seen fast progress on a number of benchmark problems in AI, with modern methods achieving near or super human performance in Go, Poker and Dota. One common aspect of all of these challenges is that they are by design adversarial or, technically speaking, zero-sum. In contrast to these settings, success in the real world commonly requires humans to collaborate and communicate with others, in settings that are, at least partially, cooperative. In the last year, the card game Hanabi has been established as a new benchmark environment for AI to fill this gap. In particular, Hanabi is interesting to humans since it is entirely focused on theory of mind, i.e. the ability to effectively reason over the intentions, beliefs and point of view of other agents when observing their actions. Learning to be informative when observed by others is an interesting challenge for Reinforcement Learning (RL): Fundamentally, RL requires agents to explore in order to discover good policies. However, when done naively, this randomness will inherently make their actions less informative to others during training. We present a new deep multi-agent RL method, the Simplified Action Decoder (SAD), which resolves this contradiction exploiting the centralized training phase. During training SAD allows other agents to not only observe the (exploratory) action chosen, but agents instead also observe the greedy action of their team mates. By combining this simple intuition with an auxiliary task for state prediction and best practices for multi-agent learning, SAD establishes a new state of the art for 2-5 players on the self-play part of the Hanabi challenge.
The paper presents SAD (Simplified Action Decoder), a new method to address an issue in Centralized Training / Decentralized Control regimes: that exploratory actions during training can make those actions less informative for the agents who observe and attempt to learn from them. The method addresses this issue by allowing the agent to select two actions: one (the possibly exploratory action) is applied to the environment and the other (the greedy action) is presented to other agents as part of their observations for the next turn. The authors use a distributed recurrent DQN architecture and apply the resulting agent to a toy problem and to the Hanabi Learning Environment. The authors claim that the method offers improvement over the Bayesian Action Decoder (BAD), which has been applied to the same environments, being simpler, more sample efficient, and achieving overall higher scores. This is confirmed by their results: the agent outperforms BAD in 2-player Hanabi (where BAD was previously state-of-the-art) and achieves the best scores of any learning agent in the 3-5 player versions (although not as high as some non-learning agents such as WTFWThat). The paper does not directly address the ad-hoc cooperation aspect of Hanabi, and it is unclear whether the method could be used as-is for that problem, due to its reliance on centralized training. Nevertheless, the paper represents a relevant improvement to the self-play aspect of the game, and the core insight that the method leverages could conceivably be applied to minimize the noise introduced by exploration in other cooperative CT/DC problems. For this reason, I recommend the paper to be accepted. Typos/language issues: Introduction: “spend vast amounts of time coordinate” -> coordinating. Section 3.1: “While our method are general” -> methods. “accomplish a state of the art in Hanabi”: I see what you mean, but this is a strange phrasing.
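The core trick described above — letting teammates observe the greedy action alongside the exploratory action actually executed during centralized training — fits in a few lines. Below is a minimal, hypothetical sketch for an epsilon-greedy agent; the function and variable names are assumptions, and the real method wraps this idea inside a distributed recurrent DQN with auxiliary losses.

```python
import numpy as np

def sad_step(q_values, epsilon, rng):
    """Return (executed_action, greedy_action) for one agent, in the spirit of SAD."""
    greedy_action = int(np.argmax(q_values))
    if rng.random() < epsilon:                       # exploratory action for the environment
        executed_action = int(rng.integers(len(q_values)))
    else:
        executed_action = greedy_action
    return executed_action, greedy_action

# During centralized training, a teammate's next observation could then include both:
#   obs_next = concat(env_obs, one_hot(executed_action), one_hot(greedy_action))
# so exploration noise no longer degrades how informative the observed action is.
```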
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSimplified Action Decoder for Deep Multi-Agent Reinforcement Learning\nIn recent years we have seen fast progress on a number of benchmark problems in AI, with modern methods achieving near or super human performance in Go, Poker and Dota. One common aspect of all of these challenges is that they are by design adversarial or, technically speaking, zero-sum. In contrast to these settings, success in the real world commonly requires humans to collaborate and communicate with others, in settings that are, at least partially, cooperative. In the last year, the card game Hanabi has been established as a new benchmark environment for AI to fill this gap. In particular, Hanabi is interesting to humans since it is entirely focused on theory of mind, i.e. the ability to effectively reason over the intentions, beliefs and point of view of other agents when observing their actions. Learning to be informative when observed by others is an interesting challenge for Reinforcement Learning (RL): Fundamentally, RL requires agents to explore in order to discover good policies. However, when done naively, this randomness will inherently make their actions less informative to others during training. We present a new deep multi-agent RL method, the Simplified Action Decoder (SAD), which resolves this contradiction exploiting the centralized training phase. During training SAD allows other agents to not only observe the (exploratory) action chosen, but agents instead also observe the greedy action of their team mates. By combining this simple intuition with an auxiliary task for state prediction and best practices for multi-agent learning, SAD establishes a new state of the art for 2-5 players on the self-play part of the Hanabi challenge.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper presents SAD (Simplified Action Decoder), a new method to address an issue in Centralized Training / Decentralized Control regimes: that exploratory actions during training can make those actions less informative for the agents who observe and attempt to learn from it. The method addresses this issue by allowing the agent to select two actions: one (the possibly exploratory action) is applied to the environment and the other (the greedy action) is presented to other agents as part of their observations for the next turn. The authors use a distributed recurrent DQN architecture and apply the resulting agent to a toy problem and to the Hanabi Learning Environment. The authors claim that the method offers improvement over the Bayesian Action Decoder (BAD), that has been similarly applied to the same environments, being simpler, more sample efficient and achieving overall higher scores, which is confirmed by their results: the agent outperforms BAD in 2-player Hanabi (where BAD was previously state-of-the-art) and the best scores out of any learning agent (although not as high as some non-learning agents such as WTFWThat in the 3-5 player versions. The paper does not directly address the ad-hoc cooperation aspect of Hanabi, and it is unclear wheter the method could be used as-is for that problem, due to its reliance on centralized training. Nevertheless, the paper represents a relevant improvement to the self-play aspect of the game, and the core insight that the method leverages could conceivably be applied to minimize the noise introduced by exploration in other cooperative CT/DC problems. For this reason, I recommend the paper to be accepted. Typos/language issues: Introduction: “spend vast amounts of time coordinate” -> coordinating Section 3.1: “While our method are general” -> methods *accomplish a state of the art in Hanabi* I see what you mean but this is a strange phrasing.", "role": "assistant" } ]
[ "The paper presents SAD (Simplified Action Decoder), a new method to address an issue in Centralized Training / Decentralized Control regimes: that exploratory actions during training can make those actions less informative for the agents who observe and attempt to learn from it.", "The method addresses this issue by allowing the agent to select two actions: one (the possibly exploratory action) is applied to the environment and the other (the greedy action) is presented to other agents as part of their observations for the next turn.", "The authors use a distributed recurrent DQN architecture and apply the resulting agent to a toy problem and to the Hanabi Learning Environment.", "The authors claim that the method offers improvement over the Bayesian Action Decoder (BAD), that has been similarly applied to the same environments, being simpler, more sample efficient and achieving overall higher scores, which is confirmed by their results: the agent outperforms BAD in 2-player Hanabi (where BAD was previously state-of-the-art) and the best scores out of any learning agent (although not as high as some non-learning agents such as WTFWThat in the 3-5 player versions.", "The paper does not directly address the ad-hoc cooperation aspect of Hanabi, and it is unclear wheter the method could be used as-is for that problem, due to its reliance on centralized training.", "Nevertheless, the paper represents a relevant improvement to the self-play aspect of the game, and the core insight that the method leverages could conceivably be applied to minimize the noise introduced by exploration in other cooperative CT/DC problems.", "For this reason, I recommend the paper to be accepted.", "Typos/language issues: Introduction: “spend vast amounts of time coordinate” -> coordinating Section 3.1: “While our method are general” -> methods *accomplish a state of the art in Hanabi* I see what you mean but this is a strange phrasing." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 7, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 3, "suggestion_and_solution": 1, "total": 8 }
0.25
0
0.25
0.875
0.25
0.125
0.375
0.125
8
1
0
1
1
1
1
1
1
2.25
1.855439
0.394561
Neural Symbolic Reader: Scalable Integration of Distributed and Symbolic Representations for Reading Comprehension
Integrating distributed representations with symbolic operations is essential for reading comprehension requiring complex reasoning, such as counting, sorting and arithmetics, but most existing approaches are hard to scale to more domains or more complex reasoning. In this work, we propose the Neural Symbolic Reader (NeRd), which includes a reader, e.g., BERT, to encode the passage and question, and a programmer, e.g., LSTM, to generate a program that is executed to produce the answer. Compared to previous works, NeRd is more scalable in two aspects: (1) domain-agnostic, i.e., the same neural architecture works for different domains; (2) compositional, i.e., when needed, complex programs can be generated by recursively applying the predefined operators, which become executable and interpretable representations for more complex reasoning. Furthermore, to overcome the challenge of training NeRd with weak supervision, we apply data augmentation techniques and hard Expectation-Maximization (EM) with thresholding. On DROP, a challenging reading comprehension dataset that requires discrete reasoning, NeRd achieves 2.5%/1.8% absolute improvement over the state-of-the-art on EM/F1 metrics. With the same architecture, NeRd significantly outperforms the baselines on MathQA, a math problem benchmark that requires multiple steps of reasoning, by 25.5% absolute increment on accuracy when trained on all the annotated programs. More importantly, NeRd still beats the baselines even when only 20% of the program annotations are given.
-- Define cascade errors when you first use the phrase. -- Basic English grammar could be fixed, but it is not interfering with understanding. -- What is the early stopping criterion in Alg 1? -- Did you try any other values for the initial threshold and decay factor? -- At the end of Section 4.1 DROP: “, but not the same.” - I can't parse what this last clause is supposed to mean. -- The diff/sum example in Table 2 is confusing; the program appears to sum up the numbers, but the result is a subtraction without a sum operation in it. It would be clearer to show the sum in the result line as well rather than distribute the subtraction. Also, shouldn't it be diff(9, sum(10, 12))? -- I think you should pull at least some commentary about the constant used in Table 3 from Appendix B and include it in the main paper (or at least mention that Appendix B is the place to look). Can you add a table in an appendix showing the complete list of operators used? -- Nice results in Table 4 on the dev set. Are there test set results as well? -- The organization of the Baselines section 4.3 and the Results section 4.4 is confusing. For example, you mention that you test different variants of NeRd, operator variants, and MathQA, but then the results are not mentioned for these experiments until the next page. I found myself immediately looking for the numbers/results when you introduce the experiment. I would pair each experiment description with its results rather than grouping all experiment descriptions and then grouping all results, especially when the order of the experiment descriptions does not match the order of the results presented. For example, in Baselines you discuss training variants and then operators; then in Results you discuss operators before variants. It is too disconnected and makes the reader jump around a lot. The same goes for the DROP baselines, where you mention a number of models, and I would prefer the results/discussion paired with each one rather than having to wait for it further down. -- Overall it seems like solid work; good empirical results showing improvements from each purported contribution. The model itself is a relatively simple construction of basic components, but the combination with the DSL is intuitive and makes sense. I don't think the novelty of the model is the main selling point anyway; it is the training variants and the demonstration of how well a DSL approach can do when combined with previously introduced methods. -- I find the model description to be slightly unclear. In Fig 1, for example, there is an arrow that connects the passage to the compositional programs. What does that arrow represent? I think you should elaborate on how the attention over the encoded text interacts with the attention over previously generated tokens. Equations would make this far more explicit; as is, I am left with a lot of questions on how to implement your model. Maybe you can add to your appendix? Or release your code? Neither is mentioned.
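Since the reviewer's question about the Table 2 example hinges on how nested operators compose, here is a tiny, hypothetical interpreter for NeRd-style compositional programs with just two illustrative operators. It is not the paper's DSL or executor — the operator names and semantics are assumptions — but it makes explicit how a nested expression such as diff(9, sum(10, 12)) would be evaluated.

```python
def execute(program):
    """Recursively evaluate a compositional program.

    A program is either a literal number or a tuple (operator, arg1, arg2, ...),
    where each argument is itself a program.
    """
    if isinstance(program, (int, float)):
        return program
    op, *args = program
    vals = [execute(arg) for arg in args]
    if op == "sum":
        return sum(vals)
    if op == "diff":
        return vals[0] - vals[1]
    raise ValueError(f"unknown operator: {op}")

print(execute(("diff", 9, ("sum", 10, 12))))   # 9 - (10 + 12) = -13
```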
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNeural Symbolic Reader: Scalable Integration of Distributed and Symbolic Representations for Reading Comprehension\nIntegrating distributed representations with symbolic operations is essential for reading comprehension requiring complex reasoning, such as counting, sorting and arithmetics, but most existing approaches are hard to scale to more domains or more complex reasoning. In this work, we propose the Neural Symbolic Reader (NeRd), which includes a reader, e.g., BERT, to encode the passage and question, and a programmer, e.g., LSTM, to generate a program that is executed to produce the answer. Compared to previous works, NeRd is more scalable in two aspects: (1) domain-agnostic, i.e., the same neural architecture works for different domains; (2) compositional, i.e., when needed, complex programs can be generated by recursively applying the predefined operators, which become executable and interpretable representations for more complex reasoning. Furthermore, to overcome the challenge of training NeRd with weak supervision, we apply data augmentation techniques and hard Expectation-Maximization (EM) with thresholding. On DROP, a challenging reading comprehension dataset that requires discrete reasoning, NeRd achieves 2.5%/1.8% absolute improvement over the state-of-the-art on EM/F1 metrics. With the same architecture, NeRd significantly outperforms the baselines on MathQA, a math problem benchmark that requires multiple steps of reasoning, by 25.5% absolute increment on accuracy when trained on all the annotated programs. More importantly, NeRd still beats the baselines even when only 20% of the program annotations are given.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n-- define cascade errors when you first use the phrase -- basic english grammar could be fixed but is not interfering with understanding -- what is the early stopping criterion in Alg 1? -- did you try any other values for the initial threshold and decay factor? 
-- At the end of Section 4.1 DROP: *, but not the same.* - I can*t parse what this last clause is supposed to mean. --the diff sum example in table 2 is confusing; the program appears to sum up the numbers but the result is a subtraction without a sum operation in it. Would be clearer to show the sum in the result line as well rather than distribute the subtraction. Also, shouldn*t it be diff(9, sum(10, 12))? -- I think you should pull at least some commentary about the constant used in Table 3 from Appendix B and include it in the main paper (or at least mention Appendix B is the place to look). Can you add a table in an appendix showing the complete list of operators used? -- Nice results in Table 4 on the dev set. Are there Test set results as well? -- The organization of the Baselines 4.3 section and the Results 4.4 is confusing. For example, you mention that you test different variants of NeRd, operator variants, and mathqa, but then the results are not mentioned for these experiments until the next page. I found myself immediately looking for the numbers/results when you introduce the experiment. I would pair your experiment description with the results rather than grouping all experiment descriptions and then grouping all results, especially when the order of the experiment descriptions does not match the order of the results presented. For example, in baselines you discuss training variants and then operators. Then in Results you discuss operators before variants. It is too disconnected and makes the reader jump around a bunch. Same goes for the drop baselines where you mention a bunch of models, and I would prefer the Results/discussion paired with each one, rather than having to wait for it down below. -- Overall it seems like a solid work; good empirical results showing improvements of each purported contribution. The model itself is a relatively simple construction of basic component, but the combination with the DSL is intuitive and makes sense. I don*t think the novelty in model here is the main selling point anyways; the training variants and the demonstration of how well a DSL approach can do combined with previously introduced methods. -- I find the model description to be slightly unclear. In Fig 1 for example there is an arrow that connects passage to compositional programs. What does that arrow represent? I think you should elaborate on how the attention over the encoded text interacts with the attention over previously generated tokens. Equations would make this far more explicit as is I am left with a lot of questions on how to implement your model. Maybe you can add to your appendix? Or release your code? That is mentioned either.", "role": "assistant" } ]
[ "-- define cascade errors when you first use the phrase -- basic english grammar could be fixed but is not interfering with understanding -- what is the early stopping criterion in Alg 1?", "-- did you try any other values for the initial threshold and decay factor?", "-- At the end of Section 4.1 DROP: *, but not the same.", "* - I can*t parse what this last clause is supposed to mean.", "--the diff sum example in table 2 is confusing; the program appears to sum up the numbers but the result is a subtraction without a sum operation in it.", "Would be clearer to show the sum in the result line as well rather than distribute the subtraction.", "Also, shouldn*t it be diff(9, sum(10, 12))?", "-- I think you should pull at least some commentary about the constant used in Table 3 from Appendix B and include it in the main paper (or at least mention Appendix B is the place to look).", "Can you add a table in an appendix showing the complete list of operators used?", "-- Nice results in Table 4 on the dev set.", "Are there Test set results as well?", "-- The organization of the Baselines 4.3 section and the Results 4.4 is confusing.", "For example, you mention that you test different variants of NeRd, operator variants, and mathqa, but then the results are not mentioned for these experiments until the next page.", "I found myself immediately looking for the numbers/results when you introduce the experiment.", "I would pair your experiment description with the results rather than grouping all experiment descriptions and then grouping all results, especially when the order of the experiment descriptions does not match the order of the results presented.", "For example, in baselines you discuss training variants and then operators.", "Then in Results you discuss operators before variants.", "It is too disconnected and makes the reader jump around a bunch.", "Same goes for the drop baselines where you mention a bunch of models, and I would prefer the Results/discussion paired with each one, rather than having to wait for it down below.", "-- Overall it seems like a solid work; good empirical results showing improvements of each purported contribution.", "The model itself is a relatively simple construction of basic component, but the combination with the DSL is intuitive and makes sense.", "I don*t think the novelty in model here is the main selling point anyways; the training variants and the demonstration of how well a DSL approach can do combined with previously introduced methods.", "-- I find the model description to be slightly unclear.", "In Fig 1 for example there is an arrow that connects passage to compositional programs.", "What does that arrow represent?", "I think you should elaborate on how the attention over the encoded text interacts with the attention over previously generated tokens.", "Equations would make this far more explicit as is I am left with a lot of questions on how to implement your model.", "Maybe you can add to your appendix?", "Or release your code?", "That is mentioned either." ]
{ "criticism": 7, "example": 4, "importance_and_relevance": 1, "materials_and_methods": 13, "praise": 1, "presentation_and_reporting": 21, "results_and_discussion": 12, "suggestion_and_solution": 10, "total": 30 }
0.233333
0.133333
0.033333
0.433333
0.033333
0.7
0.4
0.333333
30
1
1
1
1
1
1
1
1
2.3
-2.261122
4.561122