Dataset schema (column: type, observed range):

- title: string, lengths 2 to 153
- abstract: string, lengths 0 to 2.15k
- review: string, lengths 97 to 8.43k
- source: string, 4 classes
- review_src: string, 1 class
- thinking_trace: string, 1 class
- conversations: list, length 2
- review_sentences: sequence, lengths 1 to 61
- criteria_count: dict (per-category sentence counts, see the records below)
- criticism: float64, 0 to 1
- example: float64, 0 to 1
- importance_and_relevance: float64, 0 to 1
- materials_and_methods: float64, 0 to 1
- praise: float64, 0 to 1
- presentation_and_reporting: float64, 0 to 1
- results_and_discussion: float64, 0 to 1
- suggestion_and_solution: float64, 0 to 1
- total: int64, 1 to 61
- has_criticism: int64, 0 or 1
- has_example: int64, 0 or 1
- has_importance_and_relevance: int64, 0 or 1
- has_materials_and_methods: int64, 0 or 1
- has_praise: int64, 0 or 1
- has_presentation_and_reporting: int64, 0 or 1
- has_results_and_discussion: int64, 0 or 1
- has_suggestion_and_solution: int64, 0 or 1
- reward_value: float64, 2.13 to 6
- reward_value_length_adjusted: float64, -34.13 to 3.73
- length_penalty: float64, 0 to 36.4
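The listing above is the column schema of a peer-review dataset: each record holds a paper title and abstract, a human-written review, the review split into sentences, per-category sentence counts, and derived reward scores. Below is a minimal sketch of how such a dataset could be loaded and inspected, assuming it is published as a Hugging Face dataset; the repository id is a placeholder, not the dataset's actual name.

```python
# Minimal sketch: load and inspect a dataset with the schema summarized above.
# "your-org/review-reward-data" is a placeholder repository id (an assumption);
# substitute the real repo id or local data files.
from datasets import load_dataset

ds = load_dataset("your-org/review-reward-data", split="train")

print(ds.features)            # column names and types, matching the schema above
row = ds[0]                   # one record
print(row["title"])           # e.g. "Mixed Setting Training Methods for ..."
print(row["criteria_count"])  # per-category sentence counts plus "total"
print(row["reward_value"])    # scalar reward for the review
```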
Mixed Setting Training Methods for Incremental Slot-Filling Tasks
Model training remains a dominant financial cost and time investment in machine learning applications. Developing and debugging models often involve iterative training, further exacerbating this issue. With growing interest in increasingly complex models, there is a need for techniques that help to reduce overall training effort. While incremental training can save substantial time and cost by training an existing model on a small subset of data, little work has explored policies for determining when incremental training provides adequate model performance versus full retraining. We provide a method-agnostic algorithm for deciding when to incrementally train versus fully train. We call this setting of non-deterministic full- or incremental training "Mixed Setting Training". Upon evaluation in slot-filling tasks, we find that this algorithm provides a bounded error, avoids catastrophic forgetting, and results in a significant speedup over a policy of always fully training.
This paper treats the important problem of when a model should be re-trained from scratch or incrementally trained as new data becomes available. Unfortunately, it is very hard to extract any useful conclusion out of this 6-pages paper, and I think considerable work is required before it can be useful for the community. The problem tackled is very important, but it is not very well formulated (e.g. their notation keeps changing along the paper, the assumptions are never properly stated, etc.) and their proposed solution is quite confusing, specially the “Reservoir sampling” part. Furthermore, the experimental section is extremely poor: the experiments performed are not well explained, the reported results are very weakly linked to their proposed method, there are no error bars what so ever in any of their single-curve plots, and even if we make an effort to understand which of the curves is their method, there doesn’t seem to be any substantial difference with respect to the other (baseline?). There are considerable typos throughout the paper, some of them considerably obscuring the meaning of their sentences. Just to report a few: - Fig 2 legend (which by the way is not referred anywhere in the text…) “A one-shot SGD settings as a function of ratio of initial to incremental data.” - Are “N” and “n” used interchangably in the notation? As all their notation, we encourage them to fix it and define it clearly. - “The reservoir of samples is inferred before a training epoch”. Do they mean sampled, or that they run the inference of their model on all the data-points in the reservoir? Other misc comments: - Instead of saying that your model is characterized by a weight matrix W, you should say “the weights of our model” or the “parameters of the model”. In modern Machine Learning this might be much more than a single weight matrix. - It is very hard to understand the utility of the reservoir sampling, and there should be a comparison to other simpler ways of sampling from the available data (like uniformly sampling n elements). No ablation studies what so ever are performed.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMixed Setting Training Methods for Incremental Slot-Filling Tasks\nModel training remains a dominant financial cost and time investment in machine learning applications. Developing and debugging models often involve iterative training, further exacerbating this issue. With growing interest in increasingly complex models, there is a need for techniques that help to reduce overall training effort. While incremental training can save substantial time and cost by training an existing model on a small subset of data, little work has explored policies for determining when incremental training provides adequate model performance versus full retraining. We provide a method-agnostic algorithm for deciding when to incrementally train versus fully train. We call this setting of non-deterministic full- or incremental training ``Mixed Setting Training*. Upon evaluation in slot-filling tasks, we find that this algorithm provides a bounded error, avoids catastrophic forgetting, and results in a significant speedup over a policy of always fully training.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper treats the important problem of when a model should be re-trained from scratch or incrementally trained as new data becomes available. Unfortunately, it is very hard to extract any useful conclusion out of this 6-pages paper, and I think considerable work is required before it can be useful for the community. The problem tackled is very important, but it is not very well formulated (e.g. their notation keeps changing along the paper, the assumptions are never properly stated, etc.) and their proposed solution is quite confusing, specially the “Reservoir sampling” part. 
Furthermore, the experimental section is extremely poor: the experiments performed are not well explained, the reported results are very weakly linked to their proposed method, there are no error bars what so ever in any of their single-curve plots, and even if we make an effort to understand which of the curves is their method, there doesn’t seem to be any substantial difference with respect to the other (baseline?). There are considerable typos throughout the paper, some of them considerably obscuring the meaning of their sentences. Just to report a few: - Fig 2 legend (which by the way is not referred anywhere in the text…) “A one-shot SGD settings as a function of ratio of initial to incremental data.” - Are “N” and “n” used interchangably in the notation? As all their notation, we encourage them to fix it and define it clearly. - “The reservoir of samples is inferred before a training epoch”. Do they mean sampled, or that they run the inference of their model on all the data-points in the reservoir? Other misc comments: - Instead of saying that your model is characterized by a weight matrix W, you should say “the weights of our model” or the “parameters of the model”. In modern Machine Learning this might be much more than a single weight matrix. - It is very hard to understand the utility of the reservoir sampling, and there should be a comparison to other simpler ways of sampling from the available data (like uniformly sampling n elements). No ablation studies what so ever are performed.", "role": "assistant" } ]
[ "This paper treats the important problem of when a model should be re-trained from scratch or incrementally trained as new data becomes available.", "Unfortunately, it is very hard to extract any useful conclusion out of this 6-pages paper, and I think considerable work is required before it can be useful for the community.", "The problem tackled is very important, but it is not very well formulated (e.g.", "their notation keeps changing along the paper, the assumptions are never properly stated, etc.)", "and their proposed solution is quite confusing, specially the “Reservoir sampling” part.", "Furthermore, the experimental section is extremely poor: the experiments performed are not well explained, the reported results are very weakly linked to their proposed method, there are no error bars what so ever in any of their single-curve plots, and even if we make an effort to understand which of the curves is their method, there doesn’t seem to be any substantial difference with respect to the other (baseline?).", "There are considerable typos throughout the paper, some of them considerably obscuring the meaning of their sentences.", "Just to report a few: - Fig 2 legend (which by the way is not referred anywhere in the text…) “A one-shot SGD settings as a function of ratio of initial to incremental data.” - Are “N” and “n” used interchangably in the notation?", "As all their notation, we encourage them to fix it and define it clearly.", "- “The reservoir of samples is inferred before a training epoch”.", "Do they mean sampled, or that they run the inference of their model on all the data-points in the reservoir?", "Other misc comments: - Instead of saying that your model is characterized by a weight matrix W, you should say “the weights of our model” or the “parameters of the model”.", "In modern Machine Learning this might be much more than a single weight matrix.", "- It is very hard to understand the utility of the reservoir sampling, and there should be a comparison to other simpler ways of sampling from the available data (like uniformly sampling n elements).", "No ablation studies what so ever are performed." ]
{ "criticism": 7, "example": 1, "importance_and_relevance": 3, "materials_and_methods": 10, "praise": 0, "presentation_and_reporting": 8, "results_and_discussion": 2, "suggestion_and_solution": 3, "total": 15 }
criticism: 0.466667
example: 0.066667
importance_and_relevance: 0.2
materials_and_methods: 0.666667
praise: 0
presentation_and_reporting: 0.533333
results_and_discussion: 0.133333
suggestion_and_solution: 0.2
total: 15
has_criticism: 1
has_example: 1
has_importance_and_relevance: 1
has_materials_and_methods: 1
has_praise: 0
has_presentation_and_reporting: 1
has_results_and_discussion: 1
has_suggestion_and_solution: 1
reward_value: 2.266667
reward_value_length_adjusted: 2.203537
length_penalty: 0.06313
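For this first record, the labeled values above can be reproduced from its criteria_count dictionary: each per-category float is that category's count divided by total, each has_* flag marks a nonzero count, and in the records shown here reward_value equals the sum of the eight fractions while reward_value_length_adjusted equals reward_value minus length_penalty. The sketch below recomputes them; these relationships are inferred from the visible rows, not taken from the dataset's official scoring code.

```python
# Sketch: recompute the derived fields of the first record from its criteria_count.
# The formulas are inferred from the rows visible in this dump and may not match
# the dataset's actual reward implementation.
criteria_count = {
    "criticism": 7, "example": 1, "importance_and_relevance": 3,
    "materials_and_methods": 10, "praise": 0, "presentation_and_reporting": 8,
    "results_and_discussion": 2, "suggestion_and_solution": 3, "total": 15,
}

total = criteria_count["total"]
fractions = {k: v / total for k, v in criteria_count.items() if k != "total"}
has_flags = {f"has_{k}": int(v > 0) for k, v in criteria_count.items() if k != "total"}

reward_value = sum(fractions.values())                         # 2.266667
length_penalty = 0.06313                                       # taken from the record above
reward_value_length_adjusted = reward_value - length_penalty   # 2.203537

print(round(fractions["criticism"], 6), round(reward_value, 6),
      round(reward_value_length_adjusted, 6))                  # 0.466667 2.266667 2.203537
```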
BERT Wears GloVes: Distilling Static Embeddings from Pretrained Contextual Representations
Contextualized word representations such as ELMo and BERT have become the de facto starting point for incorporating pretrained representations for downstream NLP tasks. In these settings, contextual representations have largely made obsolete their static embedding predecessors such as Word2Vec and GloVe. However, static embeddings do have their advantages in that they are straightforward to understand and faster to use. Additionally, embedding analysis methods for static embeddings are far more diverse and mature than those available for their dynamic counterparts. In this work, we introduce simple methods for generating static lookup table embeddings from existing pretrained contextual representations and demonstrate they outperform Word2Vec and GloVe embeddings on a variety of word similarity and word relatedness tasks. In doing so, our results also reveal insights that may be useful for subsequent downstream tasks using our embeddings or the original contextual models. Further, we demonstrate the increased potential for analysis by applying existing approaches for estimating social bias in word embeddings. Our analysis constitutes the most comprehensive study of social bias in contextual word representations (via the proxy of our distilled embeddings) and reveals a number of inconsistencies in current techniques for quantifying social bias in word embeddings. We publicly release our code and distilled word embeddings to support reproducible research and the broader NLP community.
This paper introduces methods for distilling pretrained contextual representation to a static embeddings for the faster use. It proposes subword pooling and context combination, and its contribution is demonstrated with a suite of classic experiments for static word embeddings. My score for this paper is weakly rejected because (1) the motivation of the proposed approach is not clear. The paper title is about “distill”, but I can’t find out what the methods try to distill, what is the objective; (2) the goal of the paper is to obtain a static embedding performing like ELMO or BERT. As we know, the power of the contextual embedding is demonstrated by different NLP downstreaming task like MT, QA and a bunch of text classification tasks. In this paper, I don’t find those experiments; (3) this paper compared the results with word2vec and glove, but not clear which version to compare. And there are some re-embedding approaches that improve word2vec and glove such as “Word Re-Embedding via Manifold Dimensionality Retention”, the table in this re-embedding paper shows different version of glove and some of them are better than the proposed methods distilled from BERT.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nBERT Wears GloVes: Distilling Static Embeddings from Pretrained Contextual Representations\nContextualized word representations such as ELMo and BERT have become the de facto starting point for incorporating pretrained representations for downstream NLP tasks. In these settings, contextual representations have largely made obsolete their static embedding predecessors such as Word2Vec and GloVe. However, static embeddings do have their advantages in that they are straightforward to understand and faster to use. Additionally, embedding analysis methods for static embeddings are far more diverse and mature than those available for their dynamic counterparts. In this work, we introduce simple methods for generating static lookup table embeddings from existing pretrained contextual representations and demonstrate they outperform Word2Vec and GloVe embeddings on a variety of word similarity and word relatedness tasks. In doing so, our results also reveal insights that may be useful for subsequent downstream tasks using our embeddings or the original contextual models. Further, we demonstrate the increased potential for analysis by applying existing approaches for estimating social bias in word embeddings. Our analysis constitutes the most comprehensive study of social bias in contextual word representations (via the proxy of our distilled embeddings) and reveals a number of inconsistencies in current techniques for quantifying social bias in word embeddings. We publicly release our code and distilled word embeddings to support reproducible research and the broader NLP community.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper introduces methods for distilling pretrained contextual representation to a static embeddings for the faster use. It proposes subword pooling and context combination, and its contribution is demonstrated with a suite of classic experiments for static word embeddings. My score for this paper is weakly rejected because (1) the motivation of the proposed approach is not clear. 
The paper title is about “distill”, but I can’t find out what the methods try to distill, what is the objective; (2) the goal of the paper is to obtain a static embedding performing like ELMO or BERT. As we know, the power of the contextual embedding is demonstrated by different NLP downstreaming task like MT, QA and a bunch of text classification tasks. In this paper, I don’t find those experiments; (3) this paper compared the results with word2vec and glove, but not clear which version to compare. And there are some re-embedding approaches that improve word2vec and glove such as “Word Re-Embedding via Manifold Dimensionality Retention”, the table in this re-embedding paper shows different version of glove and some of them are better than the proposed methods distilled from BERT.", "role": "assistant" } ]
[ "This paper introduces methods for distilling pretrained contextual representation to a static embeddings for the faster use.", "It proposes subword pooling and context combination, and its contribution is demonstrated with a suite of classic experiments for static word embeddings.", "My score for this paper is weakly rejected because (1) the motivation of the proposed approach is not clear.", "The paper title is about “distill”, but I can’t find out what the methods try to distill, what is the objective; (2) the goal of the paper is to obtain a static embedding performing like ELMO or BERT.", "As we know, the power of the contextual embedding is demonstrated by different NLP downstreaming task like MT, QA and a bunch of text classification tasks.", "In this paper, I don’t find those experiments; (3) this paper compared the results with word2vec and glove, but not clear which version to compare.", "And there are some re-embedding approaches that improve word2vec and glove such as “Word Re-Embedding via Manifold Dimensionality Retention”, the table in this re-embedding paper shows different version of glove and some of them are better than the proposed methods distilled from BERT." ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 7, "praise": 1, "presentation_and_reporting": 3, "results_and_discussion": 1, "suggestion_and_solution": 0, "total": 7 }
criticism: 0.428571
example: 0
importance_and_relevance: 0.142857
materials_and_methods: 1
praise: 0.142857
presentation_and_reporting: 0.428571
results_and_discussion: 0.142857
suggestion_and_solution: 0
total: 7
has_criticism: 1
has_example: 0
has_importance_and_relevance: 1
has_materials_and_methods: 1
has_praise: 1
has_presentation_and_reporting: 1
has_results_and_discussion: 1
has_suggestion_and_solution: 0
reward_value: 2.285714
reward_value_length_adjusted: 1.717547
length_penalty: 0.568167
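Each conversations entry in these records is a two-message chat: a user prompt containing the title and abstract, and an assistant reply that opens with the same <think>...</think> planning preamble before the review text. The sketch below separates that reply back into the thinking trace and the review body; it is based only on the message format visible above, and the field names follow the schema.

```python
# Sketch: split an assistant message from the "conversations" field into the
# <think> planning trace and the review body, based on the format seen above.
def split_thinking(assistant_content: str) -> tuple[str, str]:
    """Return (thinking_trace, review_text) for one assistant message."""
    marker = "</think>"
    if marker in assistant_content:
        thinking, review = assistant_content.split(marker, 1)
        return thinking.replace("<think>", "", 1).strip(), review.strip()
    return "", assistant_content.strip()

# Usage with a record loaded as in the earlier sketch:
# user_msg, assistant_msg = row["conversations"]
# trace, review = split_thinking(assistant_msg["content"])
```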
On the Distribution of Penultimate Activations of Classification Networks
This paper considers probability distributions of penultimate activations in deep classification networks. We first identify a dual relation between the activations and the weights of the final fully connected layer: learning the networks with the cross-entropy loss makes their (normalized) penultimate activations follow a von Mises-Fisher distribution for each class, which is parameterized by the weights of the final fully-connected layer. Through this analysis, we derive a probability density function of penultimate activations per class. This generative model allows us to synthesize activations of classification networks without feeding images forward through them. We also demonstrate through experiments that our generative model of penultimate activations can be applied to real-world applications such as knowledge distillation and class-conditional image generation.
This paper studies the distribution of the second-to-last layers of neural networks. The paper argues that, under certain assumptions, minimizing the standard cross-entropy loss of a classification task corresponds to minimizing the KL divergence between the distribution of the penultimate activations and a von Mises-Fisher (vMF) distribution. They assess their assumptions through experiments, and use this interpretation of the penultimate activations as vMF distributions on two tasks: knowledge distillation and class-conditional image generation. They argue that this interpretation outperforms other knowledge distillation methods under domain shift, and report a similar success for class conditional image generation. This is an important problem, especially since these penultimate activations have become more frequently used as object representations in downstream tasks. Their result could also help us understand better interpretations of neural networks. The writing is clear throughout, and it*s straightforward to read. The results are based on assumptions that I will try to list below: * normalizing the penultimate activations does not affect classification performance * the class-specific vectors in the final weight matrix have constant normalizations * minimizing cross-entropy loss is equivalent to minimizing the KL divergence between the penultimate activation distribution and a vMF distribution (i.e. the entropy term in equation 6 is negligible) In light of these assumptions, theoretical claims like *learning the networks with the cross-entropy loss makes their (normalized) penultimate activations follow a von Mises-Fisher distribution for each class* in the abstract are false; even if the normalization assumptions are true, this is an approximate result, due to 1) no guarantee that the KL-divergence will be zero and 2) the entropy term in Equation 6. Additionally, Equation 6 is only applied to the numerator of the objective in Equation 4, so the objectives aren*t equivalent. These results are thus necessarily based on approximations and some heuristics. This is not inherently a problem, but I would argue that it necessitates that experiments prove these assumptions to be useful. I find Table 1 to be convincing of the assumption that the weight and activation normalizations are negligible. Figure 1 is also a nice visualization for arguing that penultimate activations are approximately vMF. The MNIST experiment in Table 4 is a red flag, since the presented results do not match the results in the HVAE paper. The paper claims the experimental setup is identical to the HVAE paper, so it is concerning that the results do not match, especially since the results in the HVAE paper appear to outperform the proposed method. It is possible that the results in the original paper are not reproducible or that the experiments differ, but this should be stated in the paper, since the natural assumption would be that the baseline was not adequately reproduced. Small comment: In Table 2 it is not clear from the figure or the caption that CKD is the method the paper is proposing. It would also make sense to bold the best performing method since you do that for the other tables in the paper. I like the paper and think the results are interesting, but because they are based on heuristics and assumptions, the paper currently promises too much. This puts the burden on presenting experiments that are convincing of improvement, which is not currently the case with the HVAE experiment. 
I would be open to changing my score if these reviews are addressed.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOn the Distribution of Penultimate Activations of Classification Networks\nThis paper considers probability distributions of penultimate activations in deep classi?cation networks. We ?rst identify a dual relation between the activations and the weights of the ?nal fully connected layer: learning the networks with the cross-entropy loss makes their (normalized) penultimate activations follow a von Mises-Fisher distribution for each class, which is parameterized by the weights of the ?nal fully-connected layer. Through this analysis, we derive a probability density function of penultimate activations per class. This generative model allows us to synthesize activations of classi?cation networks without feeding images forward through them. We also demonstrate through experiments that our generative model of penultimate activations can be applied to real-world applications such as knowledge distillation and class-conditional image generation.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper studies the distribution of the second-to-last layers of neural networks. The paper argues that, under certain assumptions, minimizing the standard cross-entropy loss of a classification task corresponds to minimizing the KL divergence between the distribution of the penultimate activations and a von Mises-Fisher (vMF) distribution. They assess their assumptions through experiments, and use this interpretation of the penultimate activations as vMF distributions on two tasks: knowledge distillation and class-conditional image generation. They argue that this interpretation outperforms other knowledge distillation methods under domain shift, and report a similar success for class conditional image generation. This is an important problem, especially since these penultimate activations have become more frequently used as object representations in downstream tasks. Their result could also help us understand better interpretations of neural networks. 
The writing is clear throughout, and it*s straightforward to read. The results are based on assumptions that I will try to list below: * normalizing the penultimate activations does not affect classification performance * the class-specific vectors in the final weight matrix have constant normalizations * minimizing cross-entropy loss is equivalent to minimizing the KL divergence between the penultimate activation distribution and a vMF distribution (i.e. the entropy term in equation 6 is negligible) In light of these assumptions, theoretical claims like *learning the networks with the cross-entropy loss makes their (normalized) penultimate activations follow a von Mises-Fisher distribution for each class* in the abstract are false; even if the normalization assumptions are true, this is an approximate result, due to 1) no guarantee that the KL-divergence will be zero and 2) the entropy term in Equation 6. Additionally, Equation 6 is only applied to the numerator of the objective in Equation 4, so the objectives aren*t equivalent. These results are thus necessarily based on approximations and some heuristics. This is not inherently a problem, but I would argue that it necessitates that experiments prove these assumptions to be useful. I find Table 1 to be convincing of the assumption that the weight and activation normalizations are negligible. Figure 1 is also a nice visualization for arguing that penultimate activations are approximately vMF. The MNIST experiment in Table 4 is a red flag, since the presented results do not match the results in the HVAE paper. The paper claims the experimental setup is identical to the HVAE paper, so it is concerning that the results do not match, especially since the results in the HVAE paper appear to outperform the proposed method. It is possible that the results in the original paper are not reproducible or that the experiments differ, but this should be stated in the paper, since the natural assumption would be that the baseline was not adequately reproduced. Small comment: In Table 2 it is not clear from the figure or the caption that CKD is the method the paper is proposing. It would also make sense to bold the best performing method since you do that for the other tables in the paper. I like the paper and think the results are interesting, but because they are based on heuristics and assumptions, the paper currently promises too much. This puts the burden on presenting experiments that are convincing of improvement, which is not currently the case with the HVAE experiment. I would be open to changing my score if these reviews are addressed.", "role": "assistant" } ]
[ "This paper studies the distribution of the second-to-last layers of neural networks.", "The paper argues that, under certain assumptions, minimizing the standard cross-entropy loss of a classification task corresponds to minimizing the KL divergence between the distribution of the penultimate activations and a von Mises-Fisher (vMF) distribution.", "They assess their assumptions through experiments, and use this interpretation of the penultimate activations as vMF distributions on two tasks: knowledge distillation and class-conditional image generation.", "They argue that this interpretation outperforms other knowledge distillation methods under domain shift, and report a similar success for class conditional image generation.", "This is an important problem, especially since these penultimate activations have become more frequently used as object representations in downstream tasks.", "Their result could also help us understand better interpretations of neural networks.", "The writing is clear throughout, and it*s straightforward to read.", "The results are based on assumptions that I will try to list below: * normalizing the penultimate activations does not affect classification performance * the class-specific vectors in the final weight matrix have constant normalizations * minimizing cross-entropy loss is equivalent to minimizing the KL divergence between the penultimate activation distribution and a vMF distribution (i.e.", "the entropy term in equation 6 is negligible) In light of these assumptions, theoretical claims like *learning the networks with the cross-entropy loss makes their (normalized) penultimate activations follow a von Mises-Fisher distribution for each class* in the abstract are false; even if the normalization assumptions are true, this is an approximate result, due to 1) no guarantee that the KL-divergence will be zero and 2) the entropy term in Equation 6.", "Additionally, Equation 6 is only applied to the numerator of the objective in Equation 4, so the objectives aren*t equivalent.", "These results are thus necessarily based on approximations and some heuristics.", "This is not inherently a problem, but I would argue that it necessitates that experiments prove these assumptions to be useful.", "I find Table 1 to be convincing of the assumption that the weight and activation normalizations are negligible.", "Figure 1 is also a nice visualization for arguing that penultimate activations are approximately vMF.", "The MNIST experiment in Table 4 is a red flag, since the presented results do not match the results in the HVAE paper.", "The paper claims the experimental setup is identical to the HVAE paper, so it is concerning that the results do not match, especially since the results in the HVAE paper appear to outperform the proposed method.", "It is possible that the results in the original paper are not reproducible or that the experiments differ, but this should be stated in the paper, since the natural assumption would be that the baseline was not adequately reproduced.", "Small comment: In Table 2 it is not clear from the figure or the caption that CKD is the method the paper is proposing.", "It would also make sense to bold the best performing method since you do that for the other tables in the paper.", "I like the paper and think the results are interesting, but because they are based on heuristics and assumptions, the paper currently promises too much.", "This puts the burden on presenting experiments that are convincing of improvement, which is not 
currently the case with the HVAE experiment.", "I would be open to changing my score if these reviews are addressed." ]
{ "criticism": 6, "example": 5, "importance_and_relevance": 2, "materials_and_methods": 15, "praise": 3, "presentation_and_reporting": 3, "results_and_discussion": 13, "suggestion_and_solution": 5, "total": 22 }
criticism: 0.272727
example: 0.227273
importance_and_relevance: 0.090909
materials_and_methods: 0.681818
praise: 0.136364
presentation_and_reporting: 0.136364
results_and_discussion: 0.590909
suggestion_and_solution: 0.227273
total: 22
has_criticism: 1
has_example: 1
has_importance_and_relevance: 1
has_materials_and_methods: 1
has_praise: 1
has_presentation_and_reporting: 1
has_results_and_discussion: 1
has_suggestion_and_solution: 1
reward_value: 2.363636
reward_value_length_adjusted: 1.08526
length_penalty: 1.278377
Extreme Language Model Compression with Optimal Subwords and Shared Projections
Pre-trained deep neural network language models such as ELMo, GPT, BERT and XLNet have recently achieved state-of-the-art performance on a variety of language understanding tasks. However, their size makes them impractical for a number of scenarios, especially on mobile and edge devices. In particular, the input word embedding matrix accounts for a significant proportion of the model's memory footprint, due to the large input vocabulary and embedding dimensions. Knowledge distillation techniques have had success at compressing large neural network models, but they are ineffective at yielding student models with vocabularies different from the original teacher models. We introduce a novel knowledge distillation technique for training a student model with a significantly smaller vocabulary as well as lower embedding and hidden state dimensions. Specifically, we employ a dual-training mechanism that trains the teacher and student models simultaneously to obtain optimal word embeddings for the student vocabulary. We combine this approach with learning shared projection matrices that transfer layer-wise knowledge from the teacher model to the student model. Our method is able to compress the BERT-BASE model by more than 60x, with only a minor drop in downstream task metrics, resulting in a language model with a footprint of under 7MB. Experimental results also demonstrate higher compression efficiency and accuracy when compared with other state-of-the-art compression techniques.
This paper proposes techniques for improving the compression rate of Knowledge Distillation on BERT models. There are two main ideas. The first is the reduction of the parameter by reducing the vocabulary size of WordPiece model. The observation that a large fraction of parameters in BERT are dedicated to WordPiece embeddings is interesting, as this hasn*t received much attention in other works on BERT compression. The second idea is to add a loss function that measures the difference of parameters between teacher and student in the learned projection space. These two ideas are quite complementary in that one can apply one idea to vanilla knowledge distillation without having to apply another simultaneously. My major concern on this paper is that experiments are not designed to measure the impact of each design choice. First, it is unclear whether DualTrain-based methods outperform PKD with 3 layers because having more layers is more advantageous than having more hidden dimensions. While PKD doesn*t straightforwardly allow different hidden dimensions between teacher and student, it is a reasonably straightforward idea to train a shared projection matrix to enable it; concurrent work https://openreview.net/forum?id=rJx0Q6EFPB actually does that. If this was considered as too significant extension of PKD, then authors should*ve experimented with vanilla knowledge distillation with fewer hidden dimensions. Second, DualTrain-based models have both smaller vocabulary size and smaller hidden dimension size, and it is unclear which one played the greater role in successful distillation. I would suggest authors to carefully set up baselines that we shall precisely measure the impact of each design choice. Vanilla knowledge distillation with the same vocabulary as teacher and smaller hidden units shall allow us to understand the impact of decreasing hidden units. By comparing this with DualTrain models with the same vocabulary size, we shall now understand the benefit of decreasing the WordPiece vocabulary size. Authors make a very insightful note in the discussion that smaller WordPiece vocabulary would imply longer sequence lengths, which may degrade the performance of BERT models. This is an interesting observation, and authors should not avoid running experiments on tasks with longer sequences because it is likely to have negative results. By actually running experiments and breaking down metrics with different sequence length buckets, the research community shall better understand the impact of vocabulary size and the paper will be made stronger. Minor comments: 1) Table 3 is somewhat difficult to understand because each approach is making a different tradeoff between accuracy and model size. Plotting the accuracy metric vs. model size curve may more effectively visualize the tradeoff. 2) I don*t think the variable C in equation 3 was defined. Since there are two vocabularies, it might be useful to clarify the variable and clarify whether there is a loss function for words in teacher vocabulary.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nExtreme Language Model Compression with Optimal Subwords and Shared Projections\nPre-trained deep neural network language models such as ELMo, GPT, BERT and XLNet have recently achieved state-of-the-art performance on a variety of language understanding tasks. However, their size makes them impractical for a number of scenarios, especially on mobile and edge devices. In particular, the input word embedding matrix accounts for a significant proportion of the model*s memory footprint, due to the large input vocabulary and embedding dimensions. Knowledge distillation techniques have had success at compressing large neural network models, but they are ineffective at yielding student models with vocabularies different from the original teacher models. We introduce a novel knowledge distillation technique for training a student model with a significantly smaller vocabulary as well as lower embedding and hidden state dimensions. Specifically, we employ a dual-training mechanism that trains the teacher and student models simultaneously to obtain optimal word embeddings for the student vocabulary. We combine this approach with learning shared projection matrices that transfer layer-wise knowledge from the teacher model to the student model. Our method is able to compress the BERT-BASE model by more than 60x, with only a minor drop in downstream task metrics, resulting in a language model with a footprint of under 7MB. Experimental results also demonstrate higher compression efficiency and accuracy when compared with other state-of-the-art compression techniques.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes techniques for improving the compression rate of Knowledge Distillation on BERT models. There are two main ideas. The first is the reduction of the parameter by reducing the vocabulary size of WordPiece model. 
The observation that a large fraction of parameters in BERT are dedicated to WordPiece embeddings is interesting, as this hasn*t received much attention in other works on BERT compression. The second idea is to add a loss function that measures the difference of parameters between teacher and student in the learned projection space. These two ideas are quite complementary in that one can apply one idea to vanilla knowledge distillation without having to apply another simultaneously. My major concern on this paper is that experiments are not designed to measure the impact of each design choice. First, it is unclear whether DualTrain-based methods outperform PKD with 3 layers because having more layers is more advantageous than having more hidden dimensions. While PKD doesn*t straightforwardly allow different hidden dimensions between teacher and student, it is a reasonably straightforward idea to train a shared projection matrix to enable it; concurrent work https://openreview.net/forum?id=rJx0Q6EFPB actually does that. If this was considered as too significant extension of PKD, then authors should*ve experimented with vanilla knowledge distillation with fewer hidden dimensions. Second, DualTrain-based models have both smaller vocabulary size and smaller hidden dimension size, and it is unclear which one played the greater role in successful distillation. I would suggest authors to carefully set up baselines that we shall precisely measure the impact of each design choice. Vanilla knowledge distillation with the same vocabulary as teacher and smaller hidden units shall allow us to understand the impact of decreasing hidden units. By comparing this with DualTrain models with the same vocabulary size, we shall now understand the benefit of decreasing the WordPiece vocabulary size. Authors make a very insightful note in the discussion that smaller WordPiece vocabulary would imply longer sequence lengths, which may degrade the performance of BERT models. This is an interesting observation, and authors should not avoid running experiments on tasks with longer sequences because it is likely to have negative results. By actually running experiments and breaking down metrics with different sequence length buckets, the research community shall better understand the impact of vocabulary size and the paper will be made stronger. Minor comments: 1) Table 3 is somewhat difficult to understand because each approach is making a different tradeoff between accuracy and model size. Plotting the accuracy metric vs. model size curve may more effectively visualize the tradeoff. 2) I don*t think the variable C in equation 3 was defined. Since there are two vocabularies, it might be useful to clarify the variable and clarify whether there is a loss function for words in teacher vocabulary.", "role": "assistant" } ]
[ "This paper proposes techniques for improving the compression rate of Knowledge Distillation on BERT models.", "There are two main ideas.", "The first is the reduction of the parameter by reducing the vocabulary size of WordPiece model.", "The observation that a large fraction of parameters in BERT are dedicated to WordPiece embeddings is interesting, as this hasn*t received much attention in other works on BERT compression.", "The second idea is to add a loss function that measures the difference of parameters between teacher and student in the learned projection space.", "These two ideas are quite complementary in that one can apply one idea to vanilla knowledge distillation without having to apply another simultaneously.", "My major concern on this paper is that experiments are not designed to measure the impact of each design choice.", "First, it is unclear whether DualTrain-based methods outperform PKD with 3 layers because having more layers is more advantageous than having more hidden dimensions.", "While PKD doesn*t straightforwardly allow different hidden dimensions between teacher and student, it is a reasonably straightforward idea to train a shared projection matrix to enable it; concurrent work https://openreview.net/forum?id=rJx0Q6EFPB actually does that.", "If this was considered as too significant extension of PKD, then authors should*ve experimented with vanilla knowledge distillation with fewer hidden dimensions.", "Second, DualTrain-based models have both smaller vocabulary size and smaller hidden dimension size, and it is unclear which one played the greater role in successful distillation.", "I would suggest authors to carefully set up baselines that we shall precisely measure the impact of each design choice.", "Vanilla knowledge distillation with the same vocabulary as teacher and smaller hidden units shall allow us to understand the impact of decreasing hidden units.", "By comparing this with DualTrain models with the same vocabulary size, we shall now understand the benefit of decreasing the WordPiece vocabulary size.", "Authors make a very insightful note in the discussion that smaller WordPiece vocabulary would imply longer sequence lengths, which may degrade the performance of BERT models.", "This is an interesting observation, and authors should not avoid running experiments on tasks with longer sequences because it is likely to have negative results.", "By actually running experiments and breaking down metrics with different sequence length buckets, the research community shall better understand the impact of vocabulary size and the paper will be made stronger.", "Minor comments: 1) Table 3 is somewhat difficult to understand because each approach is making a different tradeoff between accuracy and model size.", "Plotting the accuracy metric vs. model size curve may more effectively visualize the tradeoff.", "2) I don*t think the variable C in equation 3 was defined.", "Since there are two vocabularies, it might be useful to clarify the variable and clarify whether there is a loss function for words in teacher vocabulary." ]
{ "criticism": 7, "example": 1, "importance_and_relevance": 3, "materials_and_methods": 17, "praise": 3, "presentation_and_reporting": 9, "results_and_discussion": 3, "suggestion_and_solution": 10, "total": 21 }
criticism: 0.333333
example: 0.047619
importance_and_relevance: 0.142857
materials_and_methods: 0.809524
praise: 0.142857
presentation_and_reporting: 0.428571
results_and_discussion: 0.142857
suggestion_and_solution: 0.47619
total: 21
has_criticism: 1
has_example: 1
has_importance_and_relevance: 1
has_materials_and_methods: 1
has_praise: 1
has_presentation_and_reporting: 1
has_results_and_discussion: 1
has_suggestion_and_solution: 1
reward_value: 2.52381
reward_value_length_adjusted: 1.513734
length_penalty: 1.010076
Towards Effective and Efficient Zero-shot Learning by Fine-tuning with Task Descriptions
While current machine learning models have achieved great success with labeled data, we have to deal with classes that have little or no training data in many real-world applications. This leads to the study of zero-shot learning. The typical approach in zero-shot learning is to embed seen and unseen classes into a shared space using class meta-data, and construct classifiers on top of that. Yet previous methods either still require significant manual labor in obtaining useful meta-data, or utilize automatically collected meta-data while trading in performance. To achieve satisfactory performance under practical meta-data efficiency constraint, we propose \textbf{N\textsuperscript{3}} (\textbf{N}eural \textbf{N}etworks from \textbf{N}atural Language), a meta-model that maps natural language class descriptions to corresponding neural network classifiers. N\textsuperscript{3} leverages readily available online documents combined with pretrained language representations such as BERT to obtain expressive class embeddings. In addition, N\textsuperscript{3} generates parameter adaptations for pretrained neural networks using these class embeddings, effectively "finetuneing" the network to classify unseen classes. Our experiments show that N\textsuperscript{3} is able to outperform previous methods across 8 different benchmark evaluations and we show through ablation studies the contribution of each model component. To offer insight into how N\textsuperscript{3} "finetunes" the pretrained network, we also performed a range of qualitative and quantitative analysis. Our code will be released after the review period.
Summary: This paper proposes a meta-model called Neural Networks from Natural language (N^3) for zero-shot learning. This model maps a set of class object descriptions to neural network parameters to create zero-shot classifiers, which contains two modules: (1) SEM encodes class descriptions into corresponding embeddings; (2) PAM fine-tunes the generated parameters in the SEM. The authors conduct experiments on 4 popular benchmark datasets and achieve best results compared with baselines. Strengths: (1) This paper proposes a good Parameter Adaptation Module (PAM) to generate parameter adaptations to fine-tune all layers in the pretrained network based on the class embeddings, to classify unseen classes. (2) This paper achieves good results compared with baselines. Weaknesses: (1) I am not sure the effectiveness of the N^3 according to the results in Table 3, 4, because the BERT module may play a major role. (2) Although PDCNN (2015) and GAZSL (2017) are very strong baselines, this paper does not compare the work published in 2018 or 2019. (3) There are many grammatical mistakes in this paper. I have some following major concerns about the paper: (1) This paper conducts extensive experiments to demonstrate the performance of N^3 in Table 2, but why not compare N^3 with the work published in 2018 or 2019? (2) In Table 3, this paper studies what extent does the superior performance of N^3 depend on gamma and mu, but why not show the results with gamma =1, mu=0 to demonstrate the effectiveness of PAM or fine-tuning? (3) In Table 4, I can come to a conclusion that BERT plays a great role in your model. To better demonstrate the effectiveness of N^3, I think you should show the accuracy of the baselines and N^3 with ELMo and Glove embeddings separately in Table 2. (4) This paper does not explain the results clearly such as Figure 5 (a) and (b). (5) There are many grammatical mistakes in this paper, such as the sentence “we also perform ...... and demonstrated ......” in Section 1.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTowards Effective and Efficient Zero-shot Learning by Fine-tuning with Task Descriptions\nWhile current machine learning models have achieved great success with labeled data, we have to deal with classes that have little or no training data in many real-world applications. This leads to the study of zero-shot learning. The typical approach in zero-shot learning is to embed seen and unseen classes into a shared space using class meta-data, and construct classifiers on top of that. Yet previous methods either still require significant manual labor in obtaining useful meta-data, or utilize automatically collected meta-data while trading in performance. To achieve satisfactory performance under practical meta-data efficiency constraint, we propose \textbf{N\textsuperscript{3}} (\textbf{N}eural \textbf{N}etworks from \textbf{N}atural Language), a meta-model that maps natural language class descriptions to corresponding neural network classifiers. N\textsuperscript{3} leverages readily available online documents combined with pretrained language representations such as BERT to obtain expressive class embeddings. In addition, N\textsuperscript{3} generates parameter adaptations for pretrained neural networks using these class embeddings, effectively ``finetuneing** the network to classify unseen classes. Our experiments show that N\textsuperscript{3} is able to outperform previous methods across 8 different benchmark evaluations and we show through ablation studies the contribution of each model component. To offer insight into how N\textsuperscript{3} ``finetunes** the pretrained network, we also performed a range of qualitative and quantitative analysis. Our code will be released after the review period.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nSummary: This paper proposes a meta-model called Neural Networks from Natural language (N^3) for zero-shot learning. 
This model maps a set of class object descriptions to neural network parameters to create zero-shot classifiers, which contains two modules: (1) SEM encodes class descriptions into corresponding embeddings; (2) PAM fine-tunes the generated parameters in the SEM. The authors conduct experiments on 4 popular benchmark datasets and achieve best results compared with baselines. Strengths: (1) This paper proposes a good Parameter Adaptation Module (PAM) to generate parameter adaptations to fine-tune all layers in the pretrained network based on the class embeddings, to classify unseen classes. (2) This paper achieves good results compared with baselines. Weaknesses: (1) I am not sure the effectiveness of the N^3 according to the results in Table 3, 4, because the BERT module may play a major role. (2) Although PDCNN (2015) and GAZSL (2017) are very strong baselines, this paper does not compare the work published in 2018 or 2019. (3) There are many grammatical mistakes in this paper. I have some following major concerns about the paper: (1) This paper conducts extensive experiments to demonstrate the performance of N^3 in Table 2, but why not compare N^3 with the work published in 2018 or 2019? (2) In Table 3, this paper studies what extent does the superior performance of N^3 depend on gamma and mu, but why not show the results with gamma =1, mu=0 to demonstrate the effectiveness of PAM or fine-tuning? (3) In Table 4, I can come to a conclusion that BERT plays a great role in your model. To better demonstrate the effectiveness of N^3, I think you should show the accuracy of the baselines and N^3 with ELMo and Glove embeddings separately in Table 2. (4) This paper does not explain the results clearly such as Figure 5 (a) and (b). (5) There are many grammatical mistakes in this paper, such as the sentence “we also perform ...... and demonstrated ......” in Section 1.", "role": "assistant" } ]
[ "Summary: This paper proposes a meta-model called Neural Networks from Natural language (N^3) for zero-shot learning.", "This model maps a set of class object descriptions to neural network parameters to create zero-shot classifiers, which contains two modules: (1) SEM encodes class descriptions into corresponding embeddings; (2) PAM fine-tunes the generated parameters in the SEM.", "The authors conduct experiments on 4 popular benchmark datasets and achieve best results compared with baselines.", "Strengths: (1) This paper proposes a good Parameter Adaptation Module (PAM) to generate parameter adaptations to fine-tune all layers in the pretrained network based on the class embeddings, to classify unseen classes.", "(2) This paper achieves good results compared with baselines.", "Weaknesses: (1) I am not sure the effectiveness of the N^3 according to the results in Table 3, 4, because the BERT module may play a major role.", "(2) Although PDCNN (2015) and GAZSL (2017) are very strong baselines, this paper does not compare the work published in 2018 or 2019.", "(3) There are many grammatical mistakes in this paper.", "I have some following major concerns about the paper: (1) This paper conducts extensive experiments to demonstrate the performance of N^3 in Table 2, but why not compare N^3 with the work published in 2018 or 2019?", "(2) In Table 3, this paper studies what extent does the superior performance of N^3 depend on gamma and mu, but why not show the results with gamma =1, mu=0 to demonstrate the effectiveness of PAM or fine-tuning?", "(3) In Table 4, I can come to a conclusion that BERT plays a great role in your model.", "To better demonstrate the effectiveness of N^3, I think you should show the accuracy of the baselines and N^3 with ELMo and Glove embeddings separately in Table 2.", "(4) This paper does not explain the results clearly such as Figure 5 (a) and (b).", "(5) There are many grammatical mistakes in this paper, such as the sentence “we also perform ...... and demonstrated ......” in Section 1." ]
{ "criticism": 5, "example": 3, "importance_and_relevance": 1, "materials_and_methods": 11, "praise": 3, "presentation_and_reporting": 5, "results_and_discussion": 5, "suggestion_and_solution": 3, "total": 14 }
0.357143
0.214286
0.071429
0.785714
0.214286
0.357143
0.357143
0.214286
14
1
1
1
1
1
1
1
1
2.571429
2.555646
0.015782
TransINT: Embedding Implication Rules in Knowledge Graphs with Isomorphic Intersections of Linear Subspaces
Knowledge Graphs (KGs), composed of entities and relations, provide a structured representation of knowledge. For easy access to statistical approaches on relational data, multiple methods to embed a KG as components of R^d have been introduced. We propose TransINT, a novel and interpretable KG embedding method that isomorphically preserves the implication ordering among relations in the embedding space. TransINT maps sets of entities (tied by a relation) to continuous sets of vectors that are inclusion-ordered isomorphically to relation implications. With a novel parameter-sharing scheme, TransINT enables automatic training on missing but implied facts without rule grounding. We achieve new state-of-the-art performance with significant margins in Link Prediction and Triple Classification on the FB122 dataset, with boosted performance even on test instances that cannot be inferred by logical rules. The angles between the continuous sets embedded by TransINT provide an interpretable way to mine semantic relatedness and implication rules among relations.
The authors propose TransINT, a novel graph embedding method that builds upon methods such as TransH (Knowledge graph embedding by translating on hyperplanes, Wang et al. 2014) and TransE (Translating Embeddings for Modeling Multi-relational Data, Bordes et al. 2013), and appears to offer state-of-the-art performance on the Freebase link prediction benchmark. The manuscript is OK overall in terms of writing (although it could use some copy editing for grammar) and relevant to this community. This reviewer is not familiar with knowledge graph embeddings, and found it difficult to follow the main ideas and contributions. The introduction could certainly be written in a clearer way, first stating the general problem and then building up from existing solutions to the proposed method. Many of the figure captions seem to contain typos (details below), making it particularly confusing to follow. The results appear impressive, but I will leave it to reviewers with more experience in the field to judge how it stacks up against recent work. Minor comments: - In the first paragraph, there are two instances where closing brackets have been replaced by opening brackets. - Section 2, where TransH is mentioned for the first time, is missing a reference. - Incomplete sentence: "Let ri be a binary relation x, y entities". - Figure 2: h, t are not defined; is_family is mislabeled as is_parent. - Figure 3: The line for is_parent_of is mislabeled as is_mother_of. - Section 5, "We evaluate TransINT on Freebase 122 (respectively created by ...": What do you mean by "respectively" if there is only Freebase 122 and no other dataset? Perhaps KALE was forgotten here; it is mentioned a few sentences later for the first time and never defined.
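Editor's illustrative aside: the review references TransE and TransH without restating them, so a brief reminder of their standard scoring functions may help readers unfamiliar with KG embeddings. TransE scores a triple by ||h + r - t||; TransH first projects the entities onto a relation-specific hyperplane. This is background on the cited baselines, not an implementation of TransINT; the toy dimensions are arbitrary.

```python
import numpy as np

def transe_score(h, r, t):
    # TransE (Bordes et al., 2013): plausible triples satisfy h + r ~ t,
    # so a lower distance means a more plausible (head, relation, tail).
    return np.linalg.norm(h + r - t)

def transh_score(h, r, t, w_r):
    # TransH (Wang et al., 2014): project entities onto the hyperplane with
    # unit normal w_r before translating, so an entity can play different
    # roles under different relations.
    w_r = w_r / np.linalg.norm(w_r)
    h_p = h - np.dot(w_r, h) * w_r
    t_p = t - np.dot(w_r, t) * w_r
    return np.linalg.norm(h_p + r - t_p)

# Toy usage with random 50-d embeddings.
rng = np.random.default_rng(0)
h, r, t, w = (rng.normal(size=50) for _ in range(4))
print(transe_score(h, r, t), transh_score(h, r, t, w))
```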
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTransINT: Embedding Implication Rules in Knowledge Graphs with Isomorphic Intersections of Linear Subspaces\nKnowledge Graphs (KG), composed of entities and relations, provide a structured representation of knowledge. For easy access to statistical approaches on relational data, multiple methods to embed a KG as components of R^d have been introduced. We propose TransINT, a novel and interpretable KG embedding method that isomorphically preserves the implication ordering among relations in the embedding space. TransINT maps set of entities (tied by a relation) to continuous sets of vectors that are inclusion-ordered isomorphically to relation implications. With a novel parameter sharing scheme, TransINT enables automatic training on missing but implied facts without rule grounding. We achieve new state-of-the-art performances with signficant margins in Link Prediction and Triple Classification on FB122 dataset, with boosted performance even on test instances that cannot be inferred by logical rules. The angles between the continuous sets embedded by TransINT provide an interpretable way to mine semantic relatedness and implication rules among relations.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors propose TransINT, a novel graph embedding method that builds upon methods such as TransH (Knowledge graph embedding by translating on hyperplanes, Wang et al 2014), TransE (Translating Embeddings for Modeling Multi-relational Data, Bordes et al 2013), and appears to offer state of the art performance on the Freebase link prediction benchmark. The manuscript is OK overall in terms of writing (although it could use some copy editing for grammar) and relevant to this community. This reviewer is not familiar with knowledge graph embeddings, and found it difficult to follow the main ideas and contributions. The introduction could certainly be written in a clearer way, first stating the general problem and then building up from existing solutions to the proposed method. 
Many of the figure captures seem to contain typos (details below), making it particularly confusing to follow. The results appear impressive, but I will leave it to reviewers with more experience in the field to judge how it stacks up against recent work. Minor comments: - In the first paragraph, there are two instances where closing brackets have been replaced by opening brackets. - Section 2, where transH is mentioned for the first time, it*s missing a reference - Incomplete sentence *Let ri be a binary relation x, y entities*. - Figure 2: h,t are not defined. is_family is mislabled as is_parent -Figure 3: The line for is_parent_of is mislabeled as is_mother of. - Section 5, *We evaluate TransINT on Freebase 122 (respectively created by ...*: What do you mean by *respectively* if there only Freebase 122 and no other dataset? Maybe forgot KALE here, it*s mentioned a few sentences later for the first time and never defined.", "role": "assistant" } ]
[ "The authors propose TransINT, a novel graph embedding method that builds upon methods such as TransH (Knowledge graph embedding by translating on hyperplanes, Wang et al 2014), TransE (Translating Embeddings for Modeling Multi-relational Data, Bordes et al 2013), and appears to offer state of the art performance on the Freebase link prediction benchmark.", "The manuscript is OK overall in terms of writing (although it could use some copy editing for grammar) and relevant to this community.", "This reviewer is not familiar with knowledge graph embeddings, and found it difficult to follow the main ideas and contributions.", "The introduction could certainly be written in a clearer way, first stating the general problem and then building up from existing solutions to the proposed method.", "Many of the figure captures seem to contain typos (details below), making it particularly confusing to follow.", "The results appear impressive, but I will leave it to reviewers with more experience in the field to judge how it stacks up against recent work.", "Minor comments: - In the first paragraph, there are two instances where closing brackets have been replaced by opening brackets.", "- Section 2, where transH is mentioned for the first time, it*s missing a reference - Incomplete sentence *Let ri be a binary relation x, y entities*.", "- Figure 2: h,t are not defined.", "is_family is mislabled as is_parent -Figure 3: The line for is_parent_of is mislabeled as is_mother of.", "- Section 5, *We evaluate TransINT on Freebase 122 (respectively created by ...*: What do you mean by *respectively* if there only Freebase 122 and no other dataset?", "Maybe forgot KALE here, it*s mentioned a few sentences later for the first time and never defined." ]
{ "criticism": 3, "example": 6, "importance_and_relevance": 2, "materials_and_methods": 3, "praise": 2, "presentation_and_reporting": 8, "results_and_discussion": 2, "suggestion_and_solution": 2, "total": 12 }
0.25
0.5
0.166667
0.25
0.166667
0.666667
0.166667
0.166667
12
1
1
1
1
1
1
1
1
2.333333
2.317551
0.015782
Utility Analysis of Network Architectures for 3D Point Cloud Processing
In this paper, we diagnose deep neural networks for 3D point cloud processing to explore the utility of different network architectures. We propose a number of hypotheses on the effects of specific network architectures on the representation capacity of DNNs. In order to prove the hypotheses, we design five metrics to diagnose various types of DNNs from the following perspectives: information discarding, information concentration, rotation robustness, adversarial robustness, and neighborhood inconsistency. We conduct comparative studies based on these metrics to verify the hypotheses, which may shed new light on the architectural design of neural networks. Experiments demonstrate the effectiveness of our method. The code will be released when this paper is accepted.
The paper addresses the task of evaluating point-based architectures for deep learning, aiming at establishing the contribution of various architectural blocks (e.g., local density reweighting) to a number of desirable properties of the resulting models (such as adversarial robustness, rotation robustness, etc.). To assess the influence of such blocks on the resulting models, the authors formulate and test a variety of hypotheses while measuring model performance using a number of existing measures, such as information discarding/concentration, etc. I believe the paper cannot be accepted in its present form, as (1) no clear motivation for the work is presented, (2) the problem statement and the research questions are very blurry, and consequently (3) the results do not provide significant new knowledge related to point-based neural architectures. While the specific architectural choices do influence the characteristics of commonly used models and a systematic evaluation of the former is indeed useful for practice, the paper does not indicate a particular application context, which I believe is crucial to the entire following narrative. Do the authors aim at evaluating in the recognition/labeling context? In the synthesis context (e.g., novel shape synthesis, completion, or upsampling)? In the geometry processing context (normal estimation, patch decomposition, or denoising)? Would the claimed hypotheses, let alone the results, even be formulated the same way once one switches to a different class of applications? If yes, it would be hard to understand from the paper. If no, this could potentially diminish the value of the entire work significantly. If the paper targeted a particular class of applications or simply a specific task, the hypotheses might have looked more natural; however, currently I find the motivation for these hypotheses rather vague. New insights into neural architectures or other models could emerge from (empirical and/or theoretical) regularities revealed by a purposeful experiment; however, the authors perform a seemingly random evaluation of point-based architectures, which is of fairly limited value per se. The reader might wonder: what is the exact problem that the study aims to address? Thus, I believe, a clarification of the problem statement and the basic research questions for the entire work is needed. The paper does a good job of thoroughly presenting the results of numerous computational experiments and the values of the selected measures. However, as stated before, the results fail to provide significant insights into point-based neural architectures due to (1) the lack of specific applications the authors are trying to address and (2) the arbitrariness of the chosen experiments, which leads to equally arbitrary conclusions. If the “architecture of We et al., which uses local density information, improves adversarial robustness”, what difference would it make? Would it open novel opportunities for point-based deep nets, and if yes, which exactly? One minor addition: since the paper focuses on empirical evaluation of machine learning models, significance measures such as statistical confidence intervals would be very useful to present in the results.
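Editor's illustrative aside: the reviewer's closing suggestion about statistical confidence intervals is easy to act on; one common, method-agnostic option is a bootstrap interval over per-run (or per-shape) metric values. The sketch below is a generic recipe under the assumption that a vector of scores per configuration is available; it is not tied to any specific metric in the paper, and the numbers in the example are made up.

```python
import numpy as np

def bootstrap_ci(scores, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of a 1-D array of metric values."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    # Resample with replacement and record the mean of each resample.
    means = np.array([
        rng.choice(scores, size=scores.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return scores.mean(), (lo, hi)

# Example: robustness scores from 8 hypothetical training runs.
mean, (lo, hi) = bootstrap_ci([0.71, 0.69, 0.74, 0.70, 0.73, 0.68, 0.72, 0.70])
print(f"mean={mean:.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```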
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nUtility Analysis of Network Architectures for 3D Point Cloud Processing\nIn this paper, we diagnose deep neural networks for 3D point cloud processing to explore the utility of different network architectures. We propose a number of hypotheses on the effects of specific network architectures on the representation capacity of DNNs. In order to prove the hypotheses, we design five metrics to diagnose various types of DNNs from the following perspectives, information discarding, information concentration, rotation robustness, adversarial robustness, and neighborhood inconsistency. We conduct comparative studies based on such metrics to verify the hypotheses, which may shed new lights on the architectural design of neural networks. Experiments demonstrated the effectiveness of our method. The code will be released when this paper is accepted.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper addresses the task of evaluation of point-based architectures for deep learning, aiming at establishing the contribution of various architectural blocks (e.g., local density reweighting) to a number of desirable properties of the resulting models (such as adversarial robustness, rotation robustness, etc.). To assess the influence of such blocks on the resulting models, authors formulate and test a variety hypotheses while measuring model performance using a number of existing measures, such as information discarding/concentration, etc. I believe the paper cannot be accepted in its present form, as (1) no clear motivation for the work is presented, (2) the problem statement and the research questions are very blurry, and consequently (3) the results do not provide significant new knowledge related to the point-based neural architectures. 
While the specific architectural choices do influence the characteristics of commonly used models and a systematic evaluation of the former is indeed useful for practice, the paper does not indicate a particular application context, which I believe is crucial to the entire following narrative. Do the authors aim at evaluating in the recognition/labeling context? In the synthesis context (e.g., novel shape synthesis, completion, or upsampling)? In the geometry processing context (normal estimation, patch decomposition, or denoising)? Would the claimed hypothesis, let alone the results, even be formulated the same way, once one switches to a different class of applications? If yes, it would be hard to understand from the paper. If no, this could potentially diminish the value of the entire work significantly. If the paper targeted a particular class of applications or simply a specific task, the hypotheses might have looked more natural; however, currently I find the motivation for these hypotheses rather vague. New insights into neural architectures or other models could emerge from (empirical and/or theoretical) regularities revealed by a purposeful experiment; however, the authors perform a seemingly random evaluation of point-based architectures which is of a fairly limited value per se. The reader might wonder: what is the exact problem that the study aims to address? Thus, I believe, a clarification of the problem statement and the basic research questions for the entire work is needed. The paper does a good job of thoroughly presenting results of numerous computational experiments and values of the selected measures. However, as stated before, the results fail to provide significant insights into point-based neural architectures due to (1) lack of specific applications the authors are trying to solve and (2) arbitrariness of the chosen experiments lead to equally arbitrary conclusions. If “architecture of We et al., which uses local density information, improves adversarial robustness”, what difference would it make? Would it open novel opportunities for point-based deep nets, and if yes, which exactly? One minor additions is that since the paper focuses on empirical evaluation of machine learning models, significance measures such a statistical confidence intervals would be very useful to present in the results.", "role": "assistant" } ]
[ "The paper addresses the task of evaluation of point-based architectures for deep learning, aiming at establishing the contribution of various architectural blocks (e.g., local density reweighting) to a number of desirable properties of the resulting models (such as adversarial robustness, rotation robustness, etc.).", "To assess the influence of such blocks on the resulting models, authors formulate and test a variety hypotheses while measuring model performance using a number of existing measures, such as information discarding/concentration, etc.", "I believe the paper cannot be accepted in its present form, as (1) no clear motivation for the work is presented, (2) the problem statement and the research questions are very blurry, and consequently (3) the results do not provide significant new knowledge related to the point-based neural architectures.", "While the specific architectural choices do influence the characteristics of commonly used models and a systematic evaluation of the former is indeed useful for practice, the paper does not indicate a particular application context, which I believe is crucial to the entire following narrative.", "Do the authors aim at evaluating in the recognition/labeling context?", "In the synthesis context (e.g., novel shape synthesis, completion, or upsampling)?", "In the geometry processing context (normal estimation, patch decomposition, or denoising)?", "Would the claimed hypothesis, let alone the results, even be formulated the same way, once one switches to a different class of applications?", "If yes, it would be hard to understand from the paper.", "If no, this could potentially diminish the value of the entire work significantly.", "If the paper targeted a particular class of applications or simply a specific task, the hypotheses might have looked more natural; however, currently I find the motivation for these hypotheses rather vague.", "New insights into neural architectures or other models could emerge from (empirical and/or theoretical) regularities revealed by a purposeful experiment; however, the authors perform a seemingly random evaluation of point-based architectures which is of a fairly limited value per se.", "The reader might wonder: what is the exact problem that the study aims to address?", "Thus, I believe, a clarification of the problem statement and the basic research questions for the entire work is needed.", "The paper does a good job of thoroughly presenting results of numerous computational experiments and values of the selected measures.", "However, as stated before, the results fail to provide significant insights into point-based neural architectures due to (1) lack of specific applications the authors are trying to solve and (2) arbitrariness of the chosen experiments lead to equally arbitrary conclusions.", "If “architecture of We et al., which uses local density information, improves adversarial robustness”, what difference would it make?", "Would it open novel opportunities for point-based deep nets, and if yes, which exactly?", "One minor additions is that since the paper focuses on empirical evaluation of machine learning models, significance measures such a statistical confidence intervals would be very useful to present in the results." ]
{ "criticism": 4, "example": 0, "importance_and_relevance": 10, "materials_and_methods": 12, "praise": 1, "presentation_and_reporting": 2, "results_and_discussion": 9, "suggestion_and_solution": 3, "total": 19 }
0.210526
0
0.526316
0.631579
0.052632
0.105263
0.473684
0.157895
19
1
0
1
1
1
1
1
1
2.157895
1.589727
0.568167
Learning an off-policy predictive state representation for deep reinforcement learning for vision-based steering in autonomous driving
An algorithm is introduced for learning a predictive state representation with off-policy temporal difference (TD) learning that is then used to learn to steer a vehicle with reinforcement learning. There are three components being learned simultaneously: (1) the off-policy predictions as a compact representation of state, (2) the behavior policy distribution for estimating the off-policy predictions, and (3) the deterministic policy gradient for learning to act. A behavior policy discriminator is learned and used for estimating the importance sampling ratios needed to learn the predictive representation off-policy with general value functions (GVFs). A linear deterministic policy gradient method is used to train the agent with only the predictive representations while the predictions are being learned. All three components are combined, demonstrated and evaluated on the problem of steering the vehicle from images in the TORCS racing simulator environment. Steering from only images is a challenging problem where evaluation is completed on a held-out set of tracks that were never seen during training in order to measure the generalization of the predictions and the controller. Experiments show the proposed method is able to steer smoothly and navigate many but not all of the tracks available in TORCS, with performance that exceeds DDPG using only images as input and approaches the performance of an ideal non-vision-based kinematics model.
This paper considers the usage of learnt general value functions (GVFs) as representations for downstream control tasks in autonomous driving. The authors learn GVFs for hand-defined cumulant functions from off-policy data using TD-learning, using estimated importance weights to keep updates unbiased. A predictive representation (forecast) is chosen by the authors, which measures the “lane-centeredness” and “road angle” of the car at various effective horizons. The authors perform an experimental evaluation on TORCS using images, where they demonstrate that control policies learnt using such a predictive representation generalize better than policies which learn end-to-end directly from images. The use and discovery of predictive representations for RL, especially in the deep setting, is an interesting and important problem, relatively under-explored in the deep RL setting. The paper presents little technical novelty; however, the success of learning systems that perform control with predictive representations in this autonomous driving setting is of interest to the ICLR community. However, the paper suffers from a lack of clarity in presentation, both in the exposition and in the experimental section. Furthermore, I believe the experimental evaluation doesn’t shed light on the performance of the predictive representation estimation, and leaves questions open as to whether the representation improves generalization or alleviates training problems. With these concerns in mind, I believe this paper holds promise, but in its current state it is not yet ready for publication at ICLR. Technical Contribution: The use of predictive state representations / forecasts is, to my knowledge, a relatively under-explored setting in deep RL, so the success of such representations for autonomous driving is of interest. The primary technical contribution is the technique of using a continually learnt value function as the representation for a downstream policy. The adversarial density estimation procedure introduced in the paper is not novel, and has been studied extensively in the community, especially in inverse RL and exploration. Presentation / Clarity: The presentation of the paper can be much improved, both in detail and in choice of language. Figures and their corresponding captions in Section 4 are difficult to parse. Most figures lack a legend, some lack axis labels, and captions are often non-informative. I found the exposition unclear / confusing at times, and would recommend that the authors revise the paper to improve readability in general, with a focus on the introduction and the experimental section. Experimental Evaluation: The experimental evaluation is missing an analysis of the learnt predictive representation (GVFs). In particular, I would like to see a stronger analysis investigating whether the learnt value functions are accurate, how fast/effectively these representations adapt to changes in the policy, how accurate the estimated density function is (using the adversarial technique), and whether this importance sampling term is necessary in practice for GVF estimation (as it seems that the experiments are run in a more online fashion). An analysis of this form would lend greater support to the argument that the proposed policy evaluation scheme is effective and useful - and that the predictive representation is actually predicting the desired quantities.
No training curves are indicated anywhere, so I also have remaining questions as to whether the gain in performance presented by the authors is a function of the representation / policy generalizing better, or whether the representation simplifies the RL optimization process, thus enabling higher performance on the testing set. Furthermore, is the policy able to generalize better because of the choice of representation, or because the policy is of simplified form (linear) - and would these results hold for more expressive policies using this predictive representation? Finally, I would encourage the authors to include results for the comparison where the current cumulants are predicted from images, as mentioned in Chen et al. 2015. The experimental setup is well explained in the main text and in the appendix. Specific Comments: 1. Citations are incorrectly formatted throughout the paper. 2. In the first paragraph of Section 1.1, the bicycle model is never explained. 3. The paper does not define what a predictive representation is when it is first mentioned in Section 3. 4. The units for the X-axis of Figure 3 are not specified (also Figures 4 and 6). 5. Equation 5 is missing a constant of 2. 6. As the authors deal with PSRs, the original paper introducing PSRs [1] should be cited. [1] Littman et al., "Predictive Representations of State", NeurIPS 2001
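Editor's illustrative aside: for readers less familiar with the GVF machinery this review discusses, the core operation is an off-policy TD(0) step whose correction is the importance ratio rho = pi(a|s) / b(a|s), with b estimated in the paper by a discriminator. The minimal linear-function-approximation sketch below uses my own notation and made-up features; it is not the authors' code.

```python
import numpy as np

def gvf_td_update(w, phi_s, phi_s_next, cumulant, gamma, rho, alpha=0.01):
    """One off-policy TD(0) step for a general value function with linear
    features: v(s) = w . phi(s). `rho` is the importance sampling ratio
    pi(a|s) / b_hat(a|s), where b_hat could come from a learned
    behavior-policy discriminator as described in the review."""
    td_error = cumulant + gamma * np.dot(w, phi_s_next) - np.dot(w, phi_s)
    return w + alpha * rho * td_error * phi_s

# Toy usage: a 16-d feature vector predicting a "lane-centeredness"-style
# cumulant at discount gamma = 0.9.
rng = np.random.default_rng(1)
w = np.zeros(16)
w = gvf_td_update(w, rng.normal(size=16), rng.normal(size=16),
                  cumulant=0.8, gamma=0.9, rho=1.2)
print(w[:4])
```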
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning an off-policy predictive state representation for deep reinforcement learning for vision-based steering in autonomous driving\nAn algorithm is introduced for learning a predictive state representation with off-policy temporal difference (TD) learning that is then used to learn to steer a vehicle with reinforcement learning. There are three components being learned simultaneously: (1) the off-policy predictions as a compact representation of state, (2) the behavior policy distribution for estimating the off-policy predictions, and (3) the deterministic policy gradient for learning to act. A behavior policy discriminator is learned and used for estimating the important sampling ratios needed to learn the predictive representation off-policy with general value functions (GVFs). A linear deterministic policy gradient method is used to train the agent with only the predictive representations while the predictions are being learned. All three components are combined, demonstrated and evaluated on the problem of steering the vehicle from images in the TORCS racing simulator environment. Steering from only images is a challenging problem where evaluation is completed on a held-out set of tracks that were never seen during training in order to measure the generalization of the predictions and controller. Experiments show the proposed method is able to steer smoothly and navigate many but not all of the tracks available in TORCS with performance that exceeds DDPG using only images as input and approaches the performance of an ideal non-vision based kinematics model.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper considers the usage of learnt general value functions (GVFs) as representations for downstream control tasks in autonomous driving. The authors learn GVFs for hand-defined cumulant functions from off-policy data using TD-learning, using estimated importance weights to keep updates unbiased. 
A predictive representation (forecast) is chosen by the authors, which measures the “lane-centeredness” and “road angle” of the car at various effective horizons. The authors perform an experimental evaluation on TORCS using images, where they demonstrate that control policies learnt using such a predictive representation generalize better than policies which learn end-to-end directly from images. The use and discovery of predictive representations for RL, especially in the deep setting, is an interesting and important problem, relatively under-explored in the deep RL setting. The paper presents little technical novelty, however, the success of learning systems to perform control with predictive representations in this autonomous driving setting is of interest to the ICLR community. However, the paper suffers from lack of clarity and in presentation, in exposition and in the experimental section. Furthermore, I believe the experimental evaluation doesn’t shed light on the performance of the predictive representation estimation, and leaves questions open as to whether the representation improves generalization or alleviates training problems. With these concerns in hand, I believe this paper holds promise, but in its current state is not yet ready for publication at ICLR. Technical Contribution: The use of predictive state representations / forecasts is to my knowledge a relatively under-explored setting in deep RL, so the success of such representations for autonomous driving is of interest. The primary technical contribution is techniques to use a continually learnt value function as the representation for a downstream policy. The adversarial density estimation procedure introduced in the paper is not novel, and has been studied previously extensively in the community, especially in inverse RL and exploration. Presentation / Clarity: The presentation of the paper can be much improved, in detail and choice of language. Figures and their corresponding captions in Section 4 are difficult to parse. Most figures lack a legend, some lack axis labels, and captions are often non-informative. I found the exposition unclear / confusing at times, and would recommend the authors to revise the paper to improve readability in general, with focus on the introduction and experimental section. Experimental Evaluation: The experimental evaluation is missing an analysis of the learnt predictive representation (GVFs). In particular, I would like to see a stronger analysis investigating whether the learnt value functions are accurate, how fast /effectively these representations adapt to changes in the policy, how accurate the estimated density function is (using the adversarial technique), and whether this importance sampling term is necessary in practice for GVF estimation (as it seems that the experiments are run in a more online fashion). An analysis of this form would lend greater support to the argument that the proposed policy evaluation scheme is effective and useful - and that the predictive representation is actually predicting the desired quantities. No training curves are indicated anywhere, so I also have remaining questions as to whether the gain in performance presented by the authors is a function of the representation / policy generalizing better, or if the representation simplifies the RL optimization process, and thus enabling higher performance on the testing set. 
Furthermore, is the policy able to generalize better because of the choice of representation, or because the policy is of simplified form (linear) - would these hold for more expressive policies using this predictive representation? Finally, I would encourage the authors to include results for the comparison where the current cumulants are predicted from images as mentioned in Chen et. al 2015. The experimental setup is well explained in the main text and in the appendix. Specific Comments: 1. Citations are incorrectly formatted throughout the paper 2. In the first paragraph of Section 1.1, the bicycle model is never explained 3. The paper does not define what a predictive representation is when it is first mentioned in Section 3 4. The units for the X-axis for Figure 3 are not specified (also Figure 4, Figure 6) 5. Equation 5 is missing a constant of 2 6. As the authors deal with PSRs, the original paper introducing PSRs [1] should be cited. [1] Littman et al, *Predictive Representations of State*, NeurIPS 2001", "role": "assistant" } ]
[ "This paper considers the usage of learnt general value functions (GVFs) as representations for downstream control tasks in autonomous driving.", "The authors learn GVFs for hand-defined cumulant functions from off-policy data using TD-learning, using estimated importance weights to keep updates unbiased.", "A predictive representation (forecast) is chosen by the authors, which measures the “lane-centeredness” and “road angle” of the car at various effective horizons.", "The authors perform an experimental evaluation on TORCS using images, where they demonstrate that control policies learnt using such a predictive representation generalize better than policies which learn end-to-end directly from images.", "The use and discovery of predictive representations for RL, especially in the deep setting, is an interesting and important problem, relatively under-explored in the deep RL setting.", "The paper presents little technical novelty, however, the success of learning systems to perform control with predictive representations in this autonomous driving setting is of interest to the ICLR community.", "However, the paper suffers from lack of clarity and in presentation, in exposition and in the experimental section.", "Furthermore, I believe the experimental evaluation doesn’t shed light on the performance of the predictive representation estimation, and leaves questions open as to whether the representation improves generalization or alleviates training problems.", "With these concerns in hand, I believe this paper holds promise, but in its current state is not yet ready for publication at ICLR.", "Technical Contribution: The use of predictive state representations / forecasts is to my knowledge a relatively under-explored setting in deep RL, so the success of such representations for autonomous driving is of interest.", "The primary technical contribution is techniques to use a continually learnt value function as the representation for a downstream policy.", "The adversarial density estimation procedure introduced in the paper is not novel, and has been studied previously extensively in the community, especially in inverse RL and exploration.", "Presentation / Clarity: The presentation of the paper can be much improved, in detail and choice of language.", "Figures and their corresponding captions in Section 4 are difficult to parse.", "Most figures lack a legend, some lack axis labels, and captions are often non-informative.", "I found the exposition unclear / confusing at times, and would recommend the authors to revise the paper to improve readability in general, with focus on the introduction and experimental section.", "Experimental Evaluation: The experimental evaluation is missing an analysis of the learnt predictive representation (GVFs).", "In particular, I would like to see a stronger analysis investigating whether the learnt value functions are accurate, how fast /effectively these representations adapt to changes in the policy, how accurate the estimated density function is (using the adversarial technique), and whether this importance sampling term is necessary in practice for GVF estimation (as it seems that the experiments are run in a more online fashion).", "An analysis of this form would lend greater support to the argument that the proposed policy evaluation scheme is effective and useful - and that the predictive representation is actually predicting the desired quantities.", "No training curves are indicated anywhere, so I also have remaining questions as to whether 
the gain in performance presented by the authors is a function of the representation / policy generalizing better, or if the representation simplifies the RL optimization process, and thus enabling higher performance on the testing set.", "Furthermore, is the policy able to generalize better because of the choice of representation, or because the policy is of simplified form (linear) - would these hold for more expressive policies using this predictive representation?", "Finally, I would encourage the authors to include results for the comparison where the current cumulants are predicted from images as mentioned in Chen et.", "al 2015.", "The experimental setup is well explained in the main text and in the appendix.", "Specific Comments: 1.", "Citations are incorrectly formatted throughout the paper 2.", "In the first paragraph of Section 1.1, the bicycle model is never explained 3.", "The paper does not define what a predictive representation is when it is first mentioned in Section 3 4.", "The units for the X-axis for Figure 3 are not specified (also Figure 4, Figure 6) 5.", "Equation 5 is missing a constant of 2 6.", "As the authors deal with PSRs, the original paper introducing PSRs [1] should be cited.", "[1] Littman et al, *Predictive Representations of State*, NeurIPS 2001" ]
{ "criticism": 15, "example": 1, "importance_and_relevance": 7, "materials_and_methods": 17, "praise": 7, "presentation_and_reporting": 12, "results_and_discussion": 5, "suggestion_and_solution": 6, "total": 32 }
0.46875
0.03125
0.21875
0.53125
0.21875
0.375
0.15625
0.1875
32
1
1
1
1
1
1
1
1
2.1875
-3.509957
5.697457
Universal Source-Free Domain Adaptation
There is a strong incentive to develop versatile learning techniques that can transfer the knowledge of class-separability from a labeled source domain to an unlabeled target domain in the presence of a domain-shift. Existing domain adaptation (DA) approaches are not equipped for practical DA scenarios as a result of their reliance on knowledge of the source-target label-set relationship (e.g. Closed-set, Open-set or Partial DA). Furthermore, almost all prior unsupervised DA works require the coexistence of source and target samples even during deployment, making them unsuitable for incremental, real-time adaptation. Devoid of such highly impractical assumptions, we propose a novel two-stage learning process. Initially, in the procurement stage, the objective is to equip the model for future source-free deployment, assuming no prior knowledge of the upcoming category-gap and domain-shift. To achieve this, we enhance the model’s ability to reject out-of-source-distribution samples by leveraging the available source data, in a novel generative classifier framework. Subsequently, in the deployment stage, the objective is to design a unified adaptation algorithm capable of operating across a wide range of category-gaps, with no access to the previously seen source samples. To achieve this, in contrast to the usage of complex adversarial training regimes, we define a simple yet effective source-free adaptation objective by utilizing a novel instance-level weighting mechanism, named the Source Similarity Metric (SSM). A thorough evaluation shows the practical usability of the proposed learning framework, with superior DA performance even over state-of-the-art source-dependent approaches.
This interesting paper on an important topic is extremely hard to read because it assumes that the reader is deeply familiar with Domain Adaptation. In order to make the paper more accessible to a general audience, the authors should: 1) follow the 1st sentence in the abstract by a new sentence that provides an intuitive example of the idea of domain shift from labeled source to unlabeled target domain 2) re-organize the current intro, which reads more like related work, into a more traditional format - one intuitive paragraph about domain adaptation, source/target domain, and domain shift; provide at least one compelling illustrative example from a real-world application - one intuitive paragraph with an illustrative example on how the proposed approach will help solve a problem; at the same intuitive level, compare-and-contrast it with existing approaches - one intuitive, in-detail paragraph on how the proposed approach works - one paragraph summarizing the main findings/results 3) ideally, add a section with an illustrative running example, which would have a huge impact on the paper's readability
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nUniversal Source-Free Domain Adaptation\nThere is a strong incentive to develop versatile learning techniques that can transfer the knowledge of class-separability from a labeled source domain to an unlabeled target domain in the presence of a domain-shift. Existing domain adaptation (DA) approaches are not equipped for practical DA scenarios as a result of their reliance on the knowledge of source-target label-set relationship (e.g. Closed-set, Open-set or Partial DA). Furthermore, almost all the prior unsupervised DA works require coexistence of source and target samples even during deployment, making them unsuitable for incremental, real-time adaptation. Devoid of such highly impractical assumptions, we propose a novel two-stage learning process. Initially, in the procurement-stage, the objective is to equip the model for future source-free deployment, assuming no prior knowledge of the upcoming category-gap and domain-shift. To achieve this, we enhance the model’s ability to reject out-of-source distribution samples by leveraging the available source data, in a novel generative classifier framework. Subsequently, in the deployment-stage, the objective is to design a unified adaptation algorithm capable of operating across a wide range of category-gaps, with no access to the previously seen source samples. To achieve this, in contrast to the usage of complex adversarial training regimes, we define a simple yet effective source-free adaptation objective by utilizing a novel instance-level weighing mechanism, named as Source Similarity Metric (SSM). A thorough evaluation shows the practical usability of the proposed learning framework with superior DA performance even over state-of-the-art source-dependent approaches.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis interesting paper on an important topic is extremely hard to read because it assumes that the reader is deeply familiar with the Domain Adaptation. 
In order to make the paper more accessible to a general audience, the authors should: 1) follow the 1st sentence in the abstract by a new sentence that provides an intuitive example the idea of domain shift from labeled source to unlabeled target domain 2) re-organize the current intro, which reads more like related work, into a more traditional format - one intuitive paragraph about domain adaptation, source/target domain, and domain shift; provide at least one compelling illustrative example from a real-world application - one intuitive paragraph with an illustrative example on how the proposed approach will help solve a problem; at the same intuitive level, compare-and-contrast it with existing approaches - one intuitive, in-detail paragraph on how the proposed approach works - one paragraph summarizing the main findings/results 3) ideally, add a section with an illustrative running example, which would have a huge impact on the paper*s readability", "role": "assistant" } ]
[ "This interesting paper on an important topic is extremely hard to read because it assumes that the reader is deeply familiar with the Domain Adaptation.", "In order to make the paper more accessible to a general audience, the authors should: 1) follow the 1st sentence in the abstract by a new sentence that provides an intuitive example the idea of domain shift from labeled source to unlabeled target domain 2) re-organize the current intro, which reads more like related work, into a more traditional format - one intuitive paragraph about domain adaptation, source/target domain, and domain shift; provide at least one compelling illustrative example from a real-world application - one intuitive paragraph with an illustrative example on how the proposed approach will help solve a problem; at the same intuitive level, compare-and-contrast it with existing approaches - one intuitive, in-detail paragraph on how the proposed approach works - one paragraph summarizing the main findings/results 3) ideally, add a section with an illustrative running example, which would have a huge impact on the paper*s readability" ]
{ "criticism": 1, "example": 1, "importance_and_relevance": 1, "materials_and_methods": 0, "praise": 1, "presentation_and_reporting": 2, "results_and_discussion": 1, "suggestion_and_solution": 1, "total": 2 }
0.5
0.5
0.5
0
0.5
1
0.5
0.5
2
1
1
1
0
1
1
1
1
4
2.090326
1.909674
Locally adaptive activation functions with slope recovery term for deep and physics-informed neural networks
We propose two approaches of locally adaptive activation functions namely, layer-wise and neuron-wise locally adaptive activation functions, which improve the performance of deep and physics-informed neural networks. The local adaptation of activation function is achieved by introducing scalable hyper-parameters in each layer (layer-wise) and for every neuron separately (neuron-wise), and then optimizing it using the stochastic gradient descent algorithm. Introduction of neuron-wise activation function acts like a vector activation function as opposed to the traditional scalar activation function given by fixed, global and layer-wise activations. In order to further increase the training speed, an activation slope based slope recovery term is added in the loss function, which further accelerate convergence, thereby reducing the training cost. For numerical experiments, a nonlinear discontinuous function is approximated using a deep neural network with layer-wise and neuron-wise locally adaptive activation functions with and without the slope recovery term and compared with its global counterpart. Moreover, solution of the nonlinear Burgers equation, which exhibits steep gradients, is also obtained using the proposed methods. On the theoretical side, we prove that in the proposed method the gradient descent algorithms are not attracted to sub-optimal critical points or local minima under practical conditions on the initialization and learning rate. Furthermore, the proposed adaptive activation functions with the slope recovery are shown to accelerate the training process in standard deep learning benchmarks using CIFAR-10, CIFAR-100, SVHN, MNIST, KMNIST, Fashion-MNIST, and Semeion data sets with and without data augmentation.
This paper introduces a parameter to scale the input to an activation function and a loss function to encourage the scaling parameter to be large. The authors test it to learn a discontinuous function as well as several image datasets. The concept of learnable activation functions is not new and this work is fairly incremental. However, for the image dataset results it seems the authors are using ReLU for their activation functions. Having a scaling parameter before the input to ReLU and encouraging that scaling parameter to be large is similar to increasing the learning rate. There are no experiments to verify whether or not this is the case. Also, the authors only report loss and not accuracy on these datasets.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLocally adaptive activation functions with slope recovery term for deep and physics-informed neural networks\nWe propose two approaches of locally adaptive activation functions namely, layer-wise and neuron-wise locally adaptive activation functions, which improve the performance of deep and physics-informed neural networks. The local adaptation of activation function is achieved by introducing scalable hyper-parameters in each layer (layer-wise) and for every neuron separately (neuron-wise), and then optimizing it using the stochastic gradient descent algorithm. Introduction of neuron-wise activation function acts like a vector activation function as opposed to the traditional scalar activation function given by fixed, global and layer-wise activations. In order to further increase the training speed, an activation slope based slope recovery term is added in the loss function, which further accelerate convergence, thereby reducing the training cost. For numerical experiments, a nonlinear discontinuous function is approximated using a deep neural network with layer-wise and neuron-wise locally adaptive activation functions with and without the slope recovery term and compared with its global counterpart. Moreover, solution of the nonlinear Burgers equation, which exhibits steep gradients, is also obtained using the proposed methods. On the theoretical side, we prove that in the proposed method the gradient descent algorithms are not attracted to sub-optimal critical points or local minima under practical conditions on the initialization and learning rate. Furthermore, the proposed adaptive activation functions with the slope recovery are shown to accelerate the training process in standard deep learning benchmarks using CIFAR-10, CIFAR-100, SVHN, MNIST, KMNIST, Fashion-MNIST, and Semeion data sets with and without data augmentation.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper introduces a parameter to scale the input to an activation function and a loss function to encourage the scaling parameter to be large. The authors test it to learn a discontinuous function as well as several image datasets. The concept of learnable activation functions is not new and this work is fairly incremental. However, for the image dataset results it seems the authors are using ReLU for their activation functions. Having a scaling parameter before the input to ReLU and encouraging that scaling parameter to be large is similar to increasing the learning rate. There are no experiments to verify whether or not this is the case. Also, the authors only report loss and not accuracy on these datasets.", "role": "assistant" } ]
[ "This paper introduces a parameter to scale the input to an activation function and a loss function to encourage the scaling parameter to be large.", "The authors test it to learn a discontinuous function as well as several image datasets.", "The concept of learnable activation functions is not new and this work is fairly incremental.", "However, for the image dataset results it seems the authors are using ReLU for their activation functions.", "Having a scaling parameter before the input to ReLU and encouraging that scaling parameter to be large is similar to increasing the learning rate.", "There are no experiments to verify whether or not this is the case.", "Also, the authors only report loss and not accuracy on these datasets." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 7, "praise": 0, "presentation_and_reporting": 0, "results_and_discussion": 3, "suggestion_and_solution": 2, "total": 7 }
0.285714
0
0.142857
1
0
0
0.428571
0.285714
7
1
0
1
1
0
0
1
1
2.142857
1.57469
0.568167
Fast Sparse ConvNets
Historically, the pursuit of efficient inference has been one of the driving forces behind the research into new deep learning architectures and building blocks. Some of the recent examples include: the squeeze-and-excitation module of (Hu et al., 2018), depthwise separable convolutions in Xception (Chollet, 2017), and the inverted bottleneck in MobileNet v2 (Sandler et al., 2018). Notably, in all of these cases, the resulting building blocks enabled not only higher efficiency, but also higher accuracy, and found wide adoption in the field. In this work, we further expand the arsenal of efficient building blocks for neural network architectures; but instead of combining standard primitives (such as convolution), we advocate for the replacement of these dense primitives with their sparse counterparts. While the idea of using sparsity to decrease the parameter count is not new (Mozer & Smolensky, 1989), the conventional wisdom is that this reduction in theoretical FLOPs does not translate into real-world efficiency gains. We aim to correct this misconception by introducing a family of efficient sparse kernels for several hardware platforms, which we plan to open-source for the benefit of the community. Equipped with our efficient implementation of sparse primitives, we show that sparse versions of MobileNet v1 and MobileNet v2 architectures substantially outperform strong dense baselines on the efficiency-accuracy curve. On Snapdragon 835 our sparse networks outperform their dense equivalents by 1.1–2.2x – equivalent to approximately one entire generation of improvement. We hope that our findings will facilitate wider adoption of sparsity as a tool for creating efficient and accurate deep learning architectures.
This paper empirically shows that using weight sparsity to accelerate state-of-the-art convolutional networks is possible by providing fast kernels for ARM and x86 processors. Overall the intuitions are reasonable and the experiments show that the methods and rules of the kernel operations are providing some advantages. The ideas used in the paper don't seem to be novel, however, I am not knowledgeable enough to rebut or provide existing work references. The paper doesn't compare the sparse results with other sparsity methods.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nFast Sparse ConvNets\nHistorically, the pursuit of efficient inference has been one of the driving forces be-hind the research into new deep learning architectures and building blocks. Some of the recent examples include: the squeeze-and-excitation module of (Hu et al.,2018), depthwise separable convolutions in Xception (Chollet, 2017), and the inverted bottleneck in MobileNet v2 (Sandler et al., 2018). Notably, in all of these cases, the resulting building blocks enabled not only higher efficiency, but also higher accuracy, and found wide adoption in the field. In this work, we further expand the arsenal of efficient building blocks for neural network architectures; but instead of combining standard primitives (such as convolution), we advocate for the replacement of these dense primitives with their sparse counterparts. While the idea of using sparsity to decrease the parameter count is not new (Mozer & Smolensky, 1989), the conventional wisdom is that this reduction in theoretical FLOPs does not translate into real-world efficiency gains. We aim to correct this misconception by introducing a family of efficient sparse kernels for several hardware platforms, which we plan to open-source for the benefit of the community. Equipped with our efficient implementation of sparse primitives, we show that sparse versions of MobileNet v1 and MobileNet v2 architectures substantially outperform strong dense baselines on the efficiency-accuracy curve. On Snapdragon 835 our sparse networks outperform their dense equivalents by 1.1?2.2x – equivalent to approximately one entire generation of improvement. We hope that our findings will facilitate wider adoption of sparsity as a tool for creating efficient and accurate deep learning architectures.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper empirically shows that using weight sparsity to accelerate state-of-the-art convolutional networks is possible by providing fast kernels ARM and x86 processors. 
Overall the intuitions are reasonable and the experiments show that the methods and rules of the kernel operations are providing some advantages. The ideas used in the paper don't seem to be novel, however, I am not knowledgeable enough to rebut or provide existing work references. The paper doesn't compare the sparse results with other sparsity methods.", "role": "assistant" } ]
[ "This paper empirically shows that using weight sparsity to accelerate state-of-the-art convolutional networks is possible by providing fast kernels ARM and x86 processors.", "Overall the intuitions are reasonable and the experiments show that the methods and rules of the kernel operations are providing some advantages.", "This ideas used in the paper doesn*t seem to be novel, however, I am not knowledgeable enough to rebut or provide existing work references.", "The paper doesn*t compare the sparse results with other sparsity methods." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 3, "praise": 1, "presentation_and_reporting": 0, "results_and_discussion": 2, "suggestion_and_solution": 0, "total": 4 }
0.5
0
0.25
0.75
0.25
0
0.5
0
4
1
0
1
1
1
0
1
0
2.25
0.971623
1.278377
Towards a Unified Evaluation of Explanation Methods without Ground Truth
This paper proposes a set of criteria to evaluate the objectiveness of explanation methods of neural networks, which is crucial for the development of explainable AI, but it also presents significant challenges. The core challenge is that people usually cannot obtain ground-truth explanations of the neural network. To this end, we design four metrics to evaluate the explanation result without ground-truth explanations. Our metrics can be broadly applied to nine benchmark methods of interpreting neural networks, which provides new insights of explanation methods.
The paper proposes four metrics to evaluate explanation methods, namely 1) bias of the attribution map based on Shapley value, 2) unexplainable feature components, 3) robustness of explanations, and 4) mutual verification (whether two explanation methods can verify each other). These metrics focus on explanation methods that give attribution maps as outputs (such as a pixel-level attention or saliency map). Besides, it also evaluates nine widely used explanation approaches with these metrics. I would recommend for acceptance. Several explanation methods have been proposed to generate attribution maps. However, it is shown that it is hard to inspect the quality of these explanation methods visually (e.g., as mentioned in http://arxiv.org/abs/1810.03292). I believe the proposed metrics provide valuable insights on quantifying the performance of explanation methods. The evaluation results on nine existing approaches are also helpful to future work on explainable AI. However, I would also like to note that while the four metrics proposed in the paper reflect the desired properties of explanation methods, they should not be the only criteria. The desired criteria depend on the application, and methods without these properties could also be potentially useful or even better in some cases. For example, instead of giving an unbiased attribution map, it would be sometimes more helpful to provide a discrete binary mask of regions of interest (e.g., for downstream image processing).
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTowards a Unified Evaluation of Explanation Methods without Ground Truth\nThis paper proposes a set of criteria to evaluate the objectiveness of explanation methods of neural networks, which is crucial for the development of explainable AI, but it also presents significant challenges. The core challenge is that people usually cannot obtain ground-truth explanations of the neural network. To this end, we design four metrics to evaluate the explanation result without ground-truth explanations. Our metrics can be broadly applied to nine benchmark methods of interpreting neural networks, which provides new insights of explanation methods.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes four metrics to evaluate explanation methods, namely 1) bias of the attribution map based on Shapley value, 2) unexplainable feature components, 3) robustness of explanations, and 4) mutual verification (whether two explanation methods can verify each other). These metrics focus on explanation methods that give attribution maps as outputs (such as a pixel-level attention or saliency map). Besides, it also evaluates nine widely used explanation approaches with these metrics. I would recommend for acceptance. Several explanation methods have been proposed to generate attribution maps. However, it is shown that it is hard to inspect the quality of these explanation methods visually (e.g., as mentioned in http://arxiv.org/abs/1810.03292). I believe the proposed metrics provide valuable insights on quantifying the performance of explanation methods. The evaluation results on nine existing approaches are also helpful to future work on explainable AI. However, I would also like to note that while the four metrics proposed in the paper reflect the desired properties of explanation methods, they should not be the only criteria. The desired criteria depend on the application, and methods without these properties could also be potentially useful or even better in some cases. 
For example, instead of giving an unbiased attribution map, it would be sometimes more helpful to provide a discrete binary mask of regions of interest (e.g., for downstream image processing).", "role": "assistant" } ]
[ "The paper proposes four metrics to evaluate explanation methods, namely 1) bias of the attribution map based on Shapley value, 2) unexplainable feature components, 3) robustness of explanations, and 4) mutual verification (whether two explanation methods can verify each other).", "These metrics focus on explanation methods that give attribution maps as outputs (such as a pixel-level attention or saliency map).", "Besides, it also evaluates nine widely used explanation approaches with these metrics.", "I would recommend for acceptance.", "Several explanation methods have been proposed to generate attribution maps.", "However, it is shown that it is hard to inspect the quality of these explanation methods visually (e.g., as mentioned in http://arxiv.org/abs/1810.03292).", "I believe the proposed metrics provide valuable insights on quantifying the performance of explanation methods.", "The evaluation results on nine existing approaches are also helpful to future work on explainable AI.", "However, I would also like to note that while the four metrics proposed in the paper reflect the desired properties of explanation methods, they should not be the only criteria.", "The desired criteria depend on the application, and methods without these properties could also be potentially useful or even better in some cases.", "For example, instead of giving an unbiased attribution map, it would be sometimes more helpful to provide a discrete binary mask of regions of interest (e.g., for downstream image processing)." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 10, "praise": 3, "presentation_and_reporting": 1, "results_and_discussion": 4, "suggestion_and_solution": 4, "total": 11 }
0.090909
0
0.181818
0.909091
0.272727
0.090909
0.363636
0.363636
11
1
0
1
1
1
1
1
1
2.272727
2.209598
0.06313
Towards a Unified Evaluation of Explanation Methods without Ground Truth
This paper proposes a set of criteria to evaluate the objectiveness of explanation methods of neural networks, which is crucial for the development of explainable AI, but it also presents significant challenges. The core challenge is that people usually cannot obtain ground-truth explanations of the neural network. To this end, we design four metrics to evaluate the explanation result without ground-truth explanations. Our metrics can be broadly applied to nine benchmark methods of interpreting neural networks, which provides new insights of explanation methods.
This paper considers the important problem of evaluating explanation methods for neural models without recourse to 'ground-truth' explanations. The authors rightly highlight that plausibility (to humans) does not imply faithfulness to the underlying models. They then define a suite of metrics, largely drawing upon known techniques, to evaluate explanations. For instance, the authors propose evaluating the agreement between model attributions and Shapley values. The latter are approximated in two ways: Only pixels with high (or low) attribution values are considered; and sampling is used to make estimation tractable. Overall, this is a nice paper that investigates an important problem --- I particularly appreciated the comparison of multiple attribution methods across this suite of metrics. However, the results are something of a mixed bag, and I was left without a clear takeaway. I am not sure the ICLR audience will get much out of this submission. It would be nice if the authors could provide additional guidance concerning the interpretation of the various metrics they propose, and for which scenarios they view these as appropriate. Strengths --- + The topic of explainability is of increasing importance, and the authors focus on an important question regarding this: How can we evaluate attribution methods? + The approach of defining a suite of metrics, rather than only one, is welcome given that different applications will call for different types of attribution. + The comparison of existing methods for attribution across a range of metrics is a nice contribution. That said, the results are sort of a mixed bag and it is not obvious what one should take away from this. Weaknesses --- - Much of this work (and the proposed metrics) seems to derive from prior efforts. I'm not sure how much novelty there is here, although there is a value in bringing these metrics together. - To the latter point, however, the authors do not really speak to the pros, cons, and applicability or relevance of the different metrics considered. A user of this suite of metrics would therefore be left with four separate measures, and it is not immediately clear how to interpret these, or what the relative performance of explainability methods with respect to these metrics tells us. - I am not sure I followed the unexplainable feature component metric (3.3). I found the notation and terminology here difficult to parse; can the authors please clarify what they mean by 'unexplainable feature components', and also clarify what represent (I think induced feature representations of the image I but again am not certain). From what I gather this is somehow measuring the peakiness (or lack thereof) of attribution values, but it is not obvious to me that this is an inherently desirable property. - The robustness metric seems to make the assumption that the model does not capture interactions, i.e., a pixel j may be important only when seen together with some other pixel i. I'm not sure this assumption is warranted; perhaps the authors can elaborate to provide further intuition.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTowards a Unified Evaluation of Explanation Methods without Ground Truth\nThis paper proposes a set of criteria to evaluate the objectiveness of explanation methods of neural networks, which is crucial for the development of explainable AI, but it also presents significant challenges. The core challenge is that people usually cannot obtain ground-truth explanations of the neural network. To this end, we design four metrics to evaluate the explanation result without ground-truth explanations. Our metrics can be broadly applied to nine benchmark methods of interpreting neural networks, which provides new insights of explanation methods.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper considers the important problem of evaluating explanation methods for neural models without recourse to `ground-truth* explanations. The authors rightly highlight that plausibility (to humans) does not imply faithfulness to the underlying models. They then define a suite of metrics, largely drawing upon known techniques, to evaluate explanations. For instance, the authors propose evaluating the agreement between model attributions and Shapley values. The latter are approximated in two ways: Only pixels with high (or low) attribution values are considered; and sampling is used to make estimation tractable. Overall, this is a nice paper that investigates an important problem --- I particularly appreciated the comparison of multiple attribution methods across these suite of metrics. However, the results are something of a mixed bag, and I was left without a clear takeaway. I am not sure the ICLR audience will get much out of this submission. It would be nice if the authors could provide additional guidance concerning the interpretation of the various metrics they propose, and for which scenarios they view these as appropriate. Strengths --- + The topic of explainability is of increasing importance, and the authors focus on an important question regarding this: How can we evaluate attribution methods? 
+ The approach of defining a suite of metrics, rather than only one, is welcome given that different applications will call for different types of attribution. + The comparison of existing methods for attribution across a range of metrics is a nice contribution. That said, the results are sort of a mixed bag and it is not obvious what one should take away from this. Weaknesses --- - Much of this work (and the proposed metrics) seems to derive from prior efforts. I*m not sure how much novelty there is here, although there is a value in bringing these metrics together. - To the latter point, however, the authors do not really speak to the pros, cons, and applicability or relevance of the different metrics considered. A user of this suite of metrics would therefore be left with four separate measures, and it is not immediately clear how to interpret these, or what the relative performance of explainability methods with respect to these metrics tells us. - I am not sure I followed the unexplainable feature component metric (3.3). I found the notation and terminology here difficult to parse; can the authors please clarify what they mean by *unexplainable feature components*, and also clarify what represent (I think induced feature representations of the image I but again am not certain). From what I gather this is somehow measuring the peakiness (or lack thereof) of attribution values, but it is not obvious to me that this is an inherently desirable property. - The robustness metric seems to make the assumption that the model does not capture interactions, i.e., a pixel j may be important only when seen together with some other pixel i. I*m not sure this assumption is warranted; perhaps the authors can elaborate to provide further intuition.", "role": "assistant" } ]
[ "This paper considers the important problem of evaluating explanation methods for neural models without recourse to `ground-truth* explanations.", "The authors rightly highlight that plausibility (to humans) does not imply faithfulness to the underlying models.", "They then define a suite of metrics, largely drawing upon known techniques, to evaluate explanations.", "For instance, the authors propose evaluating the agreement between model attributions and Shapley values.", "The latter are approximated in two ways: Only pixels with high (or low) attribution values are considered; and sampling is used to make estimation tractable.", "Overall, this is a nice paper that investigates an important problem --- I particularly appreciated the comparison of multiple attribution methods across these suite of metrics.", "However, the results are something of a mixed bag, and I was left without a clear takeaway.", "I am not sure the ICLR audience will get much out of this submission.", "It would be nice if the authors could provide additional guidance concerning the interpretation of the various metrics they propose, and for which scenarios they view these as appropriate.", "Strengths --- + The topic of explainability is of increasing importance, and the authors focus on an important question regarding this: How can we evaluate attribution methods?", "+ The approach of defining a suite of metrics, rather than only one, is welcome given that different applications will call for different types of attribution.", "+ The comparison of existing methods for attribution across a range of metrics is a nice contribution.", "That said, the results are sort of a mixed bag and it is not obvious what one should take away from this.", "Weaknesses --- - Much of this work (and the proposed metrics) seems to derive from prior efforts.", "I*m not sure how much novelty there is here, although there is a value in bringing these metrics together.", "- To the latter point, however, the authors do not really speak to the pros, cons, and applicability or relevance of the different metrics considered.", "A user of this suite of metrics would therefore be left with four separate measures, and it is not immediately clear how to interpret these, or what the relative performance of explainability methods with respect to these metrics tells us.", "- I am not sure I followed the unexplainable feature component metric (3.3).", "I found the notation and terminology here difficult to parse; can the authors please clarify what they mean by *unexplainable feature components*, and also clarify what represent (I think induced feature representations of the image I but again am not certain).", "From what I gather this is somehow measuring the peakiness (or lack thereof) of attribution values, but it is not obvious to me that this is an inherently desirable property.", "- The robustness metric seems to make the assumption that the model does not capture interactions, i.e., a pixel j may be important only when seen together with some other pixel i. I*m not sure this assumption is warranted; perhaps the authors can elaborate to provide further intuition." ]
{ "criticism": 12, "example": 0, "importance_and_relevance": 5, "materials_and_methods": 17, "praise": 4, "presentation_and_reporting": 1, "results_and_discussion": 5, "suggestion_and_solution": 4, "total": 21 }
0.571429
0
0.238095
0.809524
0.190476
0.047619
0.238095
0.190476
21
1
0
1
1
1
1
1
1
2.285714
1.275639
1.010076
Quadratic GCN for graph classification
Graph Convolutional Networks (GCN) have been extensively used to classify nodes in graphs and have been shown to outperform most other node classification methods. However, there are currently few GCN-based formalisms for graph classification tasks. In this task, graphs of different sizes (e.g. graphs representing different protein structures) belong to different classes, and one attempts to predict the graph class. We here propose a solution combining GCN and methods from knowledge graphs to produce a quadratic GCN (Q-GCN). We extend the GCN formalism by adding a quadratic layer to a standard GCN to classify full graphs. Such a layer produces an output with dimensions independent of the graph node number. This output is then passed through a softmax to classify full graphs. We applied this method to a wide range of graph classification problems and show that such a straightforward formalism outperforms state of the art methods for binary graph classification with or without external input on each graph.
The paper proposes a simple modification to existing graph NN methods for classifying the entire graph of possibly variable size. The problem is of interest. The proposed method is on the other hand fairly simple and straightforward, as shown in Fig.1 and eq(4), as a simple function of the adjacency matrix A that incorporates the entire graph structure. The results are verified on four datasets with promising results. I have a number of concerns with the paper: *The presentation: The paper seems to have come out in a rush, with many critical details missing. For example, the main method, which is the main technical contribution, is described in less than one page. Figure 1 is also not very helpful in understanding the details of the method. In the experiment section, four datasets are presented, where many of them are introduced without any info about the dataset size, graph size, etc. The comparison methods used in Table 1 are also minimal (only 1 method is compared with in each dataset), and many of the methods are a decade old. Some of the comparison methods are even without any formal reference, so there is literally no precise info about where the methods are exactly from. Overall it is very difficult for an average reader to be fully convinced of the significance and professionalism of the work. **empirical evaluation: As aforementioned, the experiments are not compared with recent work and detailed info is critically missing. **algorithmic contribution: this is also a question mark.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nQuadratic GCN for graph classification\nGraph Convolutional Networks (GCN) have been extensively used to classify nodes in graphs and have been shown to outperform most other node classification methods. However, there are currently few GCN based formalism for graph classification tasks. In this task, graphs of different sizes (e.g. graphs representing different protein structures) belong to different classes, and one attempts to predict the graph class. We here propose a solution combing GCN and methods from knowledge graphs to produce a quadratic GCN (Q-GCN). We extend the GCN formalism by adding a quadratic layer to a standard GCN to classify full graphs. Such a layer produces an output with dimensions independent of the graph node number. This output is then passed through a softmax to classify full graphs. We applied this method to a wide range of graph classification problems and show that such a straightforward formalism outperforms state of the art methods for binary graph classification with or without external input on each graph.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes a simple modification to existing graph NN methods for classifying the entire graph of possibly variable size. The problem is of interest. The proposed method is on the other hand fairly simple and straightforward, as shown in Fig.1 and eq(4), as a simple function of the adjacency matrix A that incorporate the entire graph structure. The results are verified on four datasets with promising results. I have a number of concerns with the paper: *The presentation: The paper seems to come out in a rush, with many critical details missing. For example, the main method, which is the main technical contribution, is described within less 1-page length. The figure 1 is also not very helpful in understanding the details of the method. In experiment section, four datasets are presented, where many of them are introduced without any info about the dataset size, graph size, etc. 
The comparison methods used in Table 1 are also minimal (only 1 method is compared with in each dataset), and many of the methods are decade old. Some of the comparison methods are even without any formal reference so there is literally no precise info about where the methods are exactly from. Overall it is very difficult for an average reader to be fully convinced of the significance and professionalism of the work. **empirical evaluation: As aforementioned, the experiments are not compared with the recent work and detailed info is critically missing. **algorithmic contribution: this is also a question mark.", "role": "assistant" } ]
[ "The paper proposes a simple modification to existing graph NN methods for classifying the entire graph of possibly variable size.", "The problem is of interest.", "The proposed method is on the other hand fairly simple and straightforward, as shown in Fig.1 and eq(4), as a simple function of the adjacency matrix A that incorporate the entire graph structure.", "The results are verified on four datasets with promising results.", "I have a number of concerns with the paper: *The presentation: The paper seems to come out in a rush, with many critical details missing.", "For example, the main method, which is the main technical contribution, is described within less 1-page length.", "The figure 1 is also not very helpful in understanding the details of the method.", "In experiment section, four datasets are presented, where many of them are introduced without any info about the dataset size, graph size, etc.", "The comparison methods used in Table 1 are also minimal (only 1 method is compared with in each dataset), and many of the methods are decade old.", "Some of the comparison methods are even without any formal reference so there is literally no precise info about where the methods are exactly from.", "Overall it is very difficult for an average reader to be fully convinced of the significance and professionalism of the work.", "**empirical evaluation: As aforementioned, the experiments are not compared with the recent work and detailed info is critically missing.", "**algorithmic contribution: this is also a question mark." ]
{ "criticism": 6, "example": 3, "importance_and_relevance": 3, "materials_and_methods": 9, "praise": 2, "presentation_and_reporting": 6, "results_and_discussion": 2, "suggestion_and_solution": 0, "total": 13 }
criticism: 0.461538
example: 0.230769
importance_and_relevance: 0.230769
materials_and_methods: 0.692308
praise: 0.153846
presentation_and_reporting: 0.461538
results_and_discussion: 0.153846
suggestion_and_solution: 0
total: 13
has_criticism: 1
has_example: 1
has_importance_and_relevance: 1
has_materials_and_methods: 1
has_praise: 1
has_presentation_and_reporting: 1
has_results_and_discussion: 1
has_suggestion_and_solution: 0
reward_value: 2.384615
reward_value_length_adjusted: 2.384615
length_penalty: 0
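For readers trying to interpret the numeric fields attached to each record, the following minimal Python sketch shows how they appear to relate to the criteria_count dict above: each per-category score looks like count / total, reward_value matches the sum of the eight category scores, and reward_value_length_adjusted matches reward_value minus length_penalty. This is an inference from the numbers in this dump, not an official definition; the derivation (and the helper names below) are assumptions.

```python
# Minimal sketch, assuming the column semantics inferred from this dump
# (not taken from any official dataset documentation).
criteria_count = {
    "criticism": 6, "example": 3, "importance_and_relevance": 3,
    "materials_and_methods": 9, "praise": 2, "presentation_and_reporting": 6,
    "results_and_discussion": 2, "suggestion_and_solution": 0,
}
total = 13          # number of review sentences for this record
length_penalty = 0.0

# Per-category scores: fraction of sentences tagged with each category.
scores = {k: v / total for k, v in criteria_count.items()}   # e.g. criticism -> 6/13 = 0.461538

# Raw reward: sum of the eight per-category scores.
reward_value = sum(scores.values())                           # 2.384615 for this record

# Length-adjusted reward: raw reward minus the length penalty.
reward_value_length_adjusted = reward_value - length_penalty  # 2.384615 (penalty is 0 here)

# Binary indicator columns: whether each category occurs at least once.
has_flags = {f"has_{k}": int(v > 0) for k, v in criteria_count.items()}

print(round(reward_value, 6), has_flags["has_suggestion_and_solution"])  # 2.384615 0
```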
Quadratic GCN for graph classification
Graph Convolutional Networks (GCN) have been extensively used to classify nodes in graphs and have been shown to outperform most other node classification methods. However, there are currently few GCN based formalism for graph classification tasks. In this task, graphs of different sizes (e.g. graphs representing different protein structures) belong to different classes, and one attempts to predict the graph class. We here propose a solution combing GCN and methods from knowledge graphs to produce a quadratic GCN (Q-GCN). We extend the GCN formalism by adding a quadratic layer to a standard GCN to classify full graphs. Such a layer produces an output with dimensions independent of the graph node number. This output is then passed through a softmax to classify full graphs. We applied this method to a wide range of graph classification problems and show that such a straightforward formalism outperforms state of the art methods for binary graph classification with or without external input on each graph.
Summary This paper combine GCN with KG methods (quadratic kernel) to predict the class of graph-structured data, e.g. protein structures, chemical compounds, etc. Strength The paper proposes an extension of GCN for graph classification, which is an interesting problem. Weakness The use of GCN for graph classification isn’t well motivated. GCN works well for node classification which is a lot different from graph classification and the paper failed to point out why GCN should help? Paragraph 4 of section 2 provides some previous works that also extend GCN for graph classification but still, the motivation of this work should be clearly stated in the introduction. Furthermore, what’s the intuition for using the quadratic layer? How is it better than DIFFPOOL? Regarding the experimental results, it would be better to provide more convincing evidence as well as more information about the datasets being used, e.g., size, number of classes. Also, it isn’t clear if the results in table 1 are binary or multi-class? The paper somewhat readable, but need serious revision of the flow as well as figures, result tables, etc. For example, it’s not necessary to use up entire of page 4 for figure 1 (which also doesn’t provide much information) Overall impression suggests that the paper wasn’t completed and should be carefully revised before submitting to another conference.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nQuadratic GCN for graph classification\nGraph Convolutional Networks (GCN) have been extensively used to classify nodes in graphs and have been shown to outperform most other node classification methods. However, there are currently few GCN based formalism for graph classification tasks. In this task, graphs of different sizes (e.g. graphs representing different protein structures) belong to different classes, and one attempts to predict the graph class. We here propose a solution combing GCN and methods from knowledge graphs to produce a quadratic GCN (Q-GCN). We extend the GCN formalism by adding a quadratic layer to a standard GCN to classify full graphs. Such a layer produces an output with dimensions independent of the graph node number. This output is then passed through a softmax to classify full graphs. We applied this method to a wide range of graph classification problems and show that such a straightforward formalism outperforms state of the art methods for binary graph classification with or without external input on each graph.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nSummary This paper combine GCN with KG methods (quadratic kernel) to predict the class of graph-structured data, e.g. protein structures, chemical compounds, etc. Strength The paper proposes an extension of GCN for graph classification, which is an interesting problem. Weakness The use of GCN for graph classification isn’t well motivated. GCN works well for node classification which is a lot different from graph classification and the paper failed to point out why GCN should help? Paragraph 4 of section 2 provides some previous works that also extend GCN for graph classification but still, the motivation of this work should be clearly stated in the introduction. Furthermore, what’s the intuition for using the quadratic layer? How is it better than DIFFPOOL? 
Regarding the experimental results, it would be better to provide more convincing evidence as well as more information about the datasets being used, e.g., size, number of classes. Also, it isn’t clear if the results in table 1 are binary or multi-class? The paper somewhat readable, but need serious revision of the flow as well as figures, result tables, etc. For example, it’s not necessary to use up entire of page 4 for figure 1 (which also doesn’t provide much information) Overall impression suggests that the paper wasn’t completed and should be carefully revised before submitting to another conference.", "role": "assistant" } ]
[ "Summary This paper combine GCN with KG methods (quadratic kernel) to predict the class of graph-structured data, e.g.", "protein structures, chemical compounds, etc.", "Strength The paper proposes an extension of GCN for graph classification, which is an interesting problem.", "Weakness The use of GCN for graph classification isn’t well motivated.", "GCN works well for node classification which is a lot different from graph classification and the paper failed to point out why GCN should help?", "Paragraph 4 of section 2 provides some previous works that also extend GCN for graph classification but still, the motivation of this work should be clearly stated in the introduction.", "Furthermore, what’s the intuition for using the quadratic layer?", "How is it better than DIFFPOOL?", "Regarding the experimental results, it would be better to provide more convincing evidence as well as more information about the datasets being used, e.g., size, number of classes.", "Also, it isn’t clear if the results in table 1 are binary or multi-class?", "The paper somewhat readable, but need serious revision of the flow as well as figures, result tables, etc.", "For example, it’s not necessary to use up entire of page 4 for figure 1 (which also doesn’t provide much information) Overall impression suggests that the paper wasn’t completed and should be carefully revised before submitting to another conference." ]
{ "criticism": 3, "example": 2, "importance_and_relevance": 1, "materials_and_methods": 7, "praise": 3, "presentation_and_reporting": 5, "results_and_discussion": 4, "suggestion_and_solution": 4, "total": 12 }
criticism: 0.25
example: 0.166667
importance_and_relevance: 0.083333
materials_and_methods: 0.583333
praise: 0.25
presentation_and_reporting: 0.416667
results_and_discussion: 0.333333
suggestion_and_solution: 0.333333
total: 12
has_criticism: 1
has_example: 1
has_importance_and_relevance: 1
has_materials_and_methods: 1
has_praise: 1
has_presentation_and_reporting: 1
has_results_and_discussion: 1
has_suggestion_and_solution: 1
reward_value: 2.416667
reward_value_length_adjusted: 2.400884
length_penalty: 0.015782
Input Alignment along Chaotic directions increases Stability in Recurrent Neural Networks
Anatomical studies demonstrate that brain reformats input information to generate reliable responses for performing computations. However, it remains unclear how neural circuits encode complex spatio-temporal patterns. We show that neural dynamics are strongly influenced by the phase alignment between the input and the spontaneous chaotic activity. Input alignment along the dominant chaotic projections causes the chaotic trajectories to become stable channels (or attractors), hence, improving the computational capability of a recurrent network. Using mean field analysis, we derive the impact of input alignment on the overall stability of attractors formed. Our results indicate that input alignment determines the extent of intrinsic noise suppression and hence, alters the attractor state stability, thereby controlling the network*s inference ability.
Overview: Using a reservoir of recurrent, randomly-connected spiking neurons, this paper studies how input stimuli may be modified to produce more robust responses. Specifically, this entails adjusting the phase of a sinusoidal input such that the leading principal components of the variation in the reservoir’s response match the leading principal components of its spontaneous and unstimulated chaotic dynamics. The reservoir’s ability to discriminate different stimuli, even in the presence of noise, is shown to be improved by aligning them to different principal components of its chaotic dynamics, both between stimuli and within a single stimulus in the presence of noise. The authors provide a statistical physics-driven outline of the theoretical analysis underpinning this result, and qualitatively demonstrate the effectiveness of their approach to a simple handwriting generation task. This paper addresses an important question and provides a result that is of interest, however, the overlap between this and previous related work is quite significant. In particular, this paper is heavily indebted to Rajan et al’s work in “Inferring Stimulus Selectivity from the Spatial Structure of Neural Networks” and “Stimulus-Dependent Suppression of Chaos in Recurrent Neural Networks”, two papers from nearly a decade ago from which it borrows most of its approach and theoretical analysis. The work presented here is a logical extension of these papers, and in some respects lacks originality. That being said, the demonstration that merely aligning the phase of a sinusoidal input (as opposed to identifying a resonant frequency, as in Rajan et al) is adequate to improve stability is an interesting result. The theoretical analysis is not at all new, but does provide a nice way to understand the empirical results, and in its presentation here is more understandable to non-statistical physicists than in the original formulation. Specific comments: - “Anatomical studies demonstrate that brain”, “Brain actively…” -> most instances of “brain” should be changed to “the brain” - Given that this is a standard liquid state machine, it should be explicitly called that and some citations to the relevant literature (e.g. Maass, , *Real-time computing without stable states: a new framework for neural computation based on perturbations*) should also be provided. - It’s difficult to understand why the effect of changing input phase changes (granted, somewhat unpredictably) the orientation of the resulting circular oscillation. Providing some intuition would be helpful. In particular, why is the relationship between phase and subspace orientation in the left plot of Fig/ 1 (c) symmetric about 180°? - Reservoir model schematic from fig. 2 should be moved to fig. 1 - Fig. 1 (a), thetas appear to represent the normal vectors of the first two PCs of activity, which should be made explicit. - Equation (2): Notation is unclear. Basis vectors should either be arranged into matrices, or “SingularValueOf” should be changed to an inner product - Visualization of the chaotic activity in 3D PC space would be helpful, consider moving it out of the appendix. - Fig. 2 (d), dashed lines should be noted as direction vectors in caption
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nInput Alignment along Chaotic directions increases Stability in Recurrent Neural Networks\nAnatomical studies demonstrate that brain reformats input information to generate reliable responses for performing computations. However, it remains unclear how neural circuits encode complex spatio-temporal patterns. We show that neural dynamics are strongly influenced by the phase alignment between the input and the spontaneous chaotic activity. Input alignment along the dominant chaotic projections causes the chaotic trajectories to become stable channels (or attractors), hence, improving the computational capability of a recurrent network. Using mean field analysis, we derive the impact of input alignment on the overall stability of attractors formed. Our results indicate that input alignment determines the extent of intrinsic noise suppression and hence, alters the attractor state stability, thereby controlling the network*s inference ability.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nOverview: Using a reservoir of recurrent, randomly-connected spiking neurons, this paper studies how input stimuli may be modified to produce more robust responses. Specifically, this entails adjusting the phase of a sinusoidal input such that the leading principal components of the variation in the reservoir’s response match the leading principal components of its spontaneous and unstimulated chaotic dynamics. The reservoir’s ability to discriminate different stimuli, even in the presence of noise, is shown to be improved by aligning them to different principal components of its chaotic dynamics, both between stimuli and within a single stimulus in the presence of noise. The authors provide a statistical physics-driven outline of the theoretical analysis underpinning this result, and qualitatively demonstrate the effectiveness of their approach to a simple handwriting generation task. 
This paper addresses an important question and provides a result that is of interest, however, the overlap between this and previous related work is quite significant. In particular, this paper is heavily indebted to Rajan et al’s work in “Inferring Stimulus Selectivity from the Spatial Structure of Neural Networks” and “Stimulus-Dependent Suppression of Chaos in Recurrent Neural Networks”, two papers from nearly a decade ago from which it borrows most of its approach and theoretical analysis. The work presented here is a logical extension of these papers, and in some respects lacks originality. That being said, the demonstration that merely aligning the phase of a sinusoidal input (as opposed to identifying a resonant frequency, as in Rajan et al) is adequate to improve stability is an interesting result. The theoretical analysis is not at all new, but does provide a nice way to understand the empirical results, and in its presentation here is more understandable to non-statistical physicists than in the original formulation. Specific comments: - “Anatomical studies demonstrate that brain”, “Brain actively…” -> most instances of “brain” should be changed to “the brain” - Given that this is a standard liquid state machine, it should be explicitly called that and some citations to the relevant literature (e.g. Maass, , *Real-time computing without stable states: a new framework for neural computation based on perturbations*) should also be provided. - It’s difficult to understand why the effect of changing input phase changes (granted, somewhat unpredictably) the orientation of the resulting circular oscillation. Providing some intuition would be helpful. In particular, why is the relationship between phase and subspace orientation in the left plot of Fig/ 1 (c) symmetric about 180°? - Reservoir model schematic from fig. 2 should be moved to fig. 1 - Fig. 1 (a), thetas appear to represent the normal vectors of the first two PCs of activity, which should be made explicit. - Equation (2): Notation is unclear. Basis vectors should either be arranged into matrices, or “SingularValueOf” should be changed to an inner product - Visualization of the chaotic activity in 3D PC space would be helpful, consider moving it out of the appendix. - Fig. 2 (d), dashed lines should be noted as direction vectors in caption", "role": "assistant" } ]
[ "Overview: Using a reservoir of recurrent, randomly-connected spiking neurons, this paper studies how input stimuli may be modified to produce more robust responses.", "Specifically, this entails adjusting the phase of a sinusoidal input such that the leading principal components of the variation in the reservoir’s response match the leading principal components of its spontaneous and unstimulated chaotic dynamics.", "The reservoir’s ability to discriminate different stimuli, even in the presence of noise, is shown to be improved by aligning them to different principal components of its chaotic dynamics, both between stimuli and within a single stimulus in the presence of noise.", "The authors provide a statistical physics-driven outline of the theoretical analysis underpinning this result, and qualitatively demonstrate the effectiveness of their approach to a simple handwriting generation task.", "This paper addresses an important question and provides a result that is of interest, however, the overlap between this and previous related work is quite significant.", "In particular, this paper is heavily indebted to Rajan et al’s work in “Inferring Stimulus Selectivity from the Spatial Structure of Neural Networks” and “Stimulus-Dependent Suppression of Chaos in Recurrent Neural Networks”, two papers from nearly a decade ago from which it borrows most of its approach and theoretical analysis.", "The work presented here is a logical extension of these papers, and in some respects lacks originality.", "That being said, the demonstration that merely aligning the phase of a sinusoidal input (as opposed to identifying a resonant frequency, as in Rajan et al) is adequate to improve stability is an interesting result.", "The theoretical analysis is not at all new, but does provide a nice way to understand the empirical results, and in its presentation here is more understandable to non-statistical physicists than in the original formulation.", "Specific comments: - “Anatomical studies demonstrate that brain”, “Brain actively…” -> most instances of “brain” should be changed to “the brain” - Given that this is a standard liquid state machine, it should be explicitly called that and some citations to the relevant literature (e.g.", "Maass, , *Real-time computing without stable states: a new framework for neural computation based on perturbations*) should also be provided.", "- It’s difficult to understand why the effect of changing input phase changes (granted, somewhat unpredictably) the orientation of the resulting circular oscillation.", "Providing some intuition would be helpful.", "In particular, why is the relationship between phase and subspace orientation in the left plot of Fig/ 1 (c) symmetric about 180°?", "- Reservoir model schematic from fig.", "2 should be moved to fig.", "1 - Fig.", "1 (a), thetas appear to represent the normal vectors of the first two PCs of activity, which should be made explicit.", "- Equation (2): Notation is unclear.", "Basis vectors should either be arranged into matrices, or “SingularValueOf” should be changed to an inner product - Visualization of the chaotic activity in 3D PC space would be helpful, consider moving it out of the appendix.", "- Fig.", "2 (d), dashed lines should be noted as direction vectors in caption" ]
{ "criticism": 4, "example": 5, "importance_and_relevance": 5, "materials_and_methods": 9, "praise": 3, "presentation_and_reporting": 9, "results_and_discussion": 7, "suggestion_and_solution": 8, "total": 22 }
criticism: 0.181818
example: 0.227273
importance_and_relevance: 0.227273
materials_and_methods: 0.409091
praise: 0.136364
presentation_and_reporting: 0.409091
results_and_discussion: 0.318182
suggestion_and_solution: 0.363636
total: 22
has_criticism: 1
has_example: 1
has_importance_and_relevance: 1
has_materials_and_methods: 1
has_praise: 1
has_presentation_and_reporting: 1
has_results_and_discussion: 1
has_suggestion_and_solution: 1
reward_value: 2.272727
reward_value_length_adjusted: 0.99435
length_penalty: 1.278377
A novel text representation which enables image classifiers to perform text classification
We introduce a novel method for converting text data into abstract image representations, which allows image-based processing techniques (e.g. image classification networks) to be applied to text-based comparison problems. We apply the technique to entity disambiguation of inventor names in US patents. The method involves converting text from each pairwise comparison between two inventor name records into a 2D RGB (stacked) image representation. We then train an image classification neural network to discriminate between such pairwise comparison images, and use the trained network to label each pair of records as either matched (same inventor) or non-matched (different inventors), obtaining highly accurate results (F1: 99.09%, precision: 99.41%, recall: 98.76%). Our new text-to-image representation method could potentially be used more broadly for other NLP comparison problems, such as disambiguation of academic publications, or for problems that require simultaneous classification of both text and images.
This paper proposes a method to map a pair of textual information into a 2D RGB image. They achieve this by heuristically creating a character map in a 2D matrix and then add color according to the character within the text (e.g. a line connecting bi-gram characters). To compare 2 textual pieces, different colors are used. This novel “text representation” can be fed to 2D convolutional neural networks (image classifiers). The proposed representation is evaluated on a patent inventor disambiguation task, and shown to slightly our perform existing methods. I would not recommend this paper because the method in this paper is not well motivated and the empirical results do not demonstrate the usefulness of the proposed method. In addition, the quality of writing can be improved in several places. The goal to capture string similarity (character overlaps) is well motivated for the name disambiguation task, but it is unfounded on why a 2D mapping would provide additional benefits from the existing features. The paper could be improved by discussing the strength and weaknesses between the existing features and the proposed representation. In the experiment, it is shown that a simple string similarity feature and a random forest (Kim et al., 2016) already achieve 99% in the F1 score. A small increment of 0.09 in the F1 score cannot justify the proposed algorithm. I think the author should select other tasks altogether. Finally, the fact that any random character map gives the same result is quite counter-intuitive and requires more analysis such as ablation on coloring (connecting lines, blue color, …). As for the writing, there are several issues. First, the format of the table is incorrect. Some tables are out of the border and the captions are on the top. More importantly, the method to construct the character map is unclear and a significant part of the map is left in the appendix.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nA novel text representation which enables image classifiers to perform text classification\nWe introduce a novel method for converting text data into abstract image representations, which allows image-based processing techniques (e.g. image classification networks) to be applied to text-based comparison problems. We apply the technique to entity disambiguation of inventor names in US patents. The method involves converting text from each pairwise comparison between two inventor name records into a 2D RGB (stacked) image representation. We then train an image classification neural network to discriminate between such pairwise comparison images, and use the trained network to label each pair of records as either matched (same inventor) or non-matched (different inventors), obtaining highly accurate results (F1: 99.09%, precision: 99.41%, recall: 98.76%). Our new text-to-image representation method could potentially be used more broadly for other NLP comparison problems, such as disambiguation of academic publications, or for problems that require simultaneous classification of both text and images.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a method to map a pair of textual information into a 2D RGB image. They achieve this by heuristically creating a character map in a 2D matrix and then add color according to the character within the text (e.g. a line connecting bi-gram characters). To compare 2 textual pieces, different colors are used. This novel “text representation” can be fed to 2D convolutional neural networks (image classifiers). The proposed representation is evaluated on a patent inventor disambiguation task, and shown to slightly our perform existing methods. I would not recommend this paper because the method in this paper is not well motivated and the empirical results do not demonstrate the usefulness of the proposed method. In addition, the quality of writing can be improved in several places. 
The goal to capture string similarity (character overlaps) is well motivated for the name disambiguation task, but it is unfounded on why a 2D mapping would provide additional benefits from the existing features. The paper could be improved by discussing the strength and weaknesses between the existing features and the proposed representation. In the experiment, it is shown that a simple string similarity feature and a random forest (Kim et al., 2016) already achieve 99% in the F1 score. A small increment of 0.09 in the F1 score cannot justify the proposed algorithm. I think the author should select other tasks altogether. Finally, the fact that any random character map gives the same result is quite counter-intuitive and requires more analysis such as ablation on coloring (connecting lines, blue color, …). As for the writing, there are several issues. First, the format of the table is incorrect. Some tables are out of the border and the captions are on the top. More importantly, the method to construct the character map is unclear and a significant part of the map is left in the appendix.", "role": "assistant" } ]
[ "This paper proposes a method to map a pair of textual information into a 2D RGB image.", "They achieve this by heuristically creating a character map in a 2D matrix and then add color according to the character within the text (e.g.", "a line connecting bi-gram characters).", "To compare 2 textual pieces, different colors are used.", "This novel “text representation” can be fed to 2D convolutional neural networks (image classifiers).", "The proposed representation is evaluated on a patent inventor disambiguation task, and shown to slightly our perform existing methods.", "I would not recommend this paper because the method in this paper is not well motivated and the empirical results do not demonstrate the usefulness of the proposed method.", "In addition, the quality of writing can be improved in several places.", "The goal to capture string similarity (character overlaps) is well motivated for the name disambiguation task, but it is unfounded on why a 2D mapping would provide additional benefits from the existing features.", "The paper could be improved by discussing the strength and weaknesses between the existing features and the proposed representation.", "In the experiment, it is shown that a simple string similarity feature and a random forest (Kim et al., 2016) already achieve 99% in the F1 score.", "A small increment of 0.09 in the F1 score cannot justify the proposed algorithm.", "I think the author should select other tasks altogether.", "Finally, the fact that any random character map gives the same result is quite counter-intuitive and requires more analysis such as ablation on coloring (connecting lines, blue color, …).", "As for the writing, there are several issues.", "First, the format of the table is incorrect.", "Some tables are out of the border and the captions are on the top.", "More importantly, the method to construct the character map is unclear and a significant part of the map is left in the appendix." ]
{ "criticism": 7, "example": 1, "importance_and_relevance": 1, "materials_and_methods": 9, "praise": 1, "presentation_and_reporting": 12, "results_and_discussion": 4, "suggestion_and_solution": 4, "total": 18 }
criticism: 0.388889
example: 0.055556
importance_and_relevance: 0.055556
materials_and_methods: 0.5
praise: 0.055556
presentation_and_reporting: 0.666667
results_and_discussion: 0.222222
suggestion_and_solution: 0.222222
total: 18
has_criticism: 1
has_example: 1
has_importance_and_relevance: 1
has_materials_and_methods: 1
has_praise: 1
has_presentation_and_reporting: 1
has_results_and_discussion: 1
has_suggestion_and_solution: 1
reward_value: 2.166667
reward_value_length_adjusted: 1.772106
length_penalty: 0.394561
Imbalanced Classification via Adversarial Minority Over-sampling
In most real-world scenarios, training datasets are highly class-imbalanced, where deep neural networks suffer from generalizing to a balanced testing criterion. In this paper, we explore a novel yet simple way to alleviate this issue via synthesizing less-frequent classes with adversarial examples of other classes. Surprisingly, we found this counter-intuitive method can effectively learn generalizable features of minority classes by transferring and leveraging the diversity of the majority information. Our experimental results on various types of class-imbalanced datasets in image classification and natural language processing show that the proposed method not only improves the generalization of minority classes significantly compared to other re-sampling or re-weighting methods, but also surpasses other methods of state-of-art level for the class-imbalanced classification.
The authors propose to tackle imbalance classification using re-sampling methods. The idea is to generate adversarial examples of a model trained on the imbalanced dataset, in the least frequent classes. To the best of my understanding the rational behind this is that models usually overfit the least frequent class. Hence adversarial examples in the minority class would help to train a new model that generalize better. Q1. If a model is trained to perform well on the class-imbalanced dataset and is robust to adversarial attacks / does not overfit the minority classes, what would happen? My decision is weak reject for the following reasons. While the idea seems appealing, intuitive ideas on why such a methodology should work. Moreover rather than trying to give more understanding on why such a methodology works, the experiments only illustrate that the propose algorithm outperform the state of the art. While such positive results are always welcome, it is also useful to propose experiments to gain a better understanding on the performances/limits of an algorithm. Also, the proposed algorithm depends on some hyper-parameters that are never empirically nor theoretically studied. Suggestions ----------------- while it is nice to report standard deviations, the authors could increase the number of runs to make the differences more significant and eventually provide a statistical test.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nImbalanced Classification via Adversarial Minority Over-sampling\nIn most real-world scenarios, training datasets are highly class-imbalanced, where deep neural networks suffer from generalizing to a balanced testing criterion. In this paper, we explore a novel yet simple way to alleviate this issue via synthesizing less-frequent classes with adversarial examples of other classes. Surprisingly, we found this counter-intuitive method can effectively learn generalizable features of minority classes by transferring and leveraging the diversity of the majority information. Our experimental results on various types of class-imbalanced datasets in image classification and natural language processing show that the proposed method not only improves the generalization of minority classes significantly compared to other re-sampling or re-weighting methods, but also surpasses other methods of state-of-art level for the class-imbalanced classification.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors propose to tackle imbalance classification using re-sampling methods. The idea is to generate adversarial examples of a model trained on the imbalanced dataset, in the least frequent classes. To the best of my understanding the rational behind this is that models usually overfit the least frequent class. Hence adversarial examples in the minority class would help to train a new model that generalize better. Q1. If a model is trained to perform well on the class-imbalanced dataset and is robust to adversarial attacks / does not overfit the minority classes, what would happen? My decision is weak reject for the following reasons. While the idea seems appealing, intuitive ideas on why such a methodology should work. Moreover rather than trying to give more understanding on why such a methodology works, the experiments only illustrate that the propose algorithm outperform the state of the art. 
While such positive results are always welcome, it is also useful to propose experiments to gain a better understanding on the performances/limits of an algorithm. Also, the proposed algorithm depends on some hyper-parameters that are never empirically nor theoretically studied. Suggestions ----------------- while it is nice to report standard deviations, the authors could increase the number of runs to make the differences more significant and eventually provide a statistical test.", "role": "assistant" } ]
[ "The authors propose to tackle imbalance classification using re-sampling methods.", "The idea is to generate adversarial examples of a model trained on the imbalanced dataset, in the least frequent classes.", "To the best of my understanding the rational behind this is that models usually overfit the least frequent class.", "Hence adversarial examples in the minority class would help to train a new model that generalize better.", "Q1.", "If a model is trained to perform well on the class-imbalanced dataset and is robust to adversarial attacks / does not overfit the minority classes, what would happen?", "My decision is weak reject for the following reasons.", "While the idea seems appealing, intuitive ideas on why such a methodology should work.", "Moreover rather than trying to give more understanding on why such a methodology works, the experiments only illustrate that the propose algorithm outperform the state of the art.", "While such positive results are always welcome, it is also useful to propose experiments to gain a better understanding on the performances/limits of an algorithm.", "Also, the proposed algorithm depends on some hyper-parameters that are never empirically nor theoretically studied.", "Suggestions ----------------- while it is nice to report standard deviations, the authors could increase the number of runs to make the differences more significant and eventually provide a statistical test." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 4, "materials_and_methods": 10, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 4, "suggestion_and_solution": 3, "total": 12 }
criticism: 0.166667
example: 0
importance_and_relevance: 0.333333
materials_and_methods: 0.833333
praise: 0.166667
presentation_and_reporting: 0.083333
results_and_discussion: 0.333333
suggestion_and_solution: 0.25
total: 12
has_criticism: 1
has_example: 0
has_importance_and_relevance: 1
has_materials_and_methods: 1
has_praise: 1
has_presentation_and_reporting: 1
has_results_and_discussion: 1
has_suggestion_and_solution: 1
reward_value: 2.166667
reward_value_length_adjusted: 2.150884
length_penalty: 0.015782
AdamT: A Stochastic Optimization with Trend Correction Scheme
Adam-typed optimizers, as a class of adaptive moment estimation methods with the exponential moving average scheme, have been successfully used in many applications of deep learning. Such methods are appealing for capability on large-scale sparse datasets. On top of that, they are computationally efficient and insensitive to the hyper-parameter settings. In this paper, we present a new framework for adapting Adam-typed methods, namely AdamT. Instead of applying a simple exponential weighted average, AdamT also includes the trend information when updating the parameters with the adaptive step size and gradients. The newly added term is expected to efficiently capture the non-horizontal moving patterns on the cost surface, and thus converge more rapidly. We show empirically the importance of the trend component, where AdamT outperforms the conventional Adam method constantly in both convex and non-convex settings.
The experimental results of this paper are not convincing. i) One should use much more powerful networks. ii) Different variants of Adam should be considered, also with different learning rate schedules. iii) One should study how newly introduced hyperparameters affect the results of AdamT. I find it hard to trust your conclusions like *Empirical results demonstrate that our method, AdamT, works well in practice and constantly beats the baseline method Adam* after inspecting Figure 3. Even much greater difference is performance can be due to a small change of hyperparameters, here you have several new hyperparameters.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAdamT: A Stochastic Optimization with Trend Correction Scheme\nAdam-typed optimizers, as a class of adaptive moment estimation methods with the exponential moving average scheme, have been successfully used in many applications of deep learning. Such methods are appealing for capability on large-scale sparse datasets. On top of that, they are computationally efficient and insensitive to the hyper-parameter settings. In this paper, we present a new framework for adapting Adam-typed methods, namely AdamT. Instead of applying a simple exponential weighted average, AdamT also includes the trend information when updating the parameters with the adaptive step size and gradients. The newly added term is expected to efficiently capture the non-horizontal moving patterns on the cost surface, and thus converge more rapidly. We show empirically the importance of the trend component, where AdamT outperforms the conventional Adam method constantly in both convex and non-convex settings.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe experimental results of this paper are not convincing. i) One should use much more powerful networks. ii) Different variants of Adam should be considered, also with different learning rate schedules. iii) One should study how newly introduced hyperparameters affect the results of AdamT. I find it hard to trust your conclusions like *Empirical results demonstrate that our method, AdamT, works well in practice and constantly beats the baseline method Adam* after inspecting Figure 3. Even much greater difference is performance can be due to a small change of hyperparameters, here you have several new hyperparameters.", "role": "assistant" } ]
[ "The experimental results of this paper are not convincing.", "i) One should use much more powerful networks.", "ii) Different variants of Adam should be considered, also with different learning rate schedules.", "iii) One should study how newly introduced hyperparameters affect the results of AdamT.", "I find it hard to trust your conclusions like *Empirical results demonstrate that our method, AdamT, works well in practice and constantly beats the baseline method Adam* after inspecting Figure 3.", "Even much greater difference is performance can be due to a small change of hyperparameters, here you have several new hyperparameters." ]
{ "criticism": 2, "example": 1, "importance_and_relevance": 0, "materials_and_methods": 6, "praise": 1, "presentation_and_reporting": 0, "results_and_discussion": 3, "suggestion_and_solution": 3, "total": 6 }
criticism: 0.333333
example: 0.166667
importance_and_relevance: 0
materials_and_methods: 1
praise: 0.166667
presentation_and_reporting: 0
results_and_discussion: 0.5
suggestion_and_solution: 0.5
total: 6
has_criticism: 1
has_example: 1
has_importance_and_relevance: 0
has_materials_and_methods: 1
has_praise: 1
has_presentation_and_reporting: 0
has_results_and_discussion: 1
has_suggestion_and_solution: 1
reward_value: 2.666667
reward_value_length_adjusted: 1.893328
length_penalty: 0.773339
Tree-structured Attention Module for Image Classification
Recent studies in attention modules have enabled higher performance in computer vision tasks by capturing global contexts and accordingly attending important features. In this paper, we propose a simple and highly parametrically efficient module named Tree-structured Attention Module (TAM) which recursively encourages neighboring channels to collaborate in order to produce a spatial attention map as an output. Unlike other attention modules which try to capture long-range dependencies at each channel, our module focuses on imposing non-linearities be- tween channels by utilizing point-wise group convolution. This module not only strengthens representational power of a model but also acts as a gate which controls signal flow. Our module allows a model to achieve higher performance in a highly parameter-efficient manner. We empirically validate the effectiveness of our module with extensive experiments on CIFAR-10/100 and SVHN datasets. With our proposed attention module employed, ResNet50 and ResNet101 models gain 2.3% and 1.2% accuracy improvement with less than 1.5% parameter over- head. Our PyTorch implementation code is publicly available.
The authors propose a simple and highly parametrically efficient module named TAM, recursively encouraging neighbouring channels to collaborate in order to produce a spatial attention map as an output. The paper offers a good read and diagrams are informative. Below are comments for improvement and clarification. - The authors claim the effectiveness of capturing inter-channel relations based on performance but it would be great if they can illustrate inter-channel relations. - Also, since the authors mentioned other works of attention modules (NLNet, GCNet, BAM, CBAM), it would be good to compare with them (although they compared with CBAM) - In Figure 6, on ResNet-50, TAM shows faster convergence and lower Top-1 Error but on ResNext-50, it is hard to see the difference. Any comment on this? - It would be interesting to see any segmentation task and performance comparison by imposing nonlinearities between neighboring channels and considering inter-channel relations.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTree-structured Attention Module for Image Classification\nRecent studies in attention modules have enabled higher performance in computer vision tasks by capturing global contexts and accordingly attending important features. In this paper, we propose a simple and highly parametrically efficient module named Tree-structured Attention Module (TAM) which recursively encourages neighboring channels to collaborate in order to produce a spatial attention map as an output. Unlike other attention modules which try to capture long-range dependencies at each channel, our module focuses on imposing non-linearities be- tween channels by utilizing point-wise group convolution. This module not only strengthens representational power of a model but also acts as a gate which controls signal flow. Our module allows a model to achieve higher performance in a highly parameter-efficient manner. We empirically validate the effectiveness of our module with extensive experiments on CIFAR-10/100 and SVHN datasets. With our proposed attention module employed, ResNet50 and ResNet101 models gain 2.3% and 1.2% accuracy improvement with less than 1.5% parameter over- head. Our PyTorch implementation code is publicly available.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors propose a simple and highly parametrically efficient module named TAM, recursively encouraging neighbouring channels to collaborate in order to produce a spatial attention map as an output. The paper offers a good read and diagrams are informative. Below are comments for improvement and clarification. - The authors claim the effectiveness of capturing inter-channel relations based on performance but it would be great if they can illustrate inter-channel relations. 
- Also, since the authors mentioned other works of attention modules (NLNet, GCNet, BAM, CBAM), it would be good to compare with them (although they compared with CBAM) - In Figure 6, on ResNet-50, TAM shows faster convergence and lower Top-1 Error but on ResNext-50, it is hard to see the difference. Any comment on this? - It would be interesting to see any segmentation task and performance comparison by imposing nonlinearities between neighboring channels and considering inter-channel relations.", "role": "assistant" } ]
[ "The authors propose a simple and highly parametrically efficient module named TAM, recursively encouraging neighbouring channels to collaborate in order to produce a spatial attention map as an output.", "The paper offers a good read and diagrams are informative.", "Below are comments for improvement and clarification.", "- The authors claim the effectiveness of capturing inter-channel relations based on performance but it would be great if they can illustrate inter-channel relations.", "- Also, since the authors mentioned other works of attention modules (NLNet, GCNet, BAM, CBAM), it would be good to compare with them (although they compared with CBAM) - In Figure 6, on ResNet-50, TAM shows faster convergence and lower Top-1 Error but on ResNext-50, it is hard to see the difference.", "Any comment on this?", "- It would be interesting to see any segmentation task and performance comparison by imposing nonlinearities between neighboring channels and considering inter-channel relations." ]
{ "criticism": 1, "example": 1, "importance_and_relevance": 0, "materials_and_methods": 2, "praise": 1, "presentation_and_reporting": 2, "results_and_discussion": 4, "suggestion_and_solution": 4, "total": 7 }
0.142857
0.142857
0
0.285714
0.142857
0.285714
0.571429
0.571429
7
1
1
0
1
1
1
1
1
2.142857
1.57469
0.568167
Dynamically Balanced Value Estimates for Actor-Critic Methods
Reinforcement learning in an actor-critic setting relies on accurate value estimates of the critic. However, the combination of function approximation, temporal difference (TD) learning and off-policy training can lead to an overestimating value function. A solution is to use Clipped Double Q-learning (CDQ), which is used in the TD3 algorithm and computes the minimum of two critics in the TD-target. We show that CDQ induces an underestimation bias and propose a new algorithm that accounts for this by using a weighted average of the target from CDQ and the target coming from a single critic. The weighting parameter is adjusted during training such that the value estimates match the actual discounted return on the most recent episodes and by that it balances over- and underestimation. Empirically, we obtain more accurate value estimates and demonstrate state of the art results on several OpenAI gym tasks.
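A rough sketch of the balanced TD target described above follows. The adaptation rule for the weighting parameter below is a simple heuristic written for illustration; the exact scheme used in the paper may differ, and the function names are placeholders.

```python
# Illustrative sketch: mix the clipped-double-Q target with a single-critic target.
import torch


def balanced_td_target(r, not_done, q1_next, q2_next, beta, gamma=0.99):
    """beta = 1.0 recovers Clipped Double Q-learning, beta = 0.0 a single critic."""
    clipped = torch.min(q1_next, q2_next)        # pessimistic CDQ target
    single = q1_next                             # optimistic single-critic target
    mixed_next = beta * clipped + (1.0 - beta) * single
    return r + gamma * not_done * mixed_next


def adjust_beta(beta, predicted_values, observed_returns, step=0.01):
    """Nudge beta so value estimates track observed discounted returns:
    overestimation -> rely more on the clipped target, and vice versa."""
    bias = (predicted_values - observed_returns).mean()
    beta = beta + step if bias > 0 else beta - step
    return float(min(max(beta, 0.0), 1.0))


if __name__ == "__main__":
    r, nd = torch.zeros(4), torch.ones(4)
    q1 = torch.tensor([1.0, 2.0, 3.0, 4.0])
    q2 = torch.tensor([0.5, 2.5, 2.0, 5.0])
    y = balanced_td_target(r, nd, q1, q2, beta=0.7)
    print(y, adjust_beta(0.7, q1, torch.tensor([0.8, 1.5, 2.0, 3.0])))
```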
This paper proposes the balanced clipped double estimator, which uses a weighted average of the target from CDQ (leading to underestimations) and the target from a single critic (causing overestimations) to better estimate the Q value. The weight is adjusted by a simple heuristic during training. Experimental results show that the BTD3 algorithm outperforms TD3, SAC, and DDPG on six MuJoCo environments. How to get accurate function approximations is a very important problem. The idea of using a weight to balance the target value from CDQ and the target value from the single critic is well motivated and interesting. Figure 1 gives a good motivating example to show how BCDQ reduces the estimation error of the Q value. I appreciate that the authors plot the detail of beta during training in Figure 2. The experimental results are somewhat weak, as there are no results on Swimmer and HumanoidStandup. And I do not think BTD3 significantly outperforms the other baselines on Ant, Walker2d, and Reacher. The paper would be more convincing with the result of BTD3 with fixed beta in Figure 1. Questions: 1. Are there any statistics of the estimation errors of TD3, DDPG, and BTD3 in all environments?
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDynamically Balanced Value Estimates for Actor-Critic Methods\nReinforcement learning in an actor-critic setting relies on accurate value estimates of the critic. However, the combination of function approximation, temporal difference (TD) learning and off-policy training can lead to an overestimating value function. A solution is to use Clipped Double Q-learning (CDQ), which is used in the TD3 algorithm and computes the minimum of two critics in the TD-target. We show that CDQ induces an underestimation bias and propose a new algorithm that accounts for this by using a weighted average of the target from CDQ and the target coming from a single critic. The weighting parameter is adjusted during training such that the value estimates match the actual discounted return on the most recent episodes and by that it balances over- and underestimation. Empirically, we obtain more accurate value estimates and demonstrate state of the art results on several OpenAI gym tasks.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes the balanced clipped double estimator that uses a weighted averaged of the target from CDQ (leading to underestimations) and the target from a single critic (causing overestimations), to better estimate the Q value. The weight is adjusted by a simple heuristic during the training. Experimental results show that the BTD3 algorithm outperforms TD3, SAC, DDPG on six MuJoCo environments. How to get accurate function approximations is a very important problem. The idea that using a weight to balance the target value from CDQ and the target value from the single critic is welled motivated and interesting. Figure 1 gives a good motivation example to show how BCDQ reduces the estimator error of Q value. I appreciate that the authors plot the detail of beta in the training in Figure 2. The experimental results are somewhat weak as there are no results on Swimmer and HumanoidStandup. And I do not think BTD3 significantly outperforms other baselines in Ant, Walker2d, and Reacher. 
The paper would be more convincing with the result of BTD3 with fixed beta in Figure 1. Questions: 1. Are there any statistics of the estimation errors of TD3, DDPG, and BTD3 in all environments?", "role": "assistant" } ]
[ "This paper proposes the balanced clipped double estimator that uses a weighted averaged of the target from CDQ (leading to underestimations) and the target from a single critic (causing overestimations), to better estimate the Q value.", "The weight is adjusted by a simple heuristic during the training.", "Experimental results show that the BTD3 algorithm outperforms TD3, SAC, DDPG on six MuJoCo environments.", "How to get accurate function approximations is a very important problem.", "The idea that using a weight to balance the target value from CDQ and the target value from the single critic is welled motivated and interesting.", "Figure 1 gives a good motivation example to show how BCDQ reduces the estimator error of Q value.", "I appreciate that the authors plot the detail of beta in the training in Figure 2.", "The experimental results are somewhat weak as there are no results on Swimmer and HumanoidStandup.", "And I do not think BTD3 significantly outperforms other baselines in Ant, Walker2d, and Reacher.", "The paper would be more convincing with the result of BTD3 with fixed beta in Figure 1.", "Questions: 1.", "Are there any statistics of the estimation errors of TD3, DDPG, and BTD3 in all environments?" ]
{ "criticism": 2, "example": 1, "importance_and_relevance": 2, "materials_and_methods": 7, "praise": 3, "presentation_and_reporting": 2, "results_and_discussion": 7, "suggestion_and_solution": 3, "total": 12 }
0.166667
0.083333
0.166667
0.583333
0.25
0.166667
0.583333
0.25
12
1
1
1
1
1
1
1
1
2.25
2.234218
0.015782
Anomalous Pattern Detection in Activations and Reconstruction Error of Autoencoders
In real-world machine learning applications, large outliers and pervasive noise are commonplace, and access to clean training data as required by standard deep autoencoders is unlikely. Reliably detecting anomalies in a given set of images is a task of high practical relevance for visual quality inspection, surveillance, or medical image analysis. Autoencoder neural networks learn to reconstruct normal images, and hence can classify those images as anomalous if the reconstruction error exceeds some threshold. In this paper, we propose an unsupervised method based on subset scanning over autoencoder activations. The contributions of our work are threefold. First, we propose a novel method combining detection with reconstruction error and subset scanning scores to improve the anomaly score of current autoencoders without requiring any retraining. Second, we provide the ability to inspect and visualize the set of anomalous nodes in the reconstruction error space that make a sample appear noised. Third, we show that subset scanning can be used for anomaly detection in the inner layers of the autoencoder. We provide detection power results for several untargeted adversarial noise models on standard datasets.
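A simplified illustration of scoring activations in the spirit of subset scanning is sketched below: empirical p-values are computed against "background" (clean) activations, and a scan over p-value thresholds looks for the most anomalous subset. The scoring statistic here is a basic likelihood-ratio form chosen for readability; it is an assumption, not necessarily the statistic used by the authors.

```python
# Simplified subset-scanning-style score over autoencoder activations.
import numpy as np


def empirical_pvalues(background, test):
    """P(background activation >= observed activation), one p-value per node."""
    bg = np.sort(background, axis=0)                       # (n_bg, n_nodes)
    ranks = (bg[None, ...] >= test[:, None, :]).sum(axis=1)
    return (ranks + 1) / (background.shape[0] + 1)         # (n_test, n_nodes)


def subset_score(pvalues, alphas=np.linspace(0.01, 0.5, 50)):
    """For each threshold alpha, include every p-value below alpha and compare the
    observed count against the expected count alpha * N."""
    p = pvalues.ravel()
    N = p.size
    best = 0.0
    for a in alphas:
        n_a = (p <= a).sum()
        expected = a * N
        if expected < n_a < N:
            score = (n_a * np.log(n_a / expected)
                     + (N - n_a) * np.log((N - n_a) / (N - expected)))
            best = max(best, score)
    return best


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    background = rng.normal(size=(500, 64))       # stand-in for clean-image activations
    noisy = rng.normal(loc=1.0, size=(8, 64))     # shifted (anomalous) activations
    clean = rng.normal(size=(8, 64))
    print(subset_score(empirical_pvalues(background, noisy)),
          subset_score(empirical_pvalues(background, clean)))
```

In this toy run the shifted activations should receive a noticeably higher score than the clean ones, which is the behaviour the abstract relies on for detection.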
I read the paper and found the approach of treating the adversarial detection problem as anomalous pattern detection quite interesting. The authors do a good job of explaining the related work and the methods their work builds upon. The analysis provided on various layers (Table 1) is quite insightful. However, I am not convinced by the results and found them hard to compare. The datasets used are also MNIST or Fashion-MNIST and don't really capture a realistic or challenging scenario for anomaly detection.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAnomalous Pattern Detection in Activations and Reconstruction Error of Autoencoders\nIn real-world machine learning applications, large outliers and pervasive noise are commonplace, and access to clean training data as required by standard deep autoencoders is unlikely. Reliably detecting anomalies in a given set of images is a task of high practical relevance for visual quality inspection, surveillance, or medical image analysis. Autoencoder neural networks learn to reconstruct normal images, and hence can classify those images as anomalous if the reconstruction error exceeds some threshold. In this paper, we proposed an unsupervised method based on subset scanning over autoencoder activations. The contributions of our work are threefold. First, we propose a novel method combining detection with reconstruction error and subset scanning scores to improve the anomaly score of current autoencoders without requiring any retraining. Second, we provide the ability to inspect and visualize the set of anomalous nodes in the reconstruction error space that make a sample noised. Third, we show that subset scanning can be used for anomaly detection in the inner layers of the autoencoder. We provide detection power results for several untargeted adversarial noise models under standard datasets.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nI read the paper and found the approach of treating the adversarial detection problem as anomalous pattern detection quite interesting. The authors do a good job of explaining the relative work, the methods their work builds upon. The analysis provided on various layers (Table 1) is quite insightful. However, I am not convinced by the results and found them hard to compare. The datasets used are also MNIST or Fashion MNIST and doesn*t really capture the real case / challenging scenario for the anomaly detection.", "role": "assistant" } ]
[ "I read the paper and found the approach of treating the adversarial detection problem as anomalous pattern detection quite interesting.", "The authors do a good job of explaining the relative work, the methods their work builds upon.", "The analysis provided on various layers (Table 1) is quite insightful.", "However, I am not convinced by the results and found them hard to compare.", "The datasets used are also MNIST or Fashion MNIST and doesn*t really capture the real case / challenging scenario for the anomaly detection." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 4, "praise": 3, "presentation_and_reporting": 1, "results_and_discussion": 1, "suggestion_and_solution": 0, "total": 5 }
0.4
0
0.2
0.8
0.6
0.2
0.2
0
5
1
0
1
1
1
1
1
0
2.4
1.389924
1.010076
Hierarchical Complement Objective Training
Hierarchical label structures widely exist in many machine learning tasks, ranging from those with explicit label hierarchies, such as image classification, to those with latent label hierarchies, such as semantic segmentation. Unfortunately, state-of-the-art methods often utilize a cross-entropy loss, which implicitly assumes independence among class labels. Motivated by the fact that class members from the same hierarchy need to be similar to each other, we design a new training paradigm called Hierarchical Complement Objective Training (HCOT). In HCOT, in addition to maximizing the probability of the ground-truth class, we also neutralize the probabilities of the rest of the classes in a hierarchical fashion, making the model take advantage of the label hierarchy explicitly. We evaluate our method on both image classification and semantic segmentation. Results show that HCOT outperforms state-of-the-art models on CIFAR-100, ImageNet, and PASCAL-Context. Our experiments also demonstrate that HCOT can be applied to tasks with latent label hierarchies, which is a common characteristic of many machine learning tasks.
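A hedged sketch of a complement-style objective is shown below: alongside cross-entropy on the ground-truth class, the predicted mass on the remaining classes is pushed toward uniform by maximizing its entropy. How HCOT groups classes by hierarchy level and weights the two terms is not reproduced here; the function names and the weight are illustrative assumptions.

```python
# Complement-entropy term in the spirit of (H)COT; the hierarchical grouping is omitted.
import torch
import torch.nn.functional as F


def complement_entropy(logits, targets):
    """Entropy of the predicted distribution restricted to the non-ground-truth classes.
    Maximizing it flattens the probability mass placed on wrong classes."""
    probs = F.softmax(logits, dim=1)
    mask = F.one_hot(targets, logits.size(1)).bool()
    comp = probs.masked_fill(mask, 0.0)
    comp = comp / comp.sum(dim=1, keepdim=True).clamp_min(1e-12)   # renormalize
    ent = -(comp.clamp_min(1e-12).log() * comp).sum(dim=1)
    return ent.mean()


if __name__ == "__main__":
    logits = torch.randn(4, 10, requires_grad=True)
    y = torch.tensor([0, 3, 7, 9])
    # Assumed combination: cross-entropy minus a weighted complement-entropy term.
    loss = F.cross_entropy(logits, y) - 0.5 * complement_entropy(logits, y)
    loss.backward()
    print(float(loss))
```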
While the general direction of this work is nice, it is unfortunate that the authors did not compare to the many related approaches. It is unclear how the approach would be used for tree structures like WordNet and ImageNet. The novelty is unclear due to the lack of comparisons to many very related approaches.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nHierarchical Complement Objective Training\nHierarchical label structures widely exist in many machine learning tasks, ranging from those with explicit label hierarchies such as image classification to the ones that have latent label hierarchies such as semantic segmentation. Unfortunately, state-of-the-art methods often utilize cross-entropy loss which in-explicitly assumes the independence among class labels. Motivated by the fact that class members from the same hierarchy need to be similar to each others, we design a new training diagram called Hierarchical Complement Objective Training (HCOT). In HCOT, in addition to maximizing the probability of the ground truth class, we also neutralize the probabilities of rest of the classes in a hierarchical fashion, making the model take advantage of the label hierarchy explicitly. We conduct our method on both image classification and semantic segmentation. Results show that HCOT outperforms state-of-the-art models in CIFAR100, Imagenet, and PASCAL-context. Our experiments also demonstrate that HCOT can be applied on tasks with latent label hierarchies, which is a common characteristic in many machine learning tasks.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nWhile the general direction of this work is nice it is unfortunate that the authors did not compare to the many related approaches. It is unclear how the approach would be used for tree structures like wordnet and imagenet. Unclear novelty due to lack of comparisons to many very related approaches.", "role": "assistant" } ]
[ "While the general direction of this work is nice it is unfortunate that the authors did not compare to the many related approaches.", "It is unclear how the approach would be used for tree structures like wordnet and imagenet.", "Unclear novelty due to lack of comparisons to many very related approaches." ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 3, "praise": 1, "presentation_and_reporting": 1, "results_and_discussion": 0, "suggestion_and_solution": 0, "total": 3 }
1
0
0.333333
1
0.333333
0.333333
0
0
3
1
0
1
1
1
1
0
0
3
1.421757
1.578243
Bridging ELBO objective and MMD
One of the challenges in training generative models such as the variational autoencoder (VAE) is avoiding posterior collapse. When the generator has too much capacity, it is prone to ignoring the latent code. This problem is exacerbated when the dataset is small and the latent dimension is high. The root of the problem is the ELBO objective, specifically the Kullback–Leibler (KL) divergence term in the objective function. This paper proposes a new objective function to replace the KL term with one that emulates the maximum mean discrepancy (MMD) objective. It also introduces a new technique, named latent clipping, that is used to control the distance between samples in latent space. A probabilistic autoencoder model, named -VAE, is designed and trained on the MNIST and MNIST Fashion datasets using the new objective function and is shown to outperform models trained with the ELBO and -VAE objectives. The -VAE is less prone to posterior collapse and can generate reconstructions and new samples of good quality. Latent representations learned by -VAE are shown to be good and can be used for downstream tasks such as classification.
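An illustrative sketch of the two ingredients named in the abstract follows: an RBF-kernel MMD penalty between posterior and prior samples (in place of the KL term), and "latent clipping", here interpreted, as an assumption, as bounding the norm of the latent codes. The kernel bandwidth, clipping radius, and function names are placeholders.

```python
# MMD-style penalty and a norm-bounding "latent clipping" step (interpretive sketch).
import torch


def gaussian_mmd(x, y, bandwidth=1.0):
    """Biased MMD^2 estimate with an RBF kernel between two sample sets."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * bandwidth ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()


def clip_latent(z, max_norm=3.0):
    """Scale down any latent vector whose L2 norm exceeds max_norm."""
    norms = z.norm(dim=1, keepdim=True).clamp_min(1e-12)
    return z * torch.clamp(max_norm / norms, max=1.0)


if __name__ == "__main__":
    z_post = clip_latent(torch.randn(128, 8) * 2.0)   # clipped posterior samples
    z_prior = torch.randn(128, 8)                     # samples from a standard normal prior
    print(float(gaussian_mmd(z_post, z_prior)))
```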
This paper proposes to modify the ELBO loss for VAEs in order to learn better latent representations of the data. There is a large literature on this topic, and the related work section should be written with more care and detail. Some references are missing: [a] Fixing a Broken ELBO by Alemi et al.; [b] Learning Disentangled Joint Continuous and Discrete Representations by Dupont. On a general level, the motivation for this work is not completely clear. As mentioned in the related work section, one could use a more complex prior such as a mixture of Gaussians or an architecture like [b] to do clustering. At least, in the experiment section, results should be compared with these natural approaches. It should be easy, as [b] for example provided their code and gives results on the same datasets as those used here. Another remark is that the title is quite misleading. The connection with MMD is very weak in my opinion. Indeed, modifying the KL term in the loss by using MMD has already been proposed in InfoVAE (the last reference of this paper), and in InfoVAE an MMD-VAE is also introduced. This connection should be explained and a comparison should be given with this work.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nBridging ELBO objective and MMD\nOne of the challenges in training generative models such as the variational auto encoder (VAE) is avoiding posterior collapse. When the generator has too much capacity, it is prone to ignoring latent code. This problem is exacerbated when the dataset is small, and the latent dimension is high. The root of the problem is the ELBO objective, specifically the Kullback–Leibler (KL) divergence term in objective function. This paper proposes a new objective function to replace the KL term with one that emulates the maximum mean discrepancy (MMD) objective. It also introduces a new technique, named latent clipping, that is used to control distance between samples in latent space. A probabilistic autoencoder model, named -VAE, is designed and trained on MNIST and MNIST Fashion datasets, using the new objective function and is shown to outperform models trained with ELBO and -VAE objective. The -VAE is less prone to posterior collapse, and can generate reconstructions and new samples in good quality. Latent representations learned by -VAE are shown to be good and can be used for downstream tasks such as classification.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes to modify the ELBO loss for VAE in order to learn better latent representation of the data. There is a large literature on this topic and the related work section should be written with more care and details. Some references are missing: [a] Fixing a Broken ELBO by Alemi et al. [b] Learning Disentangled Joint Continuous and Discrete Representations by Dupont On a general level, the motivation for this work is not completely clear. As mentioned in the related work section, one could use a more complex prior such as a mixture of Gaussians or an architecture like [b] to do clustering. At least, in the experiment section, results shuold be compared with these natrual approaches. It should be easy as [b] for example provided his code and gives results on the same datasets as those used here. 
Another remark is that the title is quite misleading. The connection with MMD is very weak in my opinion. Indeed modifing the KL term in the loss by using MMD has already been proposed in Infovae (last reference of this paper) and in InfoVAE a MMD-VAE is also introduced. This connection should be explained and comparison should be given with this work.", "role": "assistant" } ]
[ "This paper proposes to modify the ELBO loss for VAE in order to learn better latent representation of the data.", "There is a large literature on this topic and the related work section should be written with more care and details.", "Some references are missing: [a] Fixing a Broken ELBO by Alemi et al.", "[b] Learning Disentangled Joint Continuous and Discrete Representations by Dupont On a general level, the motivation for this work is not completely clear.", "As mentioned in the related work section, one could use a more complex prior such as a mixture of Gaussians or an architecture like [b] to do clustering.", "At least, in the experiment section, results shuold be compared with these natrual approaches.", "It should be easy as [b] for example provided his code and gives results on the same datasets as those used here.", "Another remark is that the title is quite misleading.", "The connection with MMD is very weak in my opinion.", "Indeed modifing the KL term in the loss by using MMD has already been proposed in Infovae (last reference of this paper) and in InfoVAE a MMD-VAE is also introduced.", "This connection should be explained and comparison should be given with this work." ]
{ "criticism": 4, "example": 1, "importance_and_relevance": 2, "materials_and_methods": 4, "praise": 0, "presentation_and_reporting": 7, "results_and_discussion": 3, "suggestion_and_solution": 4, "total": 11 }
0.363636
0.090909
0.181818
0.363636
0
0.636364
0.272727
0.363636
11
1
1
1
1
0
1
1
1
2.272727
2.209598
0.06313
Dataset Distillation
Model distillation aims to distill the knowledge of a complex model into a simpler one. In this paper, we consider an alternative formulation called dataset distillation: we keep the model fixed and instead attempt to distill the knowledge from a large training dataset into a small one. The idea is to synthesize a small number of data points that do not need to come from the correct data distribution, but will, when given to the learning algorithm as training data, approximate the model trained on the original data. For example, we show that it is possible to compress 60,000 MNIST training images into just 10 synthetic distilled images (one per class) and achieve close to the original performance, given a fixed network initialization. We evaluate our method in various initialization settings. Experiments on multiple datasets, MNIST, CIFAR10, PASCAL-VOC, and CUB-200, demonstrate the advantage of our approach compared to alternative methods. Finally, we include a real-world application of dataset distillation to the continual learning setting: we show that storing distilled images as episodic memory of previous tasks can alleviate forgetting more effectively than real images.
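A minimal sketch of the bi-level training loop implied by the abstract is given below: one differentiable gradient step is taken on the synthetic images, the updated model is evaluated on real data, and that loss is backpropagated into the synthetic pixels. The tiny linear classifier, the random "real" batch, and all hyperparameters are placeholders standing in for a real model and dataset, not the paper's setup.

```python
# Bi-level sketch: learn synthetic images by differentiating through one SGD step.
import torch
import torch.nn.functional as F

D, C, N_SYN = 28 * 28, 10, 10                      # input dim, classes, distilled images
x_syn = torch.randn(N_SYN, D, requires_grad=True)  # learnable synthetic images
y_syn = torch.arange(C)                            # one fixed label per class
outer_opt = torch.optim.SGD([x_syn], lr=0.1)
inner_lr = 0.01

def forward(w, b, x):                              # functional linear classifier
    return x @ w + b

for outer_step in range(100):
    # Fresh random initialization each outer step (the "random init" setting).
    w = (torch.randn(D, C) * 0.01).requires_grad_(True)
    b = torch.zeros(C, requires_grad=True)

    # Inner step: one gradient step on the synthetic data, kept differentiable.
    inner_loss = F.cross_entropy(forward(w, b, x_syn), y_syn)
    gw, gb = torch.autograd.grad(inner_loss, (w, b), create_graph=True)
    w1, b1 = w - inner_lr * gw, b - inner_lr * gb

    # Outer step: loss of the updated model on a real batch, backprop into x_syn.
    x_real = torch.randn(64, D)                    # stand-in for a real data batch
    y_real = torch.randint(0, C, (64,))
    outer_loss = F.cross_entropy(forward(w1, b1, x_real), y_real)
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()
    if outer_step % 20 == 0:
        print(outer_step, float(outer_loss))
```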
This paper proposes to distill a dataset into a small set of artificial images, which can then be used to train a neural network. The proposed method of doing this is based on a meta-learning-flavored formulation where a GD step is done on the synthetic images and then the loss is computed on the real images. The procedure appears to overfit to the initialization of the classification model, and thus, to resolve this, the authors randomly resample the initialization throughout training. Experimental results compare the method to other coreset construction strategies and random sampling. Applications are also shown to continual learning. I think the general idea of this paper and the questions it poses are very interesting and in fact a rather underexplored area; however, the experimental results are quite underwhelming. The original questions posed in the paper are not really answered (can we distill a dataset?). The experiments only demonstrate that an expensive procedure that trains a model can produce synthetic images that give higher performance than a randomly selected set of images. Comments regarding experiments from Sec 4.1: The results indeed show the performance is greatly improved over using random images from the dataset; however, obtaining 45% on CIFAR-10 would hardly be considered “distilling the dataset”, as the full set of data yields performances of 90%+ on this task. I would be more interested to see how many samples are needed to distill the dataset such that it reaches reasonable performance. It is not completely clear why the authors didn’t do this kind of analysis, but it might be due to the heavy computational cost of this method, in which case it would be good to discuss that. Similarly for MNIST the performance is quite low (effective use of regularization can yield similar performance with 100 images), and kmeans++-based selection gives quite close results to the synthetic procedure for MNIST. I think it's also a bit misleading that the authors advertise the fixed-initialization performance in the abstract; this performance, in my mind, is mainly useful for the purposes of analyzing the method. I would also be interested to know if the coreset construction baselines can be improved. For example, for kmeans++, why couldn't it be done in the feature space of a trained model (since the synthetic method also trains a model)? I am not an expert in the area of coresets, but it would be good to know what the state of the art is. A potentially interesting coreset construction approach that also requires training a classifier can be found in this ICLR 2019 paper (https://arxiv.org/pdf/1812.05159.pdf). Comments regarding experiments for the continual learning application: I posted a separate comment regarding this set of experiments, which I hope the authors can address. Currently it is not 100% clear what is done. For now I will go under the assumption that the authors are ingesting the entire dataset for each new task, using this to get their distilled data. This seems like an unfair baseline to me for the GEM setup, which sees each data point only once and for a single iteration. The authors introduce an additional step which requires multiple iterations over the whole dataset to obtain the synthetic set of images. This may be more appropriate for comparison in another continual learning setting. A few other observations/suggestions: - In my understanding, the label of the synthetic data is currently fixed.
It would be interesting if this could also be adapted, or alternatively if a fixed soft label (e.g. 0.3 for one class, 0.7 for another) could be assigned. - I would be interested to know: if the target architecture were different, would the results change? --- Addendum: After writing this review the authors have responded to my concerns regarding the CL experiments and have begun to recompute some of the experiments using a more rigorous comparison, with promising results on permuted MNIST. If the CL section is clarified in the revised version (I suggest, besides more descriptions in the main text, perhaps adding an explicit algorithm block in the appendix) and the more rigorous comparison (single epoch to single epoch) is extended to CIFAR-100, I will consider increasing my score, as the method will be competitive for CL. I do still think the non-CL experiments (which make up the bulk of the paper) are somewhat underwhelming, as discussed in the review.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDataset Distillation\nModel distillation aims to distill the knowledge of a complex model into a simpler one. In this paper, we consider an alternative formulation called dataset distillation: we keep the model fixed and instead attempt to distill the knowledge from a large training dataset into a small one. The idea is to synthesize a small number of data points that do not need to come from the correct data distribution, but will, when given to the learning algorithm as training data, approximate the model trained on the original data. For example, we show that it is possible to compress 60,000 MNIST training images into just 10 synthetic distilled images (one per class) and achieve close to the original performance, given a fixed network initialization. We evaluate our method in various initialization settings. Experiments on multiple datasets, MNIST, CIFAR10, PASCAL-VOC, and CUB-200, demonstrate the ad-vantage of our approach compared to alternative methods. Finally, we include a real-world application of dataset distillation to the continual learning setting: we show that storing distilled images as episodic memory of previous tasks can alleviate forgetting more effectively than real images.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes to distill a dataset into a small set of artifical images, which can be used to then train a neural network. The proposed method of doing this is based on a meta-learning flavored formulation where a GD step is done on the synthetic images and then the loss computed on the real images. The procedure appears to overfit to the initialization of the classification model and thus to resolve this the authors randomly resample the initialization throughout training. Experimental results compare the method to other coreset construction strategies and random sampling. Applications are also shown to continual learning. 
I think the general idea of this paper and the questions it poses are very interesting and in fact a rather underexplored area however the experimental results are quite underwhelming. The original questions posed in the paper are not really answered (can we distill a dataset?). The experiments only demonstrate that an expensive procedure that trains a model can produce synthetic images that give higher performance than a randomly selected set of images. Comments regarding experiments from Sec 4.1 The results indeed show the performance is greatly improved over using random images from the dataset, however obtaining 45% on CIFAR-10 would hardly be considered “distilling the dataset” as the full set of data yields performances of 90%+ on this task. I would be more interested to see how many samples are needed to distill the dataset such that it reaches reasonably performance. It is not completely clear why the authors didn’t do this kind of analysis but it might be due to the heavy computational cost of this method, in which case it would be good to discuss that. Similarly for MNIST the performance is quite low (effective use of regularization and can yield similar performance with a 100 images), kmeans++ based selection gives quite close results to the synthetic procedure for MNIST. I think its also a bit misleading that the authors advertise the fixed initialization performance in the abstract, this performance in my mind is mainly useful for the purposes of analyzing the method. I would also be interested to know if the coreset construction baselines can be improved. For example the kmeans++, why couldnt it be done in the feature space of a trained model (since synthetic method also trains a model). I am not an expert in the area of coresets but it would be good to know what is the state-of-the-art. A potentially interesting coreset consturction approach that also requires training a classifier can be found in this ICLR 2019 paper https://arxiv.org/pdf/1812.05159.pdf Comments regarding experiments for the continual learning application: I posted a separate comment regarding this set of experiments, which I hope the authors can address. Currently it is not 100% clear what is done. For now I will go under the assumptions that the authors are ingesting the entire dataset for each new task, using this to get their distilled data. This seems like an unfair baseline to me for the GEM setup which sees each data point only once and a for a single iteration. The authors introduce an additional step which requires multiple iterations over the whole dataset to obtain the synthetic set of images. This may be more appropriate for comparison in another continual learning setting. A few other observations/suggestions - In my understanding the label of the synthetic data is currently fixed. It would be interesting if this can also be adapted or alternatively if a fixed soft label (e.g 0.3 one class 0.7) can be assigned. - I would be interested to know if the target architecuture were different would the results change? --- Addendum: After writing this review the authors have responded to my concerns regarding the CL experiments and have begun to recompute some of the experiments using a more rigorous comparison with promising results on permuted mnist. 
If the CL section is clarified in the revised version (I suggest besides more decriptions in the main text to perhaps add an explicit algorithm block in the appendix) and the more rigorous comparison (single epoch to signl epoch) extend to cifar100 I will consider increasing my score as the method will be competitive for CL. I do still think the non-CL experiments (which make up the bulk of the paper are somewhat underwhelming as discussed in the review)", "role": "assistant" } ]
[ "This paper proposes to distill a dataset into a small set of artifical images, which can be used to then train a neural network.", "The proposed method of doing this is based on a meta-learning flavored formulation where a GD step is done on the synthetic images and then the loss computed on the real images.", "The procedure appears to overfit to the initialization of the classification model and thus to resolve this the authors randomly resample the initialization throughout training.", "Experimental results compare the method to other coreset construction strategies and random sampling.", "Applications are also shown to continual learning.", "I think the general idea of this paper and the questions it poses are very interesting and in fact a rather underexplored area however the experimental results are quite underwhelming.", "The original questions posed in the paper are not really answered (can we distill a dataset?).", "The experiments only demonstrate that an expensive procedure that trains a model can produce synthetic images that give higher performance than a randomly selected set of images.", "Comments regarding experiments from Sec 4.1 The results indeed show the performance is greatly improved over using random images from the dataset, however obtaining 45% on CIFAR-10 would hardly be considered “distilling the dataset” as the full set of data yields performances of 90%+ on this task.", "I would be more interested to see how many samples are needed to distill the dataset such that it reaches reasonably performance.", "It is not completely clear why the authors didn’t do this kind of analysis but it might be due to the heavy computational cost of this method, in which case it would be good to discuss that.", "Similarly for MNIST the performance is quite low (effective use of regularization and can yield similar performance with a 100 images), kmeans++ based selection gives quite close results to the synthetic procedure for MNIST.", "I think its also a bit misleading that the authors advertise the fixed initialization performance in the abstract, this performance in my mind is mainly useful for the purposes of analyzing the method.", "I would also be interested to know if the coreset construction baselines can be improved.", "For example the kmeans++, why couldnt it be done in the feature space of a trained model (since synthetic method also trains a model).", "I am not an expert in the area of coresets but it would be good to know what is the state-of-the-art.", "A potentially interesting coreset consturction approach that also requires training a classifier can be found in this ICLR 2019 paper https://arxiv.org/pdf/1812.05159.pdf Comments regarding experiments for the continual learning application: I posted a separate comment regarding this set of experiments, which I hope the authors can address.", "Currently it is not 100% clear what is done.", "For now I will go under the assumptions that the authors are ingesting the entire dataset for each new task, using this to get their distilled data.", "This seems like an unfair baseline to me for the GEM setup which sees each data point only once and a for a single iteration.", "The authors introduce an additional step which requires multiple iterations over the whole dataset to obtain the synthetic set of images.", "This may be more appropriate for comparison in another continual learning setting.", "A few other observations/suggestions - In my understanding the label of the synthetic data is currently fixed.", "It would be 
interesting if this can also be adapted or alternatively if a fixed soft label (e.g 0.3 one class 0.7) can be assigned.", "- I would be interested to know if the target architecuture were different would the results change?", "--- Addendum: After writing this review the authors have responded to my concerns regarding the CL experiments and have begun to recompute some of the experiments using a more rigorous comparison with promising results on permuted mnist.", "If the CL section is clarified in the revised version (I suggest besides more decriptions in the main text to perhaps add an explicit algorithm block in the appendix) and the more rigorous comparison (single epoch to signl epoch) extend to cifar100 I will consider increasing my score as the method will be competitive for CL.", "I do still think the non-CL experiments (which make up the bulk of the paper are somewhat underwhelming as discussed in the review)" ]
{ "criticism": 7, "example": 2, "importance_and_relevance": 2, "materials_and_methods": 22, "praise": 3, "presentation_and_reporting": 3, "results_and_discussion": 10, "suggestion_and_solution": 11, "total": 28 }
0.25
0.071429
0.071429
0.785714
0.107143
0.107143
0.357143
0.392857
28
1
1
1
1
1
1
1
1
2.142857
-1.40819
3.551047
WEEGNET: an wavelet based Convnet for Brain-computer interfaces
Brain-computer interfaces (BCI) are systems that link the brain with machines, using brainwaves as a medium of communication and electroencephalography to explore brain activity, which is an affordable, noninvasive solution with an easy setup and good portability. However, the neural signals are noisy, non-stationary, and nonlinear, and processing them in a pattern recognition problem requires a complex pipeline of preprocessing, feature extraction, and classification algorithms that needs a priori knowledge to avoid compatibility issues and a deep understanding of the studied signals. Moreover, some techniques need huge computational power on the CPU and a large amount of RAM. Therefore, several papers have proposed using deep learning to reach state-of-the-art performance and to visualize the learned features in order to better understand the neural signals. However, convolutional neural networks (Convnets) are often not used properly, and the results are often random when we reproduced those works. Hence, we propose a combination of the discrete wavelet transform (DWT) and a Convnet that processes raw EEG data. The DWT is used to reduce the size of the data without losing useful information, and a modified version of EEGNET is used for feature extraction and classification.
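A small sketch of the proposed two-stage pipeline follows: compress raw EEG along the time axis with a discrete wavelet transform (keeping only the approximation band), then feed the result to a compact EEGNet-like CNN. The wavelet, decomposition level, layer sizes, and the class name TinyEEGNet are assumptions for illustration, not the authors' exact WEEGNET configuration.

```python
# DWT-based compression followed by a small EEGNet-style 1D CNN (illustrative only).
import numpy as np
import pywt                      # PyWavelets
import torch
import torch.nn as nn


def dwt_compress(eeg, wavelet="db4", level=2):
    """eeg: (trials, channels, samples) -> approximation coefficients at `level`."""
    coeffs = pywt.wavedec(eeg, wavelet, level=level, axis=-1)
    return coeffs[0].astype(np.float32)            # keep only the approximation band


class TinyEEGNet(nn.Module):
    def __init__(self, n_channels, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=32, padding=16),
            nn.BatchNorm1d(16), nn.ELU(), nn.AvgPool1d(4),
            nn.Conv1d(16, 16, kernel_size=16, padding=8, groups=16),  # depthwise temporal conv
            nn.BatchNorm1d(16), nn.ELU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, n_classes))

    def forward(self, x):
        return self.net(x)


if __name__ == "__main__":
    raw = np.random.randn(8, 22, 1000)             # 8 trials, 22 electrodes, 1000 samples
    compressed = torch.from_numpy(dwt_compress(raw))
    print(compressed.shape, TinyEEGNet(22, 4)(compressed).shape)
```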
The paper applies CNN architectures and wavelet decomposition to some moderate-dimensional EEG signals from a recent EEG-BCI competition, with the aim of using representational compactness to increase classification efficiency while retaining computational efficiency. The motivation is good and the positioning in the (vast) EEG-BCI literature is appropriate. However, I see neither a convincing case for computational efficiency (it is still a net requiring large training, as opposed to simpler frequency-binning / generalized SVD methods), nor a convincing case for a classification accuracy breakthrough (results are similar to many other known methods). Furthermore, the pictures showing *activation* patterns are not necessarily revealing, nor are they convincingly artifact-filtering. Overall the decision is to reject based on insufficient significance and impact of the results, although the paper is otherwise fairly well written and organized.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nWEEGNET: an wavelet based Convnet for Brain-computer interfaces\nBrain-computer interfaces (BCI) are systems that link the brain with machines using brainwaves as a medium of communication using electroencephalography to explore the brain activity which is an affordable solution, noninvasive, easy setup, and portability. However, the neural signals are noisy, non-stationary, and nonlinear where the processing of those signals in a pattern recognition problem needs a complex pipeline of preprocessing, feature extraction, and classification algorithms that need an apriori knowledge to avoid compatibility issues and a deep understanding of the studied signals. Moreover, some techniques need a huge computational power on the CPU and a huge size of RAM. Therefore, several papers proposed to use Deep Learning to get state of the art performance and visualization of the learned features to have more understanding about the neural signals. But, the convolutional neural network (Convnet) are not used properly and the results are often random when we reproduced the works. Hence, we propose a combination of the discrete wavelet transform (DWT) and a Convnet that processes raw EEG data. The DWT will be used to reduce the size of the data without losing useful information. Also, a modified version of EEGNET will be used to extract the features and classification.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper applies CNN architectures and wavelet decomposition of some moderate-dimensional EEG signals from a recent EEG-BCI competition with the aim of using representational compactness to increase classification efficiency while retaining computational efficiency. The motivation is good and the position in the (vast) EEG-BCI literature is appropriate. 
However, I see neither a convincing case for computational efficiency (it is still a net requiring large training as opposed to simpler frequency binning / generalized svd methods), nor a convincing case for a classification accuracy breakthrough (results are similar to many other known methods). Furthermore the pictures showing *activation* patterns are not necessarily revealing nor are they convincingly artifact - filtering. Overall the decision is to reject based on insufficient significance and impact of result - although the paper is otherwise fairly well written and organized.", "role": "assistant" } ]
[ "The paper applies CNN architectures and wavelet decomposition of some moderate-dimensional EEG signals from a recent EEG-BCI competition with the aim of using representational compactness to increase classification efficiency while retaining computational efficiency.", "The motivation is good and the position in the (vast) EEG-BCI literature is appropriate.", "However, I see neither a convincing case for computational efficiency (it is still a net requiring large training as opposed to simpler frequency binning / generalized svd methods), nor a convincing case for a classification accuracy breakthrough (results are similar to many other known methods).", "Furthermore the pictures showing *activation* patterns are not necessarily revealing nor are they convincingly artifact - filtering.", "Overall the decision is to reject based on insufficient significance and impact of result - although the paper is otherwise fairly well written and organized." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 3, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 2, "suggestion_and_solution": 1, "total": 5 }
0.4
0
0
0.6
0.4
0.2
0.4
0.2
5
1
0
0
1
1
1
1
1
2.2
1.189924
1.010076
WEEGNET: an wavelet based Convnet for Brain-computer interfaces
Brain-computer interfaces (BCI) are systems that link the brain with machines using brainwaves as a medium of communication using electroencephalography to explore the brain activity which is an affordable solution, noninvasive, easy setup, and portability. However, the neural signals are noisy, non-stationary, and nonlinear where the processing of those signals in a pattern recognition problem needs a complex pipeline of preprocessing, feature extraction, and classification algorithms that need an apriori knowledge to avoid compatibility issues and a deep understanding of the studied signals. Moreover, some techniques need a huge computational power on the CPU and a huge size of RAM. Therefore, several papers proposed to use Deep Learning to get state of the art performance and visualization of the learned features to have more understanding about the neural signals. But, the convolutional neural network (Convnet) are not used properly and the results are often random when we reproduced the works. Hence, we propose a combination of the discrete wavelet transform (DWT) and a Convnet that processes raw EEG data. The DWT will be used to reduce the size of the data without losing useful information. Also, a modified version of EEGNET will be used to extract the features and classification.
This paper investigates the use of a wavelet transform to provide the input to a convnet in the context of BCI using EEG data. While the success of the approach is plausible (using time-frequency transforms as input to nets in audio is quite common), the paper has a number of issues. My first concern is that the paper is not well written (many typos, LaTeX formatting problems, sentences too long and hard to parse). This harms the readability of the contribution. Now regarding the contribution: given the amount of work on this topic and the marginal difficulty of the contribution, the only way I see to convince reviewers is to provide a massive empirical evaluation that proves a clear win over the state-of-the-art. Unfortunately the paper reports results on only one dataset with no clear improvement. I would encourage the author(s) to get some help with writing and to conduct a more thorough evaluation. See for example the community effort on MOAB (Mother of all benchmarks) for BCI (https://github.com/NeuroTechX/moabb), which aims to provide a clear and public baseline. Remarks on the text: - *which is an affordable solution, noninvasive, easy setup, and portability* -> *which is an affordable solution, noninvasive, with easy setup, and portability* - *state of the art* -> *state-of-the-art* when used as an adjective - check the difference between citet and citep - then I stopped, as there are too many.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nWEEGNET: an wavelet based Convnet for Brain-computer interfaces\nBrain-computer interfaces (BCI) are systems that link the brain with machines using brainwaves as a medium of communication using electroencephalography to explore the brain activity which is an affordable solution, noninvasive, easy setup, and portability. However, the neural signals are noisy, non-stationary, and nonlinear where the processing of those signals in a pattern recognition problem needs a complex pipeline of preprocessing, feature extraction, and classification algorithms that need an apriori knowledge to avoid compatibility issues and a deep understanding of the studied signals. Moreover, some techniques need a huge computational power on the CPU and a huge size of RAM. Therefore, several papers proposed to use Deep Learning to get state of the art performance and visualization of the learned features to have more understanding about the neural signals. But, the convolutional neural network (Convnet) are not used properly and the results are often random when we reproduced the works. Hence, we propose a combination of the discrete wavelet transform (DWT) and a Convnet that processes raw EEG data. The DWT will be used to reduce the size of the data without losing useful information. Also, a modified version of EEGNET will be used to extract the features and classification.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper investigates the use of a wavelet transform to provide the input to convnet in the context of BCI using EEG data. While the success of the approach is plausible (using time-frequency transforms as input to nets in audio is quite common), the paper has a number of issues. My first concern is that the paper is not well written (many typos, latex formatting style, sentences too long and hard to parse). This harms readability of the contribution. 
Now regarding the contribution: given the amount of work on this topic and the marginal difficulty of the contribution, the only way I see to convince reviewers is to provide a massive empirical evaluation that proves a clear win on the state-of-the-art. Unfortunately the paper reports results on only one dataset with no clear improvement. I would encourage the author(s) to get some help with writing and conduct a more thorough evaluation. See for example the community effort on MOAB (Mother of all benchmarks) for BCI (https://github.com/NeuroTechX/moabb) that aims to provide a clear and public baseline. Remarks on text: - *which is an affordable solution, noninvasive, easy setup, and portability* -> *which is an affordable solution, noninvasive, with easy setup, and portability* - *state of the art* -> *state-of-the-art* when used adjective - check difference between citet and citep then I stopped as there are too many.", "role": "assistant" } ]
[ "This paper investigates the use of a wavelet transform to provide the input to convnet in the context of BCI using EEG data.", "While the success of the approach is plausible (using time-frequency transforms as input to nets in audio is quite common), the paper has a number of issues.", "My first concern is that the paper is not well written (many typos, latex formatting style, sentences too long and hard to parse).", "This harms readability of the contribution.", "Now regarding the contribution: given the amount of work on this topic and the marginal difficulty of the contribution, the only way I see to convince reviewers is to provide a massive empirical evaluation that proves a clear win on the state-of-the-art.", "Unfortunately the paper reports results on only one dataset with no clear improvement.", "I would encourage the author(s) to get some help with writing and conduct a more thorough evaluation.", "See for example the community effort on MOAB (Mother of all benchmarks) for BCI (https://github.com/NeuroTechX/moabb) that aims to provide a clear and public baseline.", "Remarks on text: - *which is an affordable solution, noninvasive, easy setup, and portability* -> *which is an affordable solution, noninvasive, with easy setup, and portability* - *state of the art* -> *state-of-the-art* when used adjective - check difference between citet and citep then I stopped as there are too many." ]
{ "criticism": 4, "example": 1, "importance_and_relevance": 1, "materials_and_methods": 7, "praise": 1, "presentation_and_reporting": 3, "results_and_discussion": 2, "suggestion_and_solution": 1, "total": 9 }
0.444444
0.111111
0.111111
0.777778
0.111111
0.333333
0.222222
0.111111
9
1
1
1
1
1
1
1
1
2.222222
1.969703
0.252519
Irrationality can help reward inference
Specifying reward functions is difficult, which motivates the area of reward inference: learning rewards from human behavior. The starting assumption in the area is that human behavior is optimal given the desired reward function, but in reality people have many different forms of irrationality, from noise to myopia to risk aversion and beyond. This fact seems like it will be strictly harmful to reward inference: it is already hard to infer the reward from rational behavior, and noise and systematic biases make actions have less direct of a relationship to the reward. Our insight in this work is that, contrary to expectations, irrationality can actually help rather than hinder reward inference. For some types and amounts of irrationality, the expert now produces more varied policies compared to rational behavior, which help disambiguate among different reward parameters -- those that otherwise correspond to the same rational behavior. We put this to the test in a systematic analysis of the effect of irrationality on reward inference. We start by covering the space of irrationalities as deviations from the Bellman update, simulate expert behavior, and measure the accuracy of inference to contrast the different types and study the gains and losses. We provide a mutual information-based analysis of our findings, and wrap up by discussing the need to accurately model irrationality, as well as to what extent we might expect (or be able to train) real people to exhibit helpful irrationalities when teaching rewards to learners.
*First of all, TBH, I'm not very familiar with the field of reward inference / inverse reinforcement learning, so my assessment of the paper is only based on an educated guess.* The paper proposes a surprising insight: reasoning about the irrationality of human experts could help reward inference, which tries to recover the reward function from human expert demonstrations. The authors start from the Bellman update and add different modifications to it to see how different types and levels of rationality affect reward inference. The proposed modifications include adding Boltzmann sampling instead of taking a max, tuning different levels of optimism and pessimism, transforming the reward in a specific way, changing the discount to a myopic or hyperbolic discounting scheme, and so on. The authors conduct an experiment on a small 5x5 gridworld and find that most changes introducing irrationality into the Bellman update lead to lower log loss than the rational expert, which demonstrates that the proposed irrationality methods are effective for reward inference. In Figure 4 and Figure 5, the authors show a few specific examples of how different types and levels of irrationality affect the policy's behavior. The major limitation of the paper (the authors also mention it on Page 8) is that the proposed method is quite hard to generalize to many environments. The usefulness of the irrationality depends a lot on the environment, and the authors may have to conduct more extensive experiments to make their claims solid.
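One of the modifications the review lists, Boltzmann sampling in place of the max, amounts to replacing the hard max in the Bellman backup with a softmax-weighted expectation over actions. A toy sketch on a random MDP; the rationality coefficient beta, the MDP sizes, and this particular soft backup are illustrative assumptions rather than the paper's exact operators.

```python
import numpy as np

def softmax_backup(Q, beta):
    # Boltzmann-rational state value: a soft version of max_a Q(s, a).
    z = beta * Q
    z = z - z.max(axis=1, keepdims=True)                 # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return (p * Q).sum(axis=1)

def value_iteration(P, R, gamma=0.95, beta=None, iters=500):
    # P: (A, S, S) transition tensor, R: (S, A) rewards.
    A, S, _ = P.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * np.einsum("ast,t->sa", P, V)
        V = Q.max(axis=1) if beta is None else softmax_backup(Q, beta)
    return Q

rng = np.random.default_rng(0)
P = rng.random((2, 4, 4)); P /= P.sum(axis=2, keepdims=True)
R = rng.random((4, 2))
Q_rational = value_iteration(P, R)            # standard max backup
Q_boltzmann = value_iteration(P, R, beta=2.0) # "irrational" soft backup
```

As beta shrinks the simulated expert acts more randomly, and as beta grows the backup approaches the rational max; sweeping a knob of this kind is, roughly, how the paper appears to contrast types and amounts of irrationality.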
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nIrrationality can help reward inference\nSpecifying reward functions is difficult, which motivates the area of reward inference: learning rewards from human behavior. The starting assumption in the area is that human behavior is optimal given the desired reward function, but in reality people have many different forms of irrationality, from noise to myopia to risk aversion and beyond. This fact seems like it will be strictly harmful to reward inference: it is already hard to infer the reward from rational behavior, and noise and systematic biases make actions have less direct of a relationship to the reward. Our insight in this work is that, contrary to expectations, irrationality can actually help rather than hinder reward inference. For some types and amounts of irrationality, the expert now produces more varied policies compared to rational behavior, which help disambiguate among different reward parameters -- those that otherwise correspond to the same rational behavior. We put this to the test in a systematic analysis of the effect of irrationality on reward inference. We start by covering the space of irrationalities as deviations from the Bellman update, simulate expert behavior, and measure the accuracy of inference to contrast the different types and study the gains and losses. We provide a mutual information-based analysis of our findings, and wrap up by discussing the need to accurately model irrationality, as well as to what extent we might expect (or be able to train) real people to exhibit helpful irrationalities when teaching rewards to learners.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n*First of all, TBH, I*m not very familiar with the field of reward inference / inverse reinforcement learning, my assessment to the paper is only based on educational guess.* The paper proposes a surprising insight that reasoning about irrationality of human experts could help better reward inference, which is trying to get the reward function from human expert demonstrations. 
The authors start from Bellman update and try to add different modifications to the Bellman update to see how different types and levels of rationality affect the reward inference. The proposed modifications include adding Boltzmann sampling instead of taking a max, tuning different levels of optimism and pessimism, transforming the reward in specific way, change the discount into myopic discount or into hyperbolic discounting, and so on. The authors conduct an experiment on a small 5x5 gridworld and find that most changes to introduce irrationality into the Bellman update have led to lower log loss than the rational expert, which demonstrates that the proposed irrationality methods are effective on reward inference. In Figure 4 and Figure 5, the authors show a few specific examples on how different types of irrationality and different levels of them could affect the policy*s behavior. The major limitation of the paper (the authors also mention it in Page 8) is that the proposed method is quite hard to generalize into many environments. The usefulness of the irrationality depends a lot on the environments and the authors may have to conduct more extensive experiments to make their claims solid.", "role": "assistant" } ]
[ "*First of all, TBH, I*m not very familiar with the field of reward inference / inverse reinforcement learning, my assessment to the paper is only based on educational guess.", "* The paper proposes a surprising insight that reasoning about irrationality of human experts could help better reward inference, which is trying to get the reward function from human expert demonstrations.", "The authors start from Bellman update and try to add different modifications to the Bellman update to see how different types and levels of rationality affect the reward inference.", "The proposed modifications include adding Boltzmann sampling instead of taking a max, tuning different levels of optimism and pessimism, transforming the reward in specific way, change the discount into myopic discount or into hyperbolic discounting, and so on.", "The authors conduct an experiment on a small 5x5 gridworld and find that most changes to introduce irrationality into the Bellman update have led to lower log loss than the rational expert, which demonstrates that the proposed irrationality methods are effective on reward inference.", "In Figure 4 and Figure 5, the authors show a few specific examples on how different types of irrationality and different levels of them could affect the policy*s behavior.", "The major limitation of the paper (the authors also mention it in Page 8) is that the proposed method is quite hard to generalize into many environments.", "The usefulness of the irrationality depends a lot on the environments and the authors may have to conduct more extensive experiments to make their claims solid." ]
{ "criticism": 2, "example": 2, "importance_and_relevance": 2, "materials_and_methods": 5, "praise": 1, "presentation_and_reporting": 0, "results_and_discussion": 3, "suggestion_and_solution": 4, "total": 8 }
0.25
0.25
0.25
0.625
0.125
0
0.375
0.5
8
1
1
1
1
1
0
1
1
2.375
1.980439
0.394561
When Do Variational Autoencoders Know What They Don't Know?
Recently, the ability of deep generative models to detect outliers has been called into question because of the demonstration that they frequently assign higher probability density to samples from completely different data sets than were used for training. For example, a model trained on CIFAR-10 may counter-intuitively attribute higher likelihood to samples obtained from SVHN. In this work, we closely examine this phenomena in the specific context of variational autoencoders, a commonly-used approach for anomaly detection. In particular, we demonstrate that VAEs, when appropriately designed and trained, are in fact often proficient in differentiating inlier and outlier distributions, e.g., FashionMNIST vs MNIST, CIFAR-10 vs SVHN and CelebA. We describe various mechanisms that mitigate this capability, including the paradoxical necessity of large or unbounded gradients, which have sometimes been observed to occur during training of VAE models.
The paper studies the problem of detecting outliers using generative models (specifically VAEs) in detail and describes the conditions under which VAEs can be used for outlier detection. The paper is well-written and has a very detailed description of its claims. I found the paper insightful and clear overall, and I believe we need more papers like this in the area of deep generative models. I have some questions and comments which, if addressed, I think can increase the impact of the paper: - The interaction between the encoder and decoder seems to be important based on the latest papers in this area, and just assuming “that the encoder network maintains sufficient complexity to produce a reasonably tight variational bound given the decoder” may not be enough for making conclusions about VAEs (see for instance “Tighter Variational Bounds are Not Necessarily Better” by Rainforth et al. 2019). Are there any experiments you can add to show the robustness of your results with respect to the encoder network? - It was mentioned that for choosing the right capacity “it can be increased until there is a significant gap between train and test NLL values on the inlier data”; however, this was not shown in the experiments. Can you add columns to Table 1 and Table 2 showing that after some point there is a gap between the train and test NLL? - The current version of the paper seems to end abruptly and without any summary of the contributions; it would be nice to have a discussion/conclusion section which discusses the main assumptions and limitations of the arguments in the paper.
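The outlier-detection use of a VAE that the paper and review discuss boils down to scoring each sample with (a bound on) its log-likelihood and flagging low scores. A compact sketch of that scoring step in PyTorch; the architecture, the Bernoulli decoder, the 5% quantile threshold, and the assumption that the model has already been trained on inlier data are illustrative choices, not the paper's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Gaussian encoder, Bernoulli decoder; all sizes are arbitrary illustration."""
    def __init__(self, d_in=784, d_z=16, d_h=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_h), nn.ReLU())
        self.mu = nn.Linear(d_h, d_z)
        self.logvar = nn.Linear(d_h, d_z)
        self.dec = nn.Sequential(nn.Linear(d_z, d_h), nn.ReLU(), nn.Linear(d_h, d_in))

    def elbo(self, x, n_samples=8):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=1)
        rec = 0.0
        for _ in range(n_samples):
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            logits = self.dec(z)
            rec = rec - F.binary_cross_entropy_with_logits(logits, x, reduction="none").sum(dim=1)
        return rec / n_samples - kl   # per-sample ELBO: higher means "more inlier"

vae = TinyVAE()                        # in practice: trained on the inlier set first
inliers = torch.rand(64, 784)          # stand-ins for held-out inlier and candidate batches
candidates = torch.rand(16, 784)
with torch.no_grad():
    threshold = vae.elbo(inliers).quantile(0.05)   # calibrate a cutoff on held-out inliers
    flagged = vae.elbo(candidates) < threshold     # True where the VAE deems a sample an outlier
```

The review's suggestion about capacity fits the same quantities: comparing the train and test ELBO/NLL on inlier data is what reveals the gap the authors mention.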
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nWhen Do Variational Autoencoders Know What They Don*t Know?\nRecently, the ability of deep generative models to detect outliers has been called into question because of the demonstration that they frequently assign higher probability density to samples from completely different data sets than were used for training. For example, a model trained on CIFAR-10 may counter-intuitively attribute higher likelihood to samples obtained from SVHN. In this work, we closely examine this phenomena in the specific context of variational autoencoders, a commonly-used approach for anomaly detection. In particular, we demonstrate that VAEs, when appropriately designed and trained, are in fact often proficient in differentiating inlier and outlier distributions, e.g., FashionMNIST vs MNIST, CIFAR-10 vs SVHN and CelebA. We describe various mechanisms that mitigate this capability, including the paradoxical necessity of large or unbounded gradients, which have sometimes been observed to occur during training of VAE models.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper studies the problem of detecting outliers using generative models (specifically VAEs) in the detail and describes the condition in which VAEs can be used for outlier detection. The paper is well-written and has a very detailed description of its claims. I found the paper insightful and clear overall, and I believe we need more papers like this in the area of deep generative models. I have some questions and comments which I think addressing them can increase the impact of the paper: - The interaction between the encoder/decoder seems to be important based on the latest papers in this area and just assuming “that the encoder network maintains sufficient complexity to produce a reasonably tight variational bound given the decoder” may not be enough for making conclusions about VAEs (see for instance “Tighter Variational Bounds are Not Necessarily Better” by Rainforth et al. 2019). 
Are there any experiments that you can add to show the robustness of your results with respect to the encoder network. - It was mentioned that for choosing the right capacity “it can be increased until there is a significant gap between train and test NLL values on the inlier data”; however, this was not shown in the experiments. Can you add the columns in Table 1 and Table 2 that after some point there will be a gap between the train and test NLL? - The current version of the paper seems to be ended abruptly and without any summarization of the contributions, it would be nice to have a discussion/conclusion section which discusses the main assumptions and limitations of the arguments in the paper.", "role": "assistant" } ]
[ "The paper studies the problem of detecting outliers using generative models (specifically VAEs) in the detail and describes the condition in which VAEs can be used for outlier detection.", "The paper is well-written and has a very detailed description of its claims.", "I found the paper insightful and clear overall, and I believe we need more papers like this in the area of deep generative models.", "I have some questions and comments which I think addressing them can increase the impact of the paper: - The interaction between the encoder/decoder seems to be important based on the latest papers in this area and just assuming “that the encoder network maintains sufficient complexity to produce a reasonably tight variational bound given the decoder” may not be enough for making conclusions about VAEs (see for instance “Tighter Variational Bounds are Not Necessarily Better” by Rainforth et al.", "2019).", "Are there any experiments that you can add to show the robustness of your results with respect to the encoder network.", "- It was mentioned that for choosing the right capacity “it can be increased until there is a significant gap between train and test NLL values on the inlier data”; however, this was not shown in the experiments.", "Can you add the columns in Table 1 and Table 2 that after some point there will be a gap between the train and test NLL?", "- The current version of the paper seems to be ended abruptly and without any summarization of the contributions, it would be nice to have a discussion/conclusion section which discusses the main assumptions and limitations of the arguments in the paper." ]
{ "criticism": 2, "example": 1, "importance_and_relevance": 1, "materials_and_methods": 4, "praise": 2, "presentation_and_reporting": 3, "results_and_discussion": 3, "suggestion_and_solution": 4, "total": 9 }
0.222222
0.111111
0.111111
0.444444
0.222222
0.333333
0.333333
0.444444
9
1
1
1
1
1
1
1
1
2.222222
1.969703
0.252519
ProxNet: End-to-End Learning of Structured Representation by Proximal Mapping
Underpinning the success of deep learning is the effective regularization that allows a broad range of structures in data to be compactly modeled in a deep architecture. Examples include transformation invariances, robustness to adversarial/random perturbations, and correlations between multiple modalities. However, most existing methods incorporate such priors either by auto-encoders, whose result is used to initialize supervised learning, or by augmenting the data with exemplifications of the transformations which, despite the improved performance of supervised learning, leaves it unclear whether the learned latent representation does encode the desired regularities. To address these issues, this work proposes an emph{end-to-end} representation learning framework that allows prior structures to be encoded emph{explicitly} in the hidden layers, and to be trained efficiently in conjunction with the supervised target. Our approach is based on proximal mapping in a reproducing kernel Hilbert space, and leverages differentiable optimization. The resulting technique is applied to generalize dropout and invariant kernel warping, and to develop novel algorithms for multiview modeling and robust temporal learning.
The authors of this paper extend the concept of proximal mapping from sparse representation to supervised and unsupervised structures, aiming at learning structured representations. It generalizes dropout and invariant kernel warping, and is applied to multiview learning and sequence modelling. Generally speaking, without sufficient background knowledge, it is difficult for a reader to understand the major contributions of this paper, due to unexplained terms and notation. For example, in the second paragraph of Section 2, I think x in the activation functions should correspond to z in Equation (1) in some places. Thus, the authors are encouraged to improve the writing so that it clearly conveys the idea without too much mingling with specific techniques. Visual examples, such as Figure 2, do help. Moreover, it is unclear how a structured representation is defined, and how such a representation is learned by the prox-map. Finally, the experimental results do support that introducing the prox-map into various neural networks improves performance.
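For readers unfamiliar with the terminology, a proximal mapping is the solution of a small regularized least-squares problem, prox_f(z) = argmin_x (1/2)||x - z||^2 + f(x), and ProxNet's idea is to use such maps (in an RKHS) as differentiable layers. The sketch below only shows the generic notion with the classic closed-form case, the L1 penalty and its soft-thresholding prox; it is not the paper's construction.

```python
import numpy as np

def prox_l1(z, lam):
    # prox_{lam*||.||_1}(z) = argmin_x 0.5*||x - z||^2 + lam*||x||_1
    # has the closed-form soft-thresholding solution below.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

z = np.array([-1.5, -0.2, 0.05, 0.8, 2.0])
print(prox_l1(z, 0.5))   # [-1.  -0.   0.   0.3  1.5]
```

Because the map has a closed form (or can be computed by a differentiable solver), it can sit between network layers and be trained end-to-end, which is the sense in which the paper treats structural priors as layers.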
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nProxNet: End-to-End Learning of Structured Representation by Proximal Mapping\nUnderpinning the success of deep learning is the effective regularization that allows a broad range of structures in data to be compactly modeled in a deep architecture. Examples include transformation invariances, robustness to adversarial/random perturbations, and correlations between multiple modalities. However, most existing methods incorporate such priors either by auto-encoders, whose result is used to initialize supervised learning, or by augmenting the data with exemplifications of the transformations which, despite the improved performance of supervised learning, leaves it unclear whether the learned latent representation does encode the desired regularities. To address these issues, this work proposes an emph{end-to-end} representation learning framework that allows prior structures to be encoded emph{explicitly} in the hidden layers, and to be trained efficiently in conjunction with the supervised target. Our approach is based on proximal mapping in a reproducing kernel Hilbert space, and leverages differentiable optimization. The resulting technique is applied to generalize dropout and invariant kernel warping, and to develop novel algorithms for multiview modeling and robust temporal learning.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors of this paper introduce the concept of proximal mapping from sparse representation to supervised and unsupervised structures, aiming at learning structured representations. It generalize dropout and invariant kernel warping, and applied in multiview learning and sequence modelling. Generally speaking, without sufficient background knowledge, it is difficult for a reader to understand the major contributions of this paper, due to unexplained terms and notations. For example in the second paragraph of Section 2, I think x in activation functions should correspond to z in Equation (1) in some places. 
Thus, the authors are suggested to improve writing to clearly show the idea without too much mingling with specific techniques. Visual examples, such as Figure 2, do help. Moreover, it is unclear how structured representation is defined, and how structured representation is learned by prox-map. Finally, experimental results do support that the introducing prox-map in various neural networks improves performance.", "role": "assistant" } ]
[ "The authors of this paper introduce the concept of proximal mapping from sparse representation to supervised and unsupervised structures, aiming at learning structured representations.", "It generalize dropout and invariant kernel warping, and applied in multiview learning and sequence modelling.", "Generally speaking, without sufficient background knowledge, it is difficult for a reader to understand the major contributions of this paper, due to unexplained terms and notations.", "For example in the second paragraph of Section 2, I think x in activation functions should correspond to z in Equation (1) in some places.", "Thus, the authors are suggested to improve writing to clearly show the idea without too much mingling with specific techniques.", "Visual examples, such as Figure 2, do help.", "Moreover, it is unclear how structured representation is defined, and how structured representation is learned by prox-map.", "Finally, experimental results do support that the introducing prox-map in various neural networks improves performance." ]
{ "criticism": 2, "example": 2, "importance_and_relevance": 1, "materials_and_methods": 5, "praise": 0, "presentation_and_reporting": 4, "results_and_discussion": 1, "suggestion_and_solution": 2, "total": 8 }
0.25
0.25
0.125
0.625
0
0.5
0.125
0.25
8
1
1
1
1
0
1
1
1
2.125
1.730439
0.394561
Target-directed Atomic Importance Estimation via Reverse Self-attention
Estimating the importance of each atom in a molecule is one of the most appealing and challenging problems in chemistry, physics, and material engineering. The most common way to estimate the atomic importance is to compute the electronic structure using density-functional theory (DFT), and then to interpret it using domain knowledge of human experts. However, this conventional approach is impractical to the large molecular database because DFT calculation requires huge computation, specifically, O(n^4) time complexity w.r.t. the number of electrons in a molecule. Furthermore, the calculation results should be interpreted by the human experts to estimate the atomic importance in terms of the target molecular property. To tackle this problem, we first exploit machine learning-based approach for the atomic importance estimation. To this end, we propose reverse self-attention on graph neural networks and integrate it with graph-based molecular description. Our method provides an efficiently-automated and target-directed way to estimate the atomic importance without any domain knowledge on chemistry and physics.
This paper proposes a reverse self-attention on graph neural networks and integrates it with a graph-based molecular description. It provides an efficient, automated, and target-directed way to estimate atomic importance without any domain knowledge of chemistry and physics. In particular, it trains a graph attention network to predict the molecular property using the molecular network and calculates the atomic importance scores based on the sum of the incoming attention weights. Pros: This paper presents an interesting application of the graph attention network to other scientific areas. The developed reverse self-attention score, although simple, is useful in estimating the importance of atoms in a molecular network. Cons: However, the paper also needs some improvement in its evaluation (see detailed comments below). Detailed comments: • In addition to the evaluation metric (7), it is also important to hire human experts to evaluate the performance. For example, just sample some test cases and have them annotated by human experts so that they can be compared with the predicted results. • In order to further justify the metric (7), it is also important to report how consistent (7) is with the human evaluation. • The method is not compared with any of the previous methods/baselines. At least, it should be evaluated how accurate the proposed automatic ML-based method is compared to the previous (more expensive) human-based method. • It would be better to mention earlier what molecular properties are to be predicted by the GAT during training. Just a few simple examples would be sufficient.
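The score the review describes, the total attention an atom receives, can be illustrated with a random stand-in for a trained GAT's attention matrix; the adjacency, the attention weights, and the atom count below are synthetic placeholders, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_atoms = 6
adj = (rng.random((n_atoms, n_atoms)) < 0.4).astype(float)
np.fill_diagonal(adj, 1.0)                        # self-loops, as in GAT

# Stand-in for learned GAT attention: row i holds the weights node i
# assigns over its neighbourhood, normalised to sum to one.
logits = rng.standard_normal((n_atoms, n_atoms))
logits[adj == 0] = -np.inf
attn = np.exp(logits - logits.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)

# "Reverse" reading: importance of atom j = total attention it receives.
importance = attn.sum(axis=0)
ranking = np.argsort(-importance)
print(importance.round(3), ranking)
```

With a trained model, attn would come from the GAT's attention layer, and the column sums would rank atoms by how much the rest of the molecule attends to them when predicting the target property.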
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTarget-directed Atomic Importance Estimation via Reverse Self-attention\nEstimating the importance of each atom in a molecule is one of the most appealing and challenging problems in chemistry, physics, and material engineering. The most common way to estimate the atomic importance is to compute the electronic structure using density-functional theory (DFT), and then to interpret it using domain knowledge of human experts. However, this conventional approach is impractical to the large molecular database because DFT calculation requires huge computation, specifically, O(n^4) time complexity w.r.t. the number of electrons in a molecule. Furthermore, the calculation results should be interpreted by the human experts to estimate the atomic importance in terms of the target molecular property. To tackle this problem, we first exploit machine learning-based approach for the atomic importance estimation. To this end, we propose reverse self-attention on graph neural networks and integrate it with graph-based molecular description. Our method provides an efficiently-automated and target-directed way to estimate the atomic importance without any domain knowledge on chemistry and physics.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a reverse self-attention on graph neural networks and integrate it with graph-based molecular description. It provides an efficient-automated and target-directed way to estimate the atomic importance without any domain knowledge on chemistry and physics. In particular, it trains graph attention network to predict the molecular property using the molecular network and calculate the atomic importance scores based on the sum of the incoming attention weights. Pros: This paper presents an interesting application of the graph attention network to other scientific areas. And the developed reverse self-attention score, although simple, is useful in estimating the importance of atoms in a molecular network. 
Cons: However, the paper also needs some improvement in its evaluation (see detailed comments below). Detailed comments: • In addition to the evaluation metric (7), it is also important to hire human experts to evaluate the performance. For example, just sample some test samples and annotate them by human experts so that they can be compared with the predicted results. • In order to further justify the metric (7), it is also important to report how consistent (7) is with the human evaluation. • The method is not compared with any of the previous methods/baselines. At least, it should evaluate how accurate of the proposed automatic ML-based method is when compared to previous (more expensive) human-based method. • It would be better to mention earlier what are the molecular property to be predicted by GAT during training. Just a few simple examples would be sufficient.", "role": "assistant" } ]
[ "This paper proposes a reverse self-attention on graph neural networks and integrate it with graph-based molecular description.", "It provides an efficient-automated and target-directed way to estimate the atomic importance without any domain knowledge on chemistry and physics.", "In particular, it trains graph attention network to predict the molecular property using the molecular network and calculate the atomic importance scores based on the sum of the incoming attention weights.", "Pros: This paper presents an interesting application of the graph attention network to other scientific areas.", "And the developed reverse self-attention score, although simple, is useful in estimating the importance of atoms in a molecular network.", "Cons: However, the paper also needs some improvement in its evaluation (see detailed comments below).", "Detailed comments: • In addition to the evaluation metric (7), it is also important to hire human experts to evaluate the performance.", "For example, just sample some test samples and annotate them by human experts so that they can be compared with the predicted results.", "• In order to further justify the metric (7), it is also important to report how consistent (7) is with the human evaluation.", "• The method is not compared with any of the previous methods/baselines.", "At least, it should evaluate how accurate of the proposed automatic ML-based method is when compared to previous (more expensive) human-based method.", "• It would be better to mention earlier what are the molecular property to be predicted by GAT during training.", "Just a few simple examples would be sufficient." ]
{ "criticism": 0, "example": 1, "importance_and_relevance": 3, "materials_and_methods": 10, "praise": 3, "presentation_and_reporting": 3, "results_and_discussion": 1, "suggestion_and_solution": 7, "total": 13 }
criticism ratio: 0
example ratio: 0.076923
importance_and_relevance ratio: 0.230769
materials_and_methods ratio: 0.769231
praise ratio: 0.230769
presentation_and_reporting ratio: 0.230769
results_and_discussion ratio: 0.076923
suggestion_and_solution ratio: 0.538462
total: 13
has_criticism: 0
has_example: 1
has_importance_and_relevance: 1
has_materials_and_methods: 1
has_praise: 1
has_presentation_and_reporting: 1
has_results_and_discussion: 1
has_suggestion_and_solution: 1
reward_value: 2.153846
reward_value_length_adjusted: 2.153846
length_penalty: 0
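The reverse self-attention score described in the record above reduces, on one reading, to summing the attention each atom receives from the rest of the molecule. A minimal NumPy sketch of that column-sum reading (the attention matrix here is random and purely illustrative, not the output of the authors' trained GAT):

```python
import numpy as np

def atomic_importance(attention, mask=None):
    """Reverse self-attention reading: the importance of atom j is the
    total attention it receives from every atom in the molecule
    (a column sum of the attention matrix).

    attention : (n_atoms, n_atoms) array, attention[i, j] is the weight
                atom i places on atom j.
    mask      : optional boolean (n_atoms,) array marking real atoms,
                useful when molecules are padded to a fixed size.
    """
    scores = attention.sum(axis=0)
    if mask is not None:
        scores = np.where(mask, scores, 0.0)
    total = scores.sum()
    # Normalize so scores are comparable across molecules of different size.
    return scores / total if total > 0 else scores

# Toy 4-atom molecule with a random row-stochastic attention matrix.
rng = np.random.default_rng(0)
att = rng.random((4, 4))
att = att / att.sum(axis=1, keepdims=True)
print(atomic_importance(att))
```

In practice the attention matrix would come from the trained graph attention network, averaged over heads and layers as the method prescribes; the normalization step above is only one convenient convention for comparing molecules.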
Cost-Effective Interactive Neural Attention Learning
We propose a novel interactive attention learning framework which we refer to as Interactive Attention Learning (IAL), in which the human annotators interactively manipulate the allocated attentions to correct the model’s behavior, by updating the attention-generating model without having to retrain the network. For efficient update of the attention generator without retraining, we propose a novel attention mechanism Neural Attention Process (NAP), which can generate stochastic attentions based on scarce training instances, and can incorporate new training instances without retraining. Further, to minimize human interaction cost, we propose a cost-effective algorithm that selects the most negative training instances that yield incorrect and non-intuitive interpretation with influence function and re-rank the attentions on the input features by their uncertainties, such that the annotators label the instances and attentions that are more influential to the prediction first. We validate IAL on multiple datasets from the healthcare domain, on which it not only significantly outperforms other baselines, but also achieves explicitly trained attention for interpretability that agrees well with human interpretation without expensive human labeling procedure.
The authors introduce an interactive attention learning framework called Neural Attention Process (NAP) which produces weight predictions over the input variables in addition to the usual target predictions. The motivation behind this type of model is to obtain an interpretable explanation of the model’s output which can be shaped by a small set of human expert annotations. At a higher level NAP seems to be an extension of a previously introduced model called RETAIN with an additional component inspired by Neural Processes (NPs) which incorporates the annotations of the experts. The authors test their model on five different tasks from 3 datasets (health records, fitness and real estate) and compare to a number of related baselines. While I think the idea is interesting, in my opinion this paper is not ready for publication and I would argue for it to be rejected, for the following reasons: * Presentation: The paper is over badly written, sloppy and not well structured. * Badly written: there are lots of grammatical errors in the paper to the point that it severely affects its clarity (for example, the subject of a sentence often changes from plural to singular or vice versa within the same sentence and it’s unclear what the authors are referring to). * Sloppy: many variables are not defined in the paper, and some are called different names depending on the figure or part of the text (i.e. the use of ‘s’ or ‘t’ interchangeably to refer to the iterations). At some point even the name of the model (referred to as NAP for the majority of the paper) is just referred to as NP-Selective in the text and some figures (compare nomenclature in table 1 vs figure 5). This lack of attention to detail (although the name of the main model should hardly count as a detail) makes me worry about the scientific rigorousness of the work. * Not well structured: The paper is not very easy to follow as there are jumps between different models (RETAIN, NPs, NAPs) which are not always evident. On that note, I would have liked to see an objective function to know what exactly is being used to optimise theta. * Significance: Overall the results don’t seem incredibly significant (e.g. figure 6). If one of the main claims is that the number or annotated examples is smaller with NAP the authors should provide experimental evidence that this is the case (as this would be one of the most significant benefits of the model). * Motivation: a key motivation for this paper, as stated by the authors, is the desire to obtain interpretable representations, which in itself is an important research problem. However, I am not sure that the suggested solution is addressing this issue. If I understand correctly, what the model learns is, in essence, a weight mask over the inputs that determines how much each of the dimensions of the input contribute to the output. This could be achieved with any attention model in and is in itself not novel. I think a more interesting motivation for this model might be its ability to incorporate some human annotations that teach the model where to attend. However if this was the main story I think the authors should focus on making this a stronger argument and proving that narrowing down the attention with said annotations actually leads to better/faster/more robust learning, rather than focussing on the interpretability narrative. 
Some more specific comments and recommendations for the authors (I think there are some positive aspects of this paper and I hope that I can give a few points of advice to help improve its quality.) I will go through them in the order of the paper: 1) The name Neural attention process might be a bit confusing given that there are already Attentive Neural Processes. I would strongly recommend changing the name. 2) Check the language and make sure it’s correct, the introduction is particularly full of errors and you don’t want to lose the reader from the beginning. 3) If you are talking about interpretability you would have to mention as well all the work on interpretable representations (e.g. disentangled representations etc) and if you talk about attention, it is probably worth mentioning transformers. 4) Algorithm 1: this is not very clear. I would define the variables at the top that are not self-explanatory (such as S). Also, from the look of the algorithm theta is not updated in the else loop. 5) The change in notation is very confusing. It’s not always clear when you use ‘v’ vs ‘l’ or ‘s’ vs ‘t’. Sometimes it seems it depends on which of the different models you are talking about, but either way you should try to make it all uniform. 6) I would strongly recommend to specify the loss function and the regularisation term as this will make the task much clearer to the reader. 7) I would make the difference to RETAIN very clear and argue why you think your addition improves on the original (from the text at the moment it’s not even clear when you are describing RETAIN and when you talk about your model, maybe making a subsection would help?) 8) Figure 3 is too small. 9) You claim to have 5 datasets, but the way I would interpret your setup, you have 5 tasks from 3 datasets. 10) I would say a sentence or two about why you chose the baselines you chose and why you are hoping that they prove your different points. 11) Figure 6 has no axis labels on the plot itself, which are definitely necessary. I also recommend not using red and green as the two colours as it will be hard for colour blind to distinguish between those two.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nCost-Effective Interactive Neural Attention Learning\nWe propose a novel interactive attention learning framework which we refer to as Interactive Attention Learning (IAL), in which the human annotators interactively manipulate the allocated attentions to correct the model’s behavior, by updating the attention-generating model without having to retrain the network. For efficient update of the attention generator without retraining, we propose a novel attention mechanism Neural Attention Process (NAP), which can generate stochastic attentions based on scarce training instances, and can incorporate new training instances without retraining. Further, to minimize human interaction cost, we propose a cost-effective algorithm that selects the most negative training instances that yield incorrect and non-intuitive interpretation with influence function and re-rank the attentions on the input features by their uncertainties, such that the annotators label the instances and attentions that are more influential to the prediction first. We validate IAL on multiple datasets from the healthcare domain, on which it not only significantly outperforms other baselines, but also achieves explicitly trained attention for interpretability that agrees well with human interpretation without expensive human labeling procedure.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors introduce an interactive attention learning framework called Neural Attention Process (NAP) which produces weight predictions over the input variables in addition to the usual target predictions. The motivation behind this type of model is to obtain an interpretable explanation of the model’s output which can be shaped by a small set of human expert annotations. At a higher level NAP seems to be an extension of a previously introduced model called RETAIN with an additional component inspired by Neural Processes (NPs) which incorporates the annotations of the experts. 
The authors test their model on five different tasks from 3 datasets (health records, fitness and real estate) and compare to a number of related baselines. While I think the idea is interesting, in my opinion this paper is not ready for publication and I would argue for it to be rejected, for the following reasons: * Presentation: The paper is over badly written, sloppy and not well structured. * Badly written: there are lots of grammatical errors in the paper to the point that it severely affects its clarity (for example, the subject of a sentence often changes from plural to singular or vice versa within the same sentence and it’s unclear what the authors are referring to). * Sloppy: many variables are not defined in the paper, and some are called different names depending on the figure or part of the text (i.e. the use of ‘s’ or ‘t’ interchangeably to refer to the iterations). At some point even the name of the model (referred to as NAP for the majority of the paper) is just referred to as NP-Selective in the text and some figures (compare nomenclature in table 1 vs figure 5). This lack of attention to detail (although the name of the main model should hardly count as a detail) makes me worry about the scientific rigorousness of the work. * Not well structured: The paper is not very easy to follow as there are jumps between different models (RETAIN, NPs, NAPs) which are not always evident. On that note, I would have liked to see an objective function to know what exactly is being used to optimise theta. * Significance: Overall the results don’t seem incredibly significant (e.g. figure 6). If one of the main claims is that the number or annotated examples is smaller with NAP the authors should provide experimental evidence that this is the case (as this would be one of the most significant benefits of the model). * Motivation: a key motivation for this paper, as stated by the authors, is the desire to obtain interpretable representations, which in itself is an important research problem. However, I am not sure that the suggested solution is addressing this issue. If I understand correctly, what the model learns is, in essence, a weight mask over the inputs that determines how much each of the dimensions of the input contribute to the output. This could be achieved with any attention model in and is in itself not novel. I think a more interesting motivation for this model might be its ability to incorporate some human annotations that teach the model where to attend. However if this was the main story I think the authors should focus on making this a stronger argument and proving that narrowing down the attention with said annotations actually leads to better/faster/more robust learning, rather than focussing on the interpretability narrative. Some more specific comments and recommendations for the authors (I think there are some positive aspects of this paper and I hope that I can give a few points of advice to help improve its quality.) I will go through them in the order of the paper: 1) The name Neural attention process might be a bit confusing given that there are already Attentive Neural Processes. I would strongly recommend changing the name. 2) Check the language and make sure it’s correct, the introduction is particularly full of errors and you don’t want to lose the reader from the beginning. 3) If you are talking about interpretability you would have to mention as well all the work on interpretable representations (e.g. 
disentangled representations etc) and if you talk about attention, it is probably worth mentioning transformers. 4) Algorithm 1: this is not very clear. I would define the variables at the top that are not self-explanatory (such as S). Also, from the look of the algorithm theta is not updated in the else loop. 5) The change in notation is very confusing. It’s not always clear when you use ‘v’ vs ‘l’ or ‘s’ vs ‘t’. Sometimes it seems it depends on which of the different models you are talking about, but either way you should try to make it all uniform. 6) I would strongly recommend to specify the loss function and the regularisation term as this will make the task much clearer to the reader. 7) I would make the difference to RETAIN very clear and argue why you think your addition improves on the original (from the text at the moment it’s not even clear when you are describing RETAIN and when you talk about your model, maybe making a subsection would help?) 8) Figure 3 is too small. 9) You claim to have 5 datasets, but the way I would interpret your setup, you have 5 tasks from 3 datasets. 10) I would say a sentence or two about why you chose the baselines you chose and why you are hoping that they prove your different points. 11) Figure 6 has no axis labels on the plot itself, which are definitely necessary. I also recommend not using red and green as the two colours as it will be hard for colour blind to distinguish between those two.", "role": "assistant" } ]
[ "The authors introduce an interactive attention learning framework called Neural Attention Process (NAP) which produces weight predictions over the input variables in addition to the usual target predictions.", "The motivation behind this type of model is to obtain an interpretable explanation of the model’s output which can be shaped by a small set of human expert annotations.", "At a higher level NAP seems to be an extension of a previously introduced model called RETAIN with an additional component inspired by Neural Processes (NPs) which incorporates the annotations of the experts.", "The authors test their model on five different tasks from 3 datasets (health records, fitness and real estate) and compare to a number of related baselines.", "While I think the idea is interesting, in my opinion this paper is not ready for publication and I would argue for it to be rejected, for the following reasons: * Presentation: The paper is over badly written, sloppy and not well structured.", "* Badly written: there are lots of grammatical errors in the paper to the point that it severely affects its clarity (for example, the subject of a sentence often changes from plural to singular or vice versa within the same sentence and it’s unclear what the authors are referring to).", "* Sloppy: many variables are not defined in the paper, and some are called different names depending on the figure or part of the text (i.e.", "the use of ‘s’ or ‘t’ interchangeably to refer to the iterations).", "At some point even the name of the model (referred to as NAP for the majority of the paper) is just referred to as NP-Selective in the text and some figures (compare nomenclature in table 1 vs figure 5).", "This lack of attention to detail (although the name of the main model should hardly count as a detail) makes me worry about the scientific rigorousness of the work.", "* Not well structured: The paper is not very easy to follow as there are jumps between different models (RETAIN, NPs, NAPs) which are not always evident.", "On that note, I would have liked to see an objective function to know what exactly is being used to optimise theta.", "* Significance: Overall the results don’t seem incredibly significant (e.g.", "figure 6).", "If one of the main claims is that the number or annotated examples is smaller with NAP the authors should provide experimental evidence that this is the case (as this would be one of the most significant benefits of the model).", "* Motivation: a key motivation for this paper, as stated by the authors, is the desire to obtain interpretable representations, which in itself is an important research problem.", "However, I am not sure that the suggested solution is addressing this issue.", "If I understand correctly, what the model learns is, in essence, a weight mask over the inputs that determines how much each of the dimensions of the input contribute to the output.", "This could be achieved with any attention model in and is in itself not novel.", "I think a more interesting motivation for this model might be its ability to incorporate some human annotations that teach the model where to attend.", "However if this was the main story I think the authors should focus on making this a stronger argument and proving that narrowing down the attention with said annotations actually leads to better/faster/more robust learning, rather than focussing on the interpretability narrative.", "Some more specific comments and recommendations for the authors (I think there are some positive aspects 
of this paper and I hope that I can give a few points of advice to help improve its quality.)", "I will go through them in the order of the paper: 1) The name Neural attention process might be a bit confusing given that there are already Attentive Neural Processes.", "I would strongly recommend changing the name.", "2) Check the language and make sure it’s correct, the introduction is particularly full of errors and you don’t want to lose the reader from the beginning.", "3) If you are talking about interpretability you would have to mention as well all the work on interpretable representations (e.g.", "disentangled representations etc) and if you talk about attention, it is probably worth mentioning transformers.", "4) Algorithm 1: this is not very clear.", "I would define the variables at the top that are not self-explanatory (such as S).", "Also, from the look of the algorithm theta is not updated in the else loop.", "5) The change in notation is very confusing.", "It’s not always clear when you use ‘v’ vs ‘l’ or ‘s’ vs ‘t’.", "Sometimes it seems it depends on which of the different models you are talking about, but either way you should try to make it all uniform.", "6) I would strongly recommend to specify the loss function and the regularisation term as this will make the task much clearer to the reader.", "7) I would make the difference to RETAIN very clear and argue why you think your addition improves on the original (from the text at the moment it’s not even clear when you are describing RETAIN and when you talk about your model, maybe making a subsection would help?)", "8) Figure 3 is too small.", "9) You claim to have 5 datasets, but the way I would interpret your setup, you have 5 tasks from 3 datasets.", "10) I would say a sentence or two about why you chose the baselines you chose and why you are hoping that they prove your different points.", "11) Figure 6 has no axis labels on the plot itself, which are definitely necessary.", "I also recommend not using red and green as the two colours as it will be hard for colour blind to distinguish between those two." ]
{ "criticism": 14, "example": 5, "importance_and_relevance": 6, "materials_and_methods": 19, "praise": 3, "presentation_and_reporting": 21, "results_and_discussion": 5, "suggestion_and_solution": 17, "total": 40 }
criticism ratio: 0.35
example ratio: 0.125
importance_and_relevance ratio: 0.15
materials_and_methods ratio: 0.475
praise ratio: 0.075
presentation_and_reporting ratio: 0.525
results_and_discussion ratio: 0.125
suggestion_and_solution ratio: 0.425
total: 40
has_criticism: 1
has_example: 1
has_importance_and_relevance: 1
has_materials_and_methods: 1
has_praise: 1
has_presentation_and_reporting: 1
has_results_and_discussion: 1
has_suggestion_and_solution: 1
reward_value: 2.25
reward_value_length_adjusted: -9.255391
length_penalty: 11.505391
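The review above reads the model as, in essence, a learned weight mask over the input features. A toy PyTorch stand-in for that reading (not the NAP architecture; the layer sizes and the softmax normalization are illustrative assumptions):

```python
import torch
import torch.nn as nn

class FeatureAttentionRegressor(nn.Module):
    """Toy stand-in for the 'weight mask over the inputs' reading of the
    reviewed model: a softmax attention vector over input features gates
    the input before a linear predictor. Not the NAP architecture."""

    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, n_features), nn.Softmax(dim=-1),
        )
        self.out = nn.Linear(n_features, 1)

    def forward(self, x):
        alpha = self.attn(x)               # (batch, n_features), rows sum to 1
        return self.out(alpha * x), alpha  # prediction plus its 'explanation'

model = FeatureAttentionRegressor(n_features=10)
x = torch.randn(4, 10)
y_hat, alpha = model(x)
print(y_hat.shape, alpha.sum(dim=-1))
```

The returned alpha is the per-feature mask the reviewer refers to; the interactive part of the paper would then correct such masks with human annotations rather than retraining the whole network.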
Neural Reverse Engineering of Stripped Binaries
We address the problem of reverse engineering of stripped executables which contain no debug information. This is a challenging problem because of the low amount of syntactic information available in stripped executables, and due to the diverse assembly code patterns arising from compiler optimizations. We present a novel approach for predicting procedure names in stripped executables. Our approach combines static analysis with encoder-decoder-based models. The main idea is to use static analysis to obtain enriched representations of API call sites; encode a set of sequences of these call sites by traversing the Control-Flow Graph; and finally, attend to the encoded sequences while decoding the target name. Our evaluation shows that our model performs predictions that are difficult and time consuming for humans, while improving on the state-of-the-art by 20%.
= Summary Nero, a method for naming procedures when decompiling binary executables is presented. It consists of a static analysis-based approach to extract a set of sequences of calls to (named) library methods present in the procedure under analysis, and a (mostly standard) seq2seq architecture to predict the procedure name as a sequence of subtokens. Experiments on real-world data show that the method performs well and outperforms some baselines. = Strong/Weak Points + The method seems to work in practice + The writing is reasonably clear, and it feels sufficiently precise to reproduce the results (given enough time) - Little novelty on modeling side (straightforward extension of seq2seq) and data extraction (straightforward application of control flow graphs) - The central novelty in the analysis (extracting *kinds*) seems to contribute very little to results (cf. Tab. 2) - Bad fit for ICLR, as the paper is written with a focus on the PL analysis side = Recommendation Overall, I believe that this is an application with little novelty, and few lessons to be learned for the ICLR audience - a PL or Software Engineering conference (such as VMCAI or FSE) may be a better fit for this work. Hence, I would tend to reject this in favour of more ML-heavy papers. = Minor Comments * It may make sense to compare to https://2019.ase-conferences.org/details/ase-2019-papers/49/DIRE-A-Neural-Approach-to-Decompiled-Identifier-Renaming, which was very recently published (at least I only saw this after the ICLR submission deadline closed, so this is no way meant as a negative) * page 3, second to last paragraph: *basic blocks in ,* but is undefined? * page 4, *Combined Example*: I found the writing slightly misleading here: the extracted path does not show that *(ii) getaddrinfo ... stores the results _in the previously initialized memory_* or *(iii) configure _it_ to be a TCP socket.* The extracted information provides no information about the flow of information or values, which seems to be a weakness of the method. * page 4, last par: - this should be up to ? * page 5, footnote 1: *each call site may participates* -> participate * page 8: *In our approach, there is no need to manually define rules for building the graph as in Allamanis et al. (2018)* - this is confusing to me, as I believe the rules for building the call graph (and abstracting kinds!) to be manually defined?
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNeural Reverse Engineering of Stripped Binaries\nWe address the problem of reverse engineering of stripped executables which contain no debug information. This is a challenging problem because of the low amount of syntactic information available in stripped executables, and due to the diverse assembly code patterns arising from compiler optimizations. We present a novel approach for predicting procedure names in stripped executables. Our approach combines static analysis with encoder-decoder-based models. The main idea is to use static analysis to obtain enriched representations of API call sites; encode a set of sequences of these call sites by traversing the Control-Flow Graph; and finally, attend to the encoded sequences while decoding the target name. Our evaluation shows that our model performs predictions that are difficult and time consuming for humans, while improving on the state-of-the-art by 20%.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n= Summary Nero, a method for naming procedures when decompiling binary executables is presented. It consists of a static analysis-based approach to extract a set of sequences of calls to (named) library methods present in the procedure under analysis, and a (mostly standard) seq2seq architecture to predict the procedure name as a sequence of subtokens. Experiments on real-world data show that the method performs well and outperforms some baselines. = Strong/Weak Points + The method seems to work in practice + The writing is reasonably clear, and it feels sufficiently precise to reproduce the results (given enough time) - Little novelty on modeling side (straightforward extension of seq2seq) and data extraction (straightforward application of control flow graphs) - The central novelty in the analysis (extracting *kinds*) seems to contribute very little to results (cf. Tab. 
2) - Bad fit for ICLR, as the paper is written with a focus on the PL analysis side = Recommendation Overall, I believe that this is an application with little novelty, and few lessons to be learned for the ICLR audience - a PL or Software Engineering conference (such as VMCAI or FSE) may be a better fit for this work. Hence, I would tend to reject this in favour of more ML-heavy papers. = Minor Comments * It may make sense to compare to https://2019.ase-conferences.org/details/ase-2019-papers/49/DIRE-A-Neural-Approach-to-Decompiled-Identifier-Renaming, which was very recently published (at least I only saw this after the ICLR submission deadline closed, so this is no way meant as a negative) * page 3, second to last paragraph: *basic blocks in ,* but is undefined? * page 4, *Combined Example*: I found the writing slightly misleading here: the extracted path does not show that *(ii) getaddrinfo ... stores the results _in the previously initialized memory_* or *(iii) configure _it_ to be a TCP socket.* The extracted information provides no information about the flow of information or values, which seems to be a weakness of the method. * page 4, last par: - this should be up to ? * page 5, footnote 1: *each call site may participates* -> participate * page 8: *In our approach, there is no need to manually define rules for building the graph as in Allamanis et al. (2018)* - this is confusing to me, as I believe the rules for building the call graph (and abstracting kinds!) to be manually defined?", "role": "assistant" } ]
[ "= Summary Nero, a method for naming procedures when decompiling binary executables is presented.", "It consists of a static analysis-based approach to extract a set of sequences of calls to (named) library methods present in the procedure under analysis, and a (mostly standard) seq2seq architecture to predict the procedure name as a sequence of subtokens.", "Experiments on real-world data show that the method performs well and outperforms some baselines.", "= Strong/Weak Points + The method seems to work in practice + The writing is reasonably clear, and it feels sufficiently precise to reproduce the results (given enough time) - Little novelty on modeling side (straightforward extension of seq2seq) and data extraction (straightforward application of control flow graphs) - The central novelty in the analysis (extracting *kinds*) seems to contribute very little to results (cf.", "Tab.", "2) - Bad fit for ICLR, as the paper is written with a focus on the PL analysis side = Recommendation Overall, I believe that this is an application with little novelty, and few lessons to be learned for the ICLR audience - a PL or Software Engineering conference (such as VMCAI or FSE) may be a better fit for this work.", "Hence, I would tend to reject this in favour of more ML-heavy papers.", "= Minor Comments * It may make sense to compare to https://2019.ase-conferences.org/details/ase-2019-papers/49/DIRE-A-Neural-Approach-to-Decompiled-Identifier-Renaming, which was very recently published (at least I only saw this after the ICLR submission deadline closed, so this is no way meant as a negative) * page 3, second to last paragraph: *basic blocks in ,* but is undefined?", "* page 4, *Combined Example*: I found the writing slightly misleading here: the extracted path does not show that *(ii) getaddrinfo ... stores the results _in the previously initialized memory_* or *(iii) configure _it_ to be a TCP socket.", "* The extracted information provides no information about the flow of information or values, which seems to be a weakness of the method.", "* page 4, last par: - this should be up to ?", "* page 5, footnote 1: *each call site may participates* -> participate * page 8: *In our approach, there is no need to manually define rules for building the graph as in Allamanis et al.", "(2018)* - this is confusing to me, as I believe the rules for building the call graph (and abstracting kinds!)", "to be manually defined?" ]
{ "criticism": 3, "example": 4, "importance_and_relevance": 1, "materials_and_methods": 8, "praise": 1, "presentation_and_reporting": 8, "results_and_discussion": 2, "suggestion_and_solution": 4, "total": 14 }
criticism ratio: 0.214286
example ratio: 0.285714
importance_and_relevance ratio: 0.071429
materials_and_methods ratio: 0.571429
praise ratio: 0.071429
presentation_and_reporting ratio: 0.571429
results_and_discussion ratio: 0.142857
suggestion_and_solution ratio: 0.285714
total: 14
has_criticism: 1
has_example: 1
has_importance_and_relevance: 1
has_materials_and_methods: 1
has_praise: 1
has_presentation_and_reporting: 1
has_results_and_discussion: 1
has_suggestion_and_solution: 1
reward_value: 2.214286
reward_value_length_adjusted: 2.198503
length_penalty: 0.015782
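The abstract above describes encoding sequences of API call sites obtained by traversing the control-flow graph. A small sketch of one way such sequences could be enumerated with networkx (the CFG, the call-site map, and the simple-path enumeration strategy are illustrative assumptions, not the authors' static analysis):

```python
import networkx as nx

def call_site_sequences(cfg, call_sites, entry, exits):
    """Enumerate sequences of named API call sites along simple
    entry-to-exit paths of a procedure's control-flow graph.

    cfg        : networkx.DiGraph over basic-block ids.
    call_sites : dict mapping block id -> list of API names called there.
    """
    sequences = set()
    for exit_block in exits:
        for path in nx.all_simple_paths(cfg, entry, exit_block):
            seq = tuple(name for block in path for name in call_sites.get(block, []))
            if seq:
                sequences.add(seq)
    return sequences

# Toy CFG: entry branches to A or B, both fall through to exit.
g = nx.DiGraph([("entry", "A"), ("entry", "B"), ("A", "exit"), ("B", "exit")])
calls = {"A": ["socket", "connect"], "B": ["getaddrinfo"], "exit": ["close"]}
print(call_site_sequences(g, calls, "entry", ["exit"]))
```

Each resulting tuple of API names would then be fed, as a token sequence, to the encoder-decoder model that predicts the procedure name.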
ADASAMPLE: ADAPTIVE SAMPLING OF HARD POSITIVES FOR DESCRIPTOR LEARNING
Triplet loss is commonly used in descriptor learning, where the performance heavily relies on mining triplets. Typical solution to that is first picking pairs of intra-class patches (positives) from the dataset to form batches, and then selecting in-batch negatives to construct triplets. For high-informativeness triplet collection, researchers mainly focus on mining hard negatives in the second stage, while they pay relatively less attention to constructing informative batches, i.e., matching pairs are often randomly sampled from the dataset. To address this issue, we propose AdaSample, an adaptive and online batch sampler, in this paper. Specifically, we sample positives based on their informativeness, and formulate our hardness-aware positive mining pipeline within a novel maximum loss minimization training protocol. The efficacy of the proposed method is demonstrated in several standard benchmarks, in which it results in a significant and consistent performance gain on top of the existing strong baselines. The source code and pretrained model will be released upon acceptance.
The paper proposes to make two modifications in hard negative mining procedure for descriptor learning: - instead of selecting hardest samples in a minibatch it proposes to sample proportionally to distance between descriptors - gradient wrt model parameters is weighted inverse proportionally to the distance The authors attempt to analyze the method theoretically and evaluate on two descriptor learning benchmarks, UBC phototour 2011 and HPatches 2017. I propose reject mainly for the following reasons: 1) dataset selection and quality of experimental evaluation 2) the writing (presentation of the results) Regarding (1), my main concern is on statistical significance of presented experiments. The authors conduct each experiment once and do not take into account variance coming from neural net training. For example, in table 2 all numbers seem to be very close so it is not possible to conclude anything. Same is true for other experiments, one way to improve would be to do 10-fold cross validation or at least train 5 network with different random seed and provide mean +- std of the results. Then, I think UBC results are not indicative of performance since the dataset is small and very old at this point, so I disagree with the claim that improvement of ~0.1% is of significant margin. Improvement on HPatches seems to be slightly more significant, but it is not clear wether the authors used comparable networks in terms of number of parameters in the network and descriptor size. Regarding (2), the way the paper is written is very confusing and should be improved. The attempts to have a theoretical derivation of the proposed method are welcome, but should not precede the actual explanation of the method. I suggest to explain clearly the modifications to sampling strategy, and only then relate them to informativeness intuition. Further, the improvement over HardNet is not clear. The two proposed modifications are not tested separately. To have a proper ablation study, the authors could sample negatives proportionally to descriptor distance and remove gradient weighting, and vice versa, to sample hardest-in-batch and only add gradient weighting. I also disagree that hard negative mining is *widely used in object detection and other vision tasks* (quoting the authors). It used to be common in metric learning for face recognition, as the authors cite FaceNet 2015, but has been long abandoned in favor of SphereFace-like approaches. Same is true for object detection, where Fast R-CNN and Mask R-CNN without HNM are widely used.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nADASAMPLE: ADAPTIVE SAMPLING OF HARD POSITIVES FOR DESCRIPTOR LEARNING\nTriplet loss is commonly used in descriptor learning, where the performance heavily relies on mining triplets. Typical solution to that is first picking pairs of intra-class patches (positives) from the dataset to form batches, and then selecting in-batch negatives to construct triplets. For high-informativeness triplet collection, researchers mainly focus on mining hard negatives in the second stage, while they pay relatively less attention to constructing informative batches, i.e., matching pairs are often randomly sampled from the dataset. To address this issue, we propose AdaSample, an adaptive and online batch sampler, in this paper. Specifically, we sample positives based on their informativeness, and formulate our hardness-aware positive mining pipeline within a novel maximum loss minimization training protocol. The efficacy of the proposed method is demonstrated in several standard benchmarks, in which it results in a significant and consistent performance gain on top of the existing strong baselines. The source code and pretrained model will be released upon acceptance.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes to make two modifications in hard negative mining procedure for descriptor learning: - instead of selecting hardest samples in a minibatch it proposes to sample proportionally to distance between descriptors - gradient wrt model parameters is weighted inverse proportionally to the distance The authors attempt to analyze the method theoretically and evaluate on two descriptor learning benchmarks, UBC phototour 2011 and HPatches 2017. I propose reject mainly for the following reasons: 1) dataset selection and quality of experimental evaluation 2) the writing (presentation of the results) Regarding (1), my main concern is on statistical significance of presented experiments. The authors conduct each experiment once and do not take into account variance coming from neural net training. 
For example, in table 2 all numbers seem to be very close so it is not possible to conclude anything. Same is true for other experiments, one way to improve would be to do 10-fold cross validation or at least train 5 network with different random seed and provide mean +- std of the results. Then, I think UBC results are not indicative of performance since the dataset is small and very old at this point, so I disagree with the claim that improvement of ~0.1% is of significant margin. Improvement on HPatches seems to be slightly more significant, but it is not clear wether the authors used comparable networks in terms of number of parameters in the network and descriptor size. Regarding (2), the way the paper is written is very confusing and should be improved. The attempts to have a theoretical derivation of the proposed method are welcome, but should not precede the actual explanation of the method. I suggest to explain clearly the modifications to sampling strategy, and only then relate them to informativeness intuition. Further, the improvement over HardNet is not clear. The two proposed modifications are not tested separately. To have a proper ablation study, the authors could sample negatives proportionally to descriptor distance and remove gradient weighting, and vice versa, to sample hardest-in-batch and only add gradient weighting. I also disagree that hard negative mining is *widely used in object detection and other vision tasks* (quoting the authors). It used to be common in metric learning for face recognition, as the authors cite FaceNet 2015, but has been long abandoned in favor of SphereFace-like approaches. Same is true for object detection, where Fast R-CNN and Mask R-CNN without HNM are widely used.", "role": "assistant" } ]
[ "The paper proposes to make two modifications in hard negative mining procedure for descriptor learning: - instead of selecting hardest samples in a minibatch it proposes to sample proportionally to distance between descriptors - gradient wrt model parameters is weighted inverse proportionally to the distance The authors attempt to analyze the method theoretically and evaluate on two descriptor learning benchmarks, UBC phototour 2011 and HPatches 2017.", "I propose reject mainly for the following reasons: 1) dataset selection and quality of experimental evaluation 2) the writing (presentation of the results) Regarding (1), my main concern is on statistical significance of presented experiments.", "The authors conduct each experiment once and do not take into account variance coming from neural net training.", "For example, in table 2 all numbers seem to be very close so it is not possible to conclude anything.", "Same is true for other experiments, one way to improve would be to do 10-fold cross validation or at least train 5 network with different random seed and provide mean +- std of the results.", "Then, I think UBC results are not indicative of performance since the dataset is small and very old at this point, so I disagree with the claim that improvement of ~0.1% is of significant margin.", "Improvement on HPatches seems to be slightly more significant, but it is not clear wether the authors used comparable networks in terms of number of parameters in the network and descriptor size.", "Regarding (2), the way the paper is written is very confusing and should be improved.", "The attempts to have a theoretical derivation of the proposed method are welcome, but should not precede the actual explanation of the method.", "I suggest to explain clearly the modifications to sampling strategy, and only then relate them to informativeness intuition.", "Further, the improvement over HardNet is not clear.", "The two proposed modifications are not tested separately.", "To have a proper ablation study, the authors could sample negatives proportionally to descriptor distance and remove gradient weighting, and vice versa, to sample hardest-in-batch and only add gradient weighting.", "I also disagree that hard negative mining is *widely used in object detection and other vision tasks* (quoting the authors).", "It used to be common in metric learning for face recognition, as the authors cite FaceNet 2015, but has been long abandoned in favor of SphereFace-like approaches.", "Same is true for object detection, where Fast R-CNN and Mask R-CNN without HNM are widely used." ]
{ "criticism": 7, "example": 1, "importance_and_relevance": 1, "materials_and_methods": 13, "praise": 1, "presentation_and_reporting": 4, "results_and_discussion": 5, "suggestion_and_solution": 5, "total": 16 }
criticism ratio: 0.4375
example ratio: 0.0625
importance_and_relevance ratio: 0.0625
materials_and_methods ratio: 0.8125
praise ratio: 0.0625
presentation_and_reporting ratio: 0.25
results_and_discussion ratio: 0.3125
suggestion_and_solution ratio: 0.3125
total: 16
has_criticism: 1
has_example: 1
has_importance_and_relevance: 1
has_materials_and_methods: 1
has_praise: 1
has_presentation_and_reporting: 1
has_results_and_discussion: 1
has_suggestion_and_solution: 1
reward_value: 2.3125
reward_value_length_adjusted: 2.170458
length_penalty: 0.142042
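The abstract above proposes sampling positive pairs by their informativeness rather than uniformly. A rough NumPy sketch of one distance-proportional sampler in that spirit (the probability law and the descriptors are illustrative; the paper's exact AdaSample distribution and its hardness-aware gradient weighting are not reproduced here):

```python
import numpy as np

def sample_positive_pairs(anchor_desc, positive_desc, batch_size, rng=None):
    """Choose which matching (anchor, positive) pairs enter the next batch,
    with probability proportional to their current descriptor distance --
    a crude 'harder pairs are more informative' heuristic.

    anchor_desc, positive_desc : (n_pairs, d) arrays of L2-normalized
                                 descriptors for the two patches of each pair.
    """
    rng = rng or np.random.default_rng()
    dist = np.linalg.norm(anchor_desc - positive_desc, axis=1)
    probs = dist / dist.sum()
    return rng.choice(len(dist), size=batch_size, replace=False, p=probs)

# Toy usage with synthetic descriptors.
rng = np.random.default_rng(0)
a = rng.normal(size=(100, 128)); a /= np.linalg.norm(a, axis=1, keepdims=True)
p = a + 0.05 * rng.normal(size=a.shape); p /= np.linalg.norm(p, axis=1, keepdims=True)
print(sample_positive_pairs(a, p, batch_size=16, rng=rng))
```

The sampled indices would form the batch from which in-batch hard negatives are then mined, as in the HardNet-style pipeline the reviewer compares against.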
Increasing batch size through instance repetition improves generalization
Large-batch SGD is important for scaling training of deep neural networks. However, without fine-tuning hyperparameter schedules, the generalization of the model may be hampered. We propose to use batch augmentation: replicating instances of samples within the same batch with different data augmentations. Batch augmentation acts as a regularizer and an accelerator, increasing both generalization and performance scaling for a fixed budget of optimization steps. We analyze the effect of batch augmentation on gradient variance and show that it empirically improves convergence for a wide variety of networks and datasets. Our results show that batch augmentation reduces the number of necessary SGD updates to achieve the same accuracy as the state-of-the-art. Overall, this simple yet effective method enables faster training and better generalization by allowing more computational resources to be used concurrently.
The paper proposes a very simple strategy to reduce the variance of a batch of training data. It applies data augmentation operations to expand a training example, so that the mini-batch solely consists of the variants of this example. Since the correlations among examples within the batch is stronger than those in a batch with different examples, the gradient has less variance. While it can reduce the gradient variance during training, which leads to faster convergence, it also has better generalization performance. This is confirmed by empirical studies on difference datasets with a variety of neural networks, such lstm, transformer and convnet. Overall, this seems to be a simple yet effective method to improve the convergence and generalization at the same time, which is orthogonal to existing approaches. It might be better to try this method and improve the state-of-the-art on at least one dateset.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nIncreasing batch size through instance repetition improves generalization\nLarge-batch SGD is important for scaling training of deep neural networks. However, without fine-tuning hyperparameter schedules, the generalization of the model may be hampered. We propose to use batch augmentation: replicating instances of samples within the same batch with different data augmentations. Batch augmentation acts as a regularizer and an accelerator, increasing both generalization and performance scaling for a fixed budget of optimization steps. We analyze the effect of batch augmentation on gradient variance and show that it empirically improves convergence for a wide variety of networks and datasets. Our results show that batch augmentation reduces the number of necessary SGD updates to achieve the same accuracy as the state-of-the-art. Overall, this simple yet effective method enables faster training and better generalization by allowing more computational resources to be used concurrently.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes a very simple strategy to reduce the variance of a batch of training data. It applies data augmentation operations to expand a training example, so that the mini-batch solely consists of the variants of this example. Since the correlations among examples within the batch is stronger than those in a batch with different examples, the gradient has less variance. While it can reduce the gradient variance during training, which leads to faster convergence, it also has better generalization performance. This is confirmed by empirical studies on difference datasets with a variety of neural networks, such lstm, transformer and convnet. Overall, this seems to be a simple yet effective method to improve the convergence and generalization at the same time, which is orthogonal to existing approaches. It might be better to try this method and improve the state-of-the-art on at least one dateset.", "role": "assistant" } ]
[ "The paper proposes a very simple strategy to reduce the variance of a batch of training data.", "It applies data augmentation operations to expand a training example, so that the mini-batch solely consists of the variants of this example.", "Since the correlations among examples within the batch is stronger than those in a batch with different examples, the gradient has less variance.", "While it can reduce the gradient variance during training, which leads to faster convergence, it also has better generalization performance.", "This is confirmed by empirical studies on difference datasets with a variety of neural networks, such lstm, transformer and convnet.", "Overall, this seems to be a simple yet effective method to improve the convergence and generalization at the same time, which is orthogonal to existing approaches.", "It might be better to try this method and improve the state-of-the-art on at least one dateset." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 7, "praise": 1, "presentation_and_reporting": 0, "results_and_discussion": 4, "suggestion_and_solution": 2, "total": 7 }
criticism ratio: 0
example ratio: 0
importance_and_relevance ratio: 0.142857
materials_and_methods ratio: 1
praise ratio: 0.142857
presentation_and_reporting ratio: 0
results_and_discussion ratio: 0.571429
suggestion_and_solution ratio: 0.285714
total: 7
has_criticism: 0
has_example: 0
has_importance_and_relevance: 1
has_materials_and_methods: 1
has_praise: 1
has_presentation_and_reporting: 0
has_results_and_discussion: 1
has_suggestion_and_solution: 1
reward_value: 2.142857
reward_value_length_adjusted: 1.57469
length_penalty: 0.568167
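The abstract above defines batch augmentation as replicating each sample within a batch under different random augmentations. A minimal PyTorch sketch of that replication step (the additive-noise transform is a placeholder for any stochastic augmentation; batch shapes are illustrative):

```python
import torch

def batch_augment(batch, transform, m):
    """Batch augmentation: every sample in `batch` (shape (B, ...)) appears
    M times in the output batch, each copy passed through an independent
    draw of the stochastic `transform`. Returns a tensor of shape (M*B, ...)."""
    copies = [torch.stack([transform(x) for x in batch]) for _ in range(m)]
    return torch.cat(copies, dim=0)

# Toy usage: additive Gaussian noise stands in for a real augmentation.
noise = lambda x: x + 0.1 * torch.randn_like(x)
x = torch.randn(8, 3, 32, 32)
big_batch = batch_augment(x, noise, m=4)
print(big_batch.shape)  # torch.Size([32, 3, 32, 32])
```

Labels are simply repeated M times to match, so the optimizer sees a larger, more correlated batch for the same number of SGD steps, which is the effect the gradient-variance analysis in the record above examines.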
Increasing batch size through instance repetition improves generalization
Large-batch SGD is important for scaling training of deep neural networks. However, without fine-tuning hyperparameter schedules, the generalization of the model may be hampered. We propose to use batch augmentation: replicating instances of samples within the same batch with different data augmentations. Batch augmentation acts as a regularizer and an accelerator, increasing both generalization and performance scaling for a fixed budget of optimization steps. We analyze the effect of batch augmentation on gradient variance and show that it empirically improves convergence for a wide variety of networks and datasets. Our results show that batch augmentation reduces the number of necessary SGD updates to achieve the same accuracy as the state-of-the-art. Overall, this simple yet effective method enables faster training and better generalization by allowing more computational resources to be used concurrently.
This paper proposes increasing the size of the mini-batches by using existing data augmentation methods performed on the same samples of the mini-batch, called Batch Augmentation (BA). The authors claim that this technique provides better generalization. The improvement is justified by reduced variance of the gradient, therefore it might be relevant that the authors compare (computation resources/performances) with Variance Reduced Gradient baselines (see [1]) and analyze it empirically with a better proxy for it (see below). The authors use the L2 norm to empirically analyze the variance of gradients. As the above justification is an important claim, in my opinion using a better proxy for the variance could be useful: for example measuring the variance exactly as done in [1] or using the second-moment estimate as in [2]. As the idea is fairly simple, and the use case seems somewhat limited for a distributed setting, I could be wrong but, in my opinion, the results are not significantly improving: for example in Fig. 4a the baseline outperforms the validation error of BA in the early iterations and marginally down performs at the later iterations. Also, it is not clear to me if this is due to a poor choice of learning rate for the baseline relative to BA (in this case, could be due to using a larger learning rate then its optimal as the validation error is better in the early iterations). - For a more fair comparison, each plot could also compare with a baseline where the batch size is M*B using no data augmentation (e.g. in Fig.1,2,4). - Page 5, 2nd paragraph: the authors mention that increasing M continues the trend of a reduced validation error, whereas from Fig. 3b we see that increasing M starts to saturate the reduction of the validation error. - In my opinion, more detailed ablation studies could be useful. For example, if Adam is used, it would be interesting to see the average effective learning rates for the parameters throughout iterations. Also, a wall-clock comparison on sequential and on distributed settings would be useful. ---Minor--- - Page 4, Sec.2.1: what do the authors mean by “BA reduces variance less” - Page 5, Fig. 4b: the caption is inconsistent [1] On the Ineffectiveness of Variance Reduced Optimization for Deep Learning, A. Defazio and L. Bottou, 2019. [2] Reducing Noise in GAN Training with Variance Reduced Extragradient, T. Chavdarova, G. Gidel, F. Fleuret, S. Lacoste-Julien, 2019.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nIncreasing batch size through instance repetition improves generalization\nLarge-batch SGD is important for scaling training of deep neural networks. However, without fine-tuning hyperparameter schedules, the generalization of the model may be hampered. We propose to use batch augmentation: replicating instances of samples within the same batch with different data augmentations. Batch augmentation acts as a regularizer and an accelerator, increasing both generalization and performance scaling for a fixed budget of optimization steps. We analyze the effect of batch augmentation on gradient variance and show that it empirically improves convergence for a wide variety of networks and datasets. Our results show that batch augmentation reduces the number of necessary SGD updates to achieve the same accuracy as the state-of-the-art. Overall, this simple yet effective method enables faster training and better generalization by allowing more computational resources to be used concurrently.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes increasing the size of the mini-batches by using existing data augmentation methods performed on the same samples of the mini-batch, called Batch Augmentation (BA). The authors claim that this technique provides better generalization. The improvement is justified by reduced variance of the gradient, therefore it might be relevant that the authors compare (computation resources/performances) with Variance Reduced Gradient baselines (see [1]) and analyze it empirically with a better proxy for it (see below). The authors use the L2 norm to empirically analyze the variance of gradients. As the above justification is an important claim, in my opinion using a better proxy for the variance could be useful: for example measuring the variance exactly as done in [1] or using the second-moment estimate as in [2]. 
As the idea is fairly simple, and the use case seems somewhat limited for a distributed setting, I could be wrong but, in my opinion, the results are not significantly improving: for example in Fig. 4a the baseline outperforms the validation error of BA in the early iterations and marginally down performs at the later iterations. Also, it is not clear to me if this is due to a poor choice of learning rate for the baseline relative to BA (in this case, could be due to using a larger learning rate then its optimal as the validation error is better in the early iterations). - For a more fair comparison, each plot could also compare with a baseline where the batch size is M*B using no data augmentation (e.g. in Fig.1,2,4). - Page 5, 2nd paragraph: the authors mention that increasing M continues the trend of a reduced validation error, whereas from Fig. 3b we see that increasing M starts to saturate the reduction of the validation error. - In my opinion, more detailed ablation studies could be useful. For example, if Adam is used, it would be interesting to see the average effective learning rates for the parameters throughout iterations. Also, a wall-clock comparison on sequential and on distributed settings would be useful. ---Minor--- - Page 4, Sec.2.1: what do the authors mean by “BA reduces variance less” - Page 5, Fig. 4b: the caption is inconsistent [1] On the Ineffectiveness of Variance Reduced Optimization for Deep Learning, A. Defazio and L. Bottou, 2019. [2] Reducing Noise in GAN Training with Variance Reduced Extragradient, T. Chavdarova, G. Gidel, F. Fleuret, S. Lacoste-Julien, 2019.", "role": "assistant" } ]
[ "This paper proposes increasing the size of the mini-batches by using existing data augmentation methods performed on the same samples of the mini-batch, called Batch Augmentation (BA).", "The authors claim that this technique provides better generalization.", "The improvement is justified by reduced variance of the gradient, therefore it might be relevant that the authors compare (computation resources/performances) with Variance Reduced Gradient baselines (see [1]) and analyze it empirically with a better proxy for it (see below).", "The authors use the L2 norm to empirically analyze the variance of gradients.", "As the above justification is an important claim, in my opinion using a better proxy for the variance could be useful: for example measuring the variance exactly as done in [1] or using the second-moment estimate as in [2].", "As the idea is fairly simple, and the use case seems somewhat limited for a distributed setting, I could be wrong but, in my opinion, the results are not significantly improving: for example in Fig.", "4a the baseline outperforms the validation error of BA in the early iterations and marginally down performs at the later iterations.", "Also, it is not clear to me if this is due to a poor choice of learning rate for the baseline relative to BA (in this case, could be due to using a larger learning rate then its optimal as the validation error is better in the early iterations).", "- For a more fair comparison, each plot could also compare with a baseline where the batch size is M*B using no data augmentation (e.g.", "in Fig.1,2,4).", "- Page 5, 2nd paragraph: the authors mention that increasing M continues the trend of a reduced validation error, whereas from Fig.", "3b we see that increasing M starts to saturate the reduction of the validation error.", "- In my opinion, more detailed ablation studies could be useful.", "For example, if Adam is used, it would be interesting to see the average effective learning rates for the parameters throughout iterations.", "Also, a wall-clock comparison on sequential and on distributed settings would be useful.", "---Minor--- - Page 4, Sec.2.1: what do the authors mean by “BA reduces variance less” - Page 5, Fig.", "4b: the caption is inconsistent [1] On the Ineffectiveness of Variance Reduced Optimization for Deep Learning, A. Defazio and L. Bottou, 2019.", "[2] Reducing Noise in GAN Training with Variance Reduced Extragradient, T. Chavdarova, G. Gidel, F. Fleuret, S. Lacoste-Julien, 2019." ]
{ "criticism": 3, "example": 4, "importance_and_relevance": 2, "materials_and_methods": 16, "praise": 1, "presentation_and_reporting": 3, "results_and_discussion": 9, "suggestion_and_solution": 7, "total": 18 }
0.166667
0.222222
0.111111
0.888889
0.055556
0.166667
0.5
0.388889
18
1
1
1
1
1
1
1
1
2.5
2.105439
0.394561
Characterizing convolutional neural networks with one-pixel signature
We propose a new representation, one-pixel signature, that can be used to reveal the characteristics of the convolution neural networks (CNNs). Here, each CNN classifier is associated with a signature that is created by generating, pixel-by-pixel, an adversarial value that is the result of the largest change to the class prediction. The one-pixel signature is agnostic to the design choices of CNN architectures such as type, depth, activation function, and how they were trained. It can be computed efficiently for a black-box classifier without accessing the network parameters. Classic networks such as LetNet, VGG, AlexNet, and ResNet demonstrate different characteristics in their signature images. For application, we focus on the classifier backdoor detection problem where a CNN classifier has been maliciously inserted with an unknown Trojan. We show the effectiveness of the one-pixel signature in detecting backdoored CNN. Our proposed one-pixel signature representation is general and it can be applied in problems where discriminative classifiers, particularly neural network based, are to be characterized.
The paper describes a systemic approach for detecting the type of a classifier based on the computation and use of a per-pixel signature image feature. It claims that a specific classifier, e.g., a VGG, or other type of CNN, will produce a different signature image from another classifier, or from the same classifier when trained with manipulated data, e.g., by an unknown Trojan. Some results are shown that illustrate how this signature feature can be used to differentiate between 4 different CNNs, or when data are manipulated. Although the paper has some merit, in creating and using the specific signature, there is no deeper analysis of why and how this feature can differentiate between classifiers, or of the dependence on the problem under investigation. It cannot be claimed to produce a robust feature that can be applied to quite diverse cases and environments (the inability to handle, for example all-to-all attacks is admitted in the paper). Moreover, the title is misleading, since the method does not necessarily refers to CNNs, but in general to some classifier.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nCharacterizing convolutional neural networks with one-pixel signature\nWe propose a new representation, one-pixel signature, that can be used to reveal the characteristics of the convolution neural networks (CNNs). Here, each CNN classifier is associated with a signature that is created by generating, pixel-by-pixel, an adversarial value that is the result of the largest change to the class prediction. The one-pixel signature is agnostic to the design choices of CNN architectures such as type, depth, activation function, and how they were trained. It can be computed efficiently for a black-box classifier without accessing the network parameters. Classic networks such as LetNet, VGG, AlexNet, and ResNet demonstrate different characteristics in their signature images. For application, we focus on the classifier backdoor detection problem where a CNN classifier has been maliciously inserted with an unknown Trojan. We show the effectiveness of the one-pixel signature in detecting backdoored CNN. Our proposed one-pixel signature representation is general and it can be applied in problems where discriminative classifiers, particularly neural network based, are to be characterized.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper describes a systemic approach for detecting the type of a classifier based on the computation and use of a per-pixel signature image feature. It claims that a specific classifier, e.g., a VGG, or other type of CNN, will produce a different signature image from another classifier, or from the same classifier when trained with manipulated data, e.g., by an unknown Trojan. Some results are shown that illustrate how this signature feature can be used to differentiate between 4 different CNNs, or when data are manipulated. Although the paper has some merit, in creating and using the specific signature, there is no deeper analysis of why and how this feature can differentiate between classifiers, or of the dependence on the problem under investigation. 
It cannot be claimed to produce a robust feature that can be applied to quite diverse cases and environments (the inability to handle, for example all-to-all attacks is admitted in the paper). Moreover, the title is misleading, since the method does not necessarily refers to CNNs, but in general to some classifier.", "role": "assistant" } ]
[ "The paper describes a systemic approach for detecting the type of a classifier based on the computation and use of a per-pixel signature image feature.", "It claims that a specific classifier, e.g., a VGG, or other type of CNN, will produce a different signature image from another classifier, or from the same classifier when trained with manipulated data, e.g., by an unknown Trojan.", "Some results are shown that illustrate how this signature feature can be used to differentiate between 4 different CNNs, or when data are manipulated.", "Although the paper has some merit, in creating and using the specific signature, there is no deeper analysis of why and how this feature can differentiate between classifiers, or of the dependence on the problem under investigation.", "It cannot be claimed to produce a robust feature that can be applied to quite diverse cases and environments (the inability to handle, for example all-to-all attacks is admitted in the paper).", "Moreover, the title is misleading, since the method does not necessarily refers to CNNs, but in general to some classifier." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 6, "praise": 1, "presentation_and_reporting": 1, "results_and_discussion": 2, "suggestion_and_solution": 0, "total": 6 }
0.333333
0
0.166667
1
0.166667
0.166667
0.333333
0
6
1
0
1
1
1
1
1
0
2.166667
1.393328
0.773339
Variational inference of latent hierarchical dynamical systems in neuroscience: an application to calcium imaging data
A key problem in neuroscience, and life sciences more generally, is that data is generated by a hierarchy of dynamical systems. One example of this is in \textit{in-vivo} calcium imaging data, where data is generated by a lower-order dynamical system governing calcium flux in neurons, which itself is driven by a higher-order dynamical system of neural computation. Ideally, life scientists would be able to infer the dynamics of both the lower-order systems and the higher-order systems, but this is difficult in high-dimensional regimes. A recent approach using sequential variational auto-encoders demonstrated it was possible to learn the latent dynamics of a single dynamical system for computations during reaching behaviour in the brain, using spiking data modelled as a Poisson process. Here we extend this approach using a ladder method to infer a hierarchy of dynamical systems, allowing us to capture calcium dynamics as well as neural computation. In this approach, spiking events drive lower-order calcium dynamics, and are themselves controlled by a higher-order latent dynamical system. We generate synthetic data by generating firing rates, sampling spike trains, and converting spike trains to fluorescence transients, from two dynamical systems that have been used as key benchmarks in recent literature: a Lorenz attractor, and a chaotic recurrent neural network. We show that our model is better able to reconstruct Lorenz dynamics from fluorescence data than competing methods. However, though our model can reconstruct underlying spike rates and calcium transients from the chaotic neural network well, it does not perform as well at reconstructing firing rates as basic techniques for inferring spikes from calcium data. These results demonstrate that VLAEs are a promising approach for modelling hierarchical dynamical systems data in the life sciences, but that inferring the dynamics of lower-order systems can potentially be better achieved with simpler methods.
The paper contributes to the space of hierarchical latent modeling with applications to neuroscience as follows: 1) A model combining variational ladder auto-encoders with RNNs is presented; 2) An inference method based on ELBO maximization is presented; 3) Results on synthetic different data sets show good performance although firing rates are reconstructed better by another approach on a key task (firing rate reconstruction). The paper is fairly well written but it seems that the last paragraph and conclusion have been written with more haste (also a period is missing at the beginning of the last sentence of the first paragraph in the conclusion). I am a bit skeptical about whether to accept the paper. Although hierarchical latent modeling is interesting and producing models and training approaches that are modular and composable is valuable, it is not clear here that the model under consideration really provides strong improvements on the state of the art. Furthermore, it appears that the set of benchmarks being considered is narrowly focused on a subfield of neuroscience without providing clear wins on any task involving actual data. As the theoretical contribution of the paper is rather weak and the empirical results are not very strong, I recommend a weak reject. I believe that a stronger experimental section could help the authors increase the impact of their paper
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nVariational inference of latent hierarchical dynamical systems in neuroscience: an application to calcium imaging data\nA key problem in neuroscience, and life sciences more generally, is that data is generated by a hierarchy of dynamical systems. One example of this is in \textit{in-vivo} calcium imaging data, where data is generated by a lower-order dynamical system governing calcium flux in neurons, which itself is driven by a higher-order dynamical system of neural computation. Ideally, life scientists would be able to infer the dynamics of both the lower-order systems and the higher-order systems, but this is difficult in high-dimensional regimes. A recent approach using sequential variational auto-encoders demonstrated it was possible to learn the latent dynamics of a single dynamical system for computations during reaching behaviour in the brain, using spiking data modelled as a Poisson process. Here we extend this approach using a ladder method to infer a hierarchy of dynamical systems, allowing us to capture calcium dynamics as well as neural computation. In this approach, spiking events drive lower-order calcium dynamics, and are themselves controlled by a higher-order latent dynamical system. We generate synthetic data by generating firing rates, sampling spike trains, and converting spike trains to fluorescence transients, from two dynamical systems that have been used as key benchmarks in recent literature: a Lorenz attractor, and a chaotic recurrent neural network. We show that our model is better able to reconstruct Lorenz dynamics from fluorescence data than competing methods. However, though our model can reconstruct underlying spike rates and calcium transients from the chaotic neural network well, it does not perform as well at reconstructing firing rates as basic techniques for inferring spikes from calcium data. These results demonstrate that VLAEs are a promising approach for modelling hierarchical dynamical systems data in the life sciences, but that inferring the dynamics of lower-order systems can potentially be better achieved with simpler methods.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. 
Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper contributes to the space of hierarchical latent modeling with applications to neuroscience as follows: 1) A model combining variational ladder auto-encoders with RNNs is presented; 2) An inference method based on ELBO maximization is presented; 3) Results on synthetic different data sets show good performance although firing rates are reconstructed better by another approach on a key task (firing rate reconstruction). The paper is fairly well written but it seems that the last paragraph and conclusion have been written with more haste (also a period is missing at the beginning of the last sentence of the first paragraph in the conclusion). I am a bit skeptical about whether to accept the paper. Although hierarchical latent modeling is interesting and producing models and training approaches that are modular and composable is valuable, it is not clear here that the model under consideration really provides strong improvements on the state of the art. Furthermore, it appears that the set of benchmarks being considered is narrowly focused on a subfield of neuroscience without providing clear wins on any task involving actual data. As the theoretical contribution of the paper is rather weak and the empirical results are not very strong, I recommend a weak reject. I believe that a stronger experimental section could help the authors increase the impact of their paper", "role": "assistant" } ]
[ "The paper contributes to the space of hierarchical latent modeling with applications to neuroscience as follows: 1) A model combining variational ladder auto-encoders with RNNs is presented; 2) An inference method based on ELBO maximization is presented; 3) Results on synthetic different data sets show good performance although firing rates are reconstructed better by another approach on a key task (firing rate reconstruction).", "The paper is fairly well written but it seems that the last paragraph and conclusion have been written with more haste (also a period is missing at the beginning of the last sentence of the first paragraph in the conclusion).", "I am a bit skeptical about whether to accept the paper.", "Although hierarchical latent modeling is interesting and producing models and training approaches that are modular and composable is valuable, it is not clear here that the model under consideration really provides strong improvements on the state of the art.", "Furthermore, it appears that the set of benchmarks being considered is narrowly focused on a subfield of neuroscience without providing clear wins on any task involving actual data.", "As the theoretical contribution of the paper is rather weak and the empirical results are not very strong, I recommend a weak reject.", "I believe that a stronger experimental section could help the authors increase the impact of their paper" ]
{ "criticism": 5, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 4, "praise": 3, "presentation_and_reporting": 1, "results_and_discussion": 3, "suggestion_and_solution": 2, "total": 7 }
0.714286
0
0.285714
0.571429
0.428571
0.142857
0.428571
0.285714
7
1
0
1
1
1
1
1
1
2.857143
2.288975
0.568167
Domain Adaptation Through Label Propagation: Learning Clustered and Aligned Features
The difficulty of obtaining sufficient labeled data for supervised learning has motivated domain adaptation, in which a classifier is trained in one domain, source domain, but operates in another, target domain. Reducing domain discrepancy has improved the performance, but it is hampered by the embedded features that do not form clearly separable and aligned clusters. We address this issue by propagating labels using a manifold structure, and by enforcing cycle consistency to align the clusters of features in each domain more closely. Specifically, we prove that cycle consistency leads the embedded features distant from all but one clusters if the source domain is ideally clustered. We additionally utilize more information from approximated local manifold and pursue local manifold consistency for more improvement. Results for various domain adaptation scenarios show tighter clustering and an improvement in classification accuracy.
This paper proposes an interesting algorithm towards domain adaptation that combines graph learning and label propagation along with cycle consistency. The proposed model is simple and interesting making clever use of labels in the source domain. While the theory is motivated well and seems convincing, I am not entirely sure of the empirical results. There are no significant improvements of the proposed model over the baselines considered except on STFL-CIFAR data set. An analysis of why this improvement is observed would be good. I also wonder on how did the cluster quality on the target domain compare with that obtained by the AssocDA baseline. Also coming to the parameter sigma in the Gaussian similarity function, it would be nice to see the values of sigma reported in either the body of the paper or in a supplement. I will admit I have not thoroughly checked the math in Theorem 1, but the supporting mathematical equations seem plausible.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDomain Adaptation Through Label Propagation: Learning Clustered and Aligned Features\nThe difficulty of obtaining sufficient labeled data for supervised learning has motivated domain adaptation, in which a classifier is trained in one domain, source domain, but operates in another, target domain. Reducing domain discrepancy has improved the performance, but it is hampered by the embedded features that do not form clearly separable and aligned clusters. We address this issue by propagating labels using a manifold structure, and by enforcing cycle consistency to align the clusters of features in each domain more closely. Specifically, we prove that cycle consistency leads the embedded features distant from all but one clusters if the source domain is ideally clustered. We additionally utilize more information from approximated local manifold and pursue local manifold consistency for more improvement. Results for various domain adaptation scenarios show tighter clustering and an improvement in classification accuracy.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes an interesting algorithm towards domain adaptation that combines graph learning and label propagation along with cycle consistency. The proposed model is simple and interesting making clever use of labels in the source domain. While the theory is motivated well and seems convincing, I am not entirely sure of the empirical results. There are no significant improvements of the proposed model over the baselines considered except on STFL-CIFAR data set. An analysis of why this improvement is observed would be good. I also wonder on how did the cluster quality on the target domain compare with that obtained by the AssocDA baseline. Also coming to the parameter sigma in the Gaussian similarity function, it would be nice to see the values of sigma reported in either the body of the paper or in a supplement. 
I will admit I have not thoroughly checked the math in Theorem 1, but the supporting mathematical equations seem plausible.", "role": "assistant" } ]
[ "This paper proposes an interesting algorithm towards domain adaptation that combines graph learning and label propagation along with cycle consistency.", "The proposed model is simple and interesting making clever use of labels in the source domain.", "While the theory is motivated well and seems convincing, I am not entirely sure of the empirical results.", "There are no significant improvements of the proposed model over the baselines considered except on STFL-CIFAR data set.", "An analysis of why this improvement is observed would be good.", "I also wonder on how did the cluster quality on the target domain compare with that obtained by the AssocDA baseline.", "Also coming to the parameter sigma in the Gaussian similarity function, it would be nice to see the values of sigma reported in either the body of the paper or in a supplement.", "I will admit I have not thoroughly checked the math in Theorem 1, but the supporting mathematical equations seem plausible." ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 6, "praise": 4, "presentation_and_reporting": 2, "results_and_discussion": 2, "suggestion_and_solution": 2, "total": 8 }
0.375
0
0.25
0.75
0.5
0.25
0.25
0.25
8
1
0
1
1
1
1
1
1
2.625
2.230439
0.394561
MetaPoison: Learning to craft adversarial poisoning examples via meta-learning
We consider a new class of \emph{data poisoning} attacks on neural networks, in which the attacker takes control of a model by making small perturbations to a subset of its training data. We formulate the task of finding poisons as a bi-level optimization problem, which can be solved using methods borrowed from the meta-learning community. Unlike previous poisoning strategies, the meta-poisoning can poison networks that are trained from scratch using an initialization unknown to the attacker and transfer across hyperparameters. Further we show that our attacks are more versatile: they can cause misclassification of the target image into an arbitrarily chosen class. Our results show above 50% attack success rate when poisoning just 3-10% of the training dataset.
This paper introduces a novel class of clean-label attacks of deep model training procedures (from scratch) using a meta-learning inspired approach. Pros: 1) Transferability across different training strategies is a particularly important novelty of this work, I think. 2) I find it particularly impressive that a clean-label attack is shown to work when training from scratch, which reduces the amount of knowledge about the model needed by the attacker. 3) The meta-learning formulation presented here may prove to be the starting point of many future investigations. 4) Multi-targeted attacks and third-party attacks appear to be novel contributions of this paper, but there is too little context given to judge that from the text alone. Cons: 1) The figures are not too informative. Could you please provide more examples in an appendix? 2) The writing could be much improved, especially for making the paper more accessible to a wider audience. 3) Details of the meta-learning procedure and related works are almost non-existent. While the authors provide a discussion of what*s possible and challenging for current adversarial attack methods, many details and citations are missing. This makes the many comments and explanations in the text obscure, although they may be correct and insightful. Could you please work on the writing and assume that I know nothing of the SOTA in adversarial attacks? I am willing to read the paper again. 4) There is far too little detail about experimental procedures. Please provide a detailed discussion with references to established procedure or previous works! - What are the meta-training/meta-validation and meta-testing sets? - What is a reasonable baseline you could compare against? If nothing else, please argue for that in the paper and perhaps provide an ablation of your method as a baseline. As far as I can tell, the paper may be introducing an approach with potential, but many clarifications are required. The paper is not self-contained enough to even attempt to guess how important or novel the presented results are. Hence, all I can do is judge the little I have managed to understand from the paper. While this may be enough for a workshop, I cannot at this point recommend acceptance, primarily because of the writing.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMetaPoison: Learning to craft adversarial poisoning examples via meta-learning\nWe consider a new class of emph{data poisoning} attacks on neural networks, in which the attacker takes control of a model by making small perturbations to a subset of its training data. We formulate the task of finding poisons as a bi-level optimization problem, which can be solved using methods borrowed from the meta-learning community. Unlike previous poisoning strategies, the meta-poisoning can poison networks that are trained from scratch using an initialization unknown to the attacker and transfer across hyperparameters. Further we show that our attacks are more versatile: they can cause misclassification of the target image into an arbitrarily chosen class. Our results show above 50% attack success rate when poisoning just 3-10% of the training dataset.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper introduces a novel class of clean-label attacks of deep model training procedures (from scratch) using a meta-learning inspired approach. Pros: 1) Transferability across different training strategies is a particularly important novelty of this work, I think. 2) I find it particularly impressive that a clean-label attack is shown to work when training from scratch, which reduces the amount of knowledge about the model needed by the attacker. 3) The meta-learning formulation presented here may prove to be the starting point of many future investigations. 4) Multi-targeted attacks and third-party attacks appear to be novel contributions of this paper, but there is too little context given to judge that from the text alone. Cons: 1) The figures are not too informative. Could you please provide more examples in an appendix? 2) The writing could be much improved, especially for making the paper more accessible to a wider audience. 3) Details of the meta-learning procedure and related works are almost non-existent. 
While the authors provide a discussion of what*s possible and challenging for current adversarial attack methods, many details and citations are missing. This makes the many comments and explanations in the text obscure, although they may be correct and insightful. Could you please work on the writing and assume that I know nothing of the SOTA in adversarial attacks? I am willing to read the paper again. 4) There is far too little detail about experimental procedures. Please provide a detailed discussion with references to established procedure or previous works! - What are the meta-training/meta-validation and meta-testing sets? - What is a reasonable baseline you could compare against? If nothing else, please argue for that in the paper and perhaps provide an ablation of your method as a baseline. As far as I can tell, the paper may be introducing an approach with potential, but many clarifications are required. The paper is not self-contained enough to even attempt to guess how important or novel the presented results are. Hence, all I can do is judge the little I have managed to understand from the paper. While this may be enough for a workshop, I cannot at this point recommend acceptance, primarily because of the writing.", "role": "assistant" } ]
[ "This paper introduces a novel class of clean-label attacks of deep model training procedures (from scratch) using a meta-learning inspired approach.", "Pros: 1) Transferability across different training strategies is a particularly important novelty of this work, I think.", "2) I find it particularly impressive that a clean-label attack is shown to work when training from scratch, which reduces the amount of knowledge about the model needed by the attacker.", "3) The meta-learning formulation presented here may prove to be the starting point of many future investigations.", "4) Multi-targeted attacks and third-party attacks appear to be novel contributions of this paper, but there is too little context given to judge that from the text alone.", "Cons: 1) The figures are not too informative.", "Could you please provide more examples in an appendix?", "2) The writing could be much improved, especially for making the paper more accessible to a wider audience.", "3) Details of the meta-learning procedure and related works are almost non-existent.", "While the authors provide a discussion of what*s possible and challenging for current adversarial attack methods, many details and citations are missing.", "This makes the many comments and explanations in the text obscure, although they may be correct and insightful.", "Could you please work on the writing and assume that I know nothing of the SOTA in adversarial attacks?", "I am willing to read the paper again.", "4) There is far too little detail about experimental procedures.", "Please provide a detailed discussion with references to established procedure or previous works!", "- What are the meta-training/meta-validation and meta-testing sets?", "- What is a reasonable baseline you could compare against?", "If nothing else, please argue for that in the paper and perhaps provide an ablation of your method as a baseline.", "As far as I can tell, the paper may be introducing an approach with potential, but many clarifications are required.", "The paper is not self-contained enough to even attempt to guess how important or novel the presented results are.", "Hence, all I can do is judge the little I have managed to understand from the paper.", "While this may be enough for a workshop, I cannot at this point recommend acceptance, primarily because of the writing." ]
{ "criticism": 8, "example": 0, "importance_and_relevance": 5, "materials_and_methods": 12, "praise": 2, "presentation_and_reporting": 12, "results_and_discussion": 2, "suggestion_and_solution": 7, "total": 22 }
0.363636
0
0.227273
0.545455
0.090909
0.545455
0.090909
0.318182
22
1
0
1
1
1
1
1
1
2.181818
0.903441
1.278377
The Blessing of Dimensionality: An Empirical Study of Generalization
The power of neural networks lies in their ability to generalize to unseen data, yet the underlying reasons for this phenomenon remain elusive. Numerous rigorous attempts have been made to explain generalization, but available bounds are still quite loose, and analysis does not always lead to true understanding. The goal of this work is to make generalization more intuitive. Using visualization methods, we discuss the mystery of generalization, the geometry of loss landscapes, and how the curse (or, rather, the blessing) of dimensionality causes optimizers to settle into minima that generalize well.
In this paper the authors aim to develop an intuition for why neural networks (NNs) generalize so well, despite often being overparametrized for the given problem. The paper is well written exploring a very interesting issue. It provides a good discussion of why generalization is puzzling, related work on the attempts to improve generalization bounds for NNs, and qualitative discussion of possibly answers to this problem. Despite this, we feel that a much more experimentation should be done to address some of the hypothesis in the paper. Also, it would have been helpful, if some of the results and figures were presented much earlier in the paper. What follows are the pros and cons / discussion points / styling suggestions the authors could choose to address: [pros] - Well-written - A good survey of related literature and concepts - Interesting visualizations [cons / points / styling] - The placement of Figure 1 seems arbitrary as you refer to it for the first time on Page 3. This would not be such a big problem; however, it divides the paragraph in two without need to. Could you perhaps place the figure and caption on the bottom of the page? - Is there a reference for “implicit regularization”? - On page 4, it would have been helpful to provide examples of “linking the intuitive arguments back to theory” - It would be helpful to provide more details on the filter-normalization scheme. - On page 4, could you explain what you mean by “wide margin classification is replaced with an implicit regulation (regularization - ?) that promotes the closely related notion of ‘flatness’ “. How is “flatness” encouraged as a prior? Through the choice of an appropriate loss function? Would it be perhaps more suited for a longer format, e.g. Journal of Machine Leaning Research?
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nThe Blessing of Dimensionality: An Empirical Study of Generalization\nThe power of neural networks lies in their ability to generalize to unseen data, yet the underlying reasons for this phenomenon remain elusive. Numerous rigorous attempts have been made to explain generalization, but available bounds are still quite loose, and analysis does not always lead to true understanding. The goal of this work is to make generalization more intuitive. Using visualization methods, we discuss the mystery of generalization, the geometry of loss landscapes, and how the curse (or, rather, the blessing) of dimensionality causes optimizers to settle into minima that generalize well.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nIn this paper the authors aim to develop an intuition for why neural networks (NNs) generalize so well, despite often being overparametrized for the given problem. The paper is well written exploring a very interesting issue. It provides a good discussion of why generalization is puzzling, related work on the attempts to improve generalization bounds for NNs, and qualitative discussion of possibly answers to this problem. Despite this, we feel that a much more experimentation should be done to address some of the hypothesis in the paper. Also, it would have been helpful, if some of the results and figures were presented much earlier in the paper. What follows are the pros and cons / discussion points / styling suggestions the authors could choose to address: [pros] - Well-written - A good survey of related literature and concepts - Interesting visualizations [cons / points / styling] - The placement of Figure 1 seems arbitrary as you refer to it for the first time on Page 3. This would not be such a big problem; however, it divides the paragraph in two without need to. Could you perhaps place the figure and caption on the bottom of the page? - Is there a reference for “implicit regularization”? 
- On page 4, it would have been helpful to provide examples of “linking the intuitive arguments back to theory” - It would be helpful to provide more details on the filter-normalization scheme. - On page 4, could you explain what you mean by “wide margin classification is replaced with an implicit regulation (regularization - ?) that promotes the closely related notion of ‘flatness’ “. How is “flatness” encouraged as a prior? Through the choice of an appropriate loss function? Would it be perhaps more suited for a longer format, e.g. Journal of Machine Leaning Research?", "role": "assistant" } ]
[ "In this paper the authors aim to develop an intuition for why neural networks (NNs) generalize so well, despite often being overparametrized for the given problem.", "The paper is well written exploring a very interesting issue.", "It provides a good discussion of why generalization is puzzling, related work on the attempts to improve generalization bounds for NNs, and qualitative discussion of possibly answers to this problem.", "Despite this, we feel that a much more experimentation should be done to address some of the hypothesis in the paper.", "Also, it would have been helpful, if some of the results and figures were presented much earlier in the paper.", "What follows are the pros and cons / discussion points / styling suggestions the authors could choose to address: [pros] - Well-written - A good survey of related literature and concepts - Interesting visualizations [cons / points / styling] - The placement of Figure 1 seems arbitrary as you refer to it for the first time on Page 3.", "This would not be such a big problem; however, it divides the paragraph in two without need to.", "Could you perhaps place the figure and caption on the bottom of the page?", "- Is there a reference for “implicit regularization”?", "- On page 4, it would have been helpful to provide examples of “linking the intuitive arguments back to theory” - It would be helpful to provide more details on the filter-normalization scheme.", "- On page 4, could you explain what you mean by “wide margin classification is replaced with an implicit regulation (regularization - ?)", "that promotes the closely related notion of ‘flatness’ “.", "How is “flatness” encouraged as a prior?", "Through the choice of an appropriate loss function?", "Would it be perhaps more suited for a longer format, e.g.", "Journal of Machine Leaning Research?" ]
{ "criticism": 1, "example": 3, "importance_and_relevance": 2, "materials_and_methods": 5, "praise": 3, "presentation_and_reporting": 8, "results_and_discussion": 5, "suggestion_and_solution": 8, "total": 16 }
0.0625
0.1875
0.125
0.3125
0.1875
0.5
0.3125
0.5
16
1
1
1
1
1
1
1
1
2.1875
2.045458
0.142042
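Editor's note: the review above asks for more detail on the filter-normalization scheme used for loss-landscape visualizations. Below is a minimal sketch, assuming a PyTorch model and the commonly used per-filter normalization (each filter of a random direction is rescaled to the norm of the corresponding trained filter); the paper's exact scheme may differ, and the function name is my own.

```python
import torch

def filter_normalized_direction(model):
    """Random direction d with each filter rescaled to the norm of the
    corresponding trained filter; 1-D parameters (biases, BatchNorm) are
    zeroed, as is common in loss-landscape plots. A sketch of the usual
    scheme, not necessarily the paper's exact implementation."""
    direction = []
    for p in model.parameters():
        d = torch.randn_like(p)
        if p.dim() <= 1:
            d.zero_()                      # ignore biases / BN parameters
        else:
            for d_f, p_f in zip(d, p):     # iterate over filters (dim 0)
                d_f.mul_(p_f.norm() / (d_f.norm() + 1e-10))
        direction.append(d)
    return direction
```

A 2-D landscape plot would then evaluate the loss at theta + alpha * d1 + beta * d2 over a grid of (alpha, beta) for two such directions.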
Posterior Sampling: Make Reinforcement Learning Sample Efficient Again
Machine learning thrives on leveraging structure in data, and many breakthroughs (e.g. convolutional networks) have been made by designing algorithms which exploit the underlying structure of a distribution. Reinforcement Learning agents interact with worlds that are similarly full of structure. For example, no sequence of actions an agent takes will ever cause the laws of physics to change; therefore, an agent which learns to generalize such laws through time and space will have an advantage. Sample-efficient reinforcement learning can be accomplished by assuming that the world has structure and designing learning algorithms which exploit this assumption without knowing the actual structure beforehand. Posterior Sampling for Reinforcement Learning (PSRL) (Strens, 2000) is such a method which assumes structure in the world and exploits it for learning. A PSRL learning agent first samples models of the environment which conform to both prior assumptions on the world's structure and past observations, and then interacts with the true environment using a policy guided by the sampled model of the environment. While PSRL delivers theoretical Bayesian regret bounds, there are many open issues which must be addressed before PSRL can be applied to current benchmark continuous reinforcement learning tasks. In this work, we identify these issues and find practical solutions to them, leading to a novel algorithm we call Neural-PSRL. We validate the algorithm's effectiveness by achieving state-of-the-art results in the HalfCheetah-v3 and Hopper-v3 domains.
The paper suggests a practical implementation of Posterior Sampling for Reinforcement Learning (PSRL) using deep networks. The PSRL framework assumes that the model of the environment is sampled from a relatively lower-dimensional distribution (for example because of some inherent structure they possess), and exploits this to reduce the sample complexity of learning an optimal policy, so that the number of samples required is bounded by the complexity of the model distribution, and not by the dimensionality of the state or action space. The paper is well motivated and well written, and the research direction is indeed interesting. I would like to see the implementation section push the limits of the theory further by using a more complex posterior model – rather than a simple linear layer – and by testing on more challenging environments, where explicitly exploiting the structure of the non-linear model is essential to solve the task. Regarding the implementation, in Section 4.2 the model is parametrized as s' = W z + s, where W is sampled from the posterior and z is the output of a feed-forward network. Could you further comment on which data the feed-forward network is trained on? Does the theoretical framework remain valid if the feed-forward network is trained only on states from the environment on which PSRL will later be tested? Wouldn't that imply that the prior model distribution is already biased toward the particular environment?
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPosterior Sampling: Make Reinforcement Learning Sample Efficient Again\nMachine learning thrives on leveraging structure in data, and many breakthroughs (e.g. convolutional networks) have been made by designing algorithms which exploit the underlying structure of a distribution. Reinforcement Learning agents interact with worlds that are similarly full of structure. For example, no sequence of actions an agent takes will ever cause the laws of physics to change, therefore an agent which learns to generalize such laws through time and space will have an advantage. Sample efficient reinforcement learning can be accomplished when assuming that the world has structure and designing learning algorithms which exploit this assumption without knowing the actual structure beforehand. Posterior Sampling for Reinforcement Learning (PSRL) citep{strens2000bayesian} is such a method which assumes structure in the world and exploits it for learning. A PSLR learning agent first samples models of the environment which conform to both prior assumptions on the world*s structure and past observations and then interacts with the true environment using a policy guided by the sampled model of the environment. While PSRL delivers theoretical Bayesian regret bounds, there are many open issues which must be addressed before PSRL can be applied to current benchmark continuous reinforcement reinforcement tasks. In this work, we identify these issues and find practical solutions to them leading to a novel algorithm we call Neural-PSRL. We validate the algorithm*s effectiveness by achieving state of the art results in the HalfCheetah-v3 and Hopper-v3 domains.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper suggests a practical implementation of Posterior Sampling for Reinforcement Learning (PSLR) using deep networks. 
The PSLR framework assumes that the model of the environment is sampled from a relatively lower dimensional distribution (for example because of some inherent structure they posses), and exploits this to reduce the sampling complexity to learn an optimal policy, so that the number of samples required is bounded by the complexity of the model distribution, and not by the dimensionality of the state or action space. The paper is well motivated and well written, and the research direction is indeed interesting. I would like to see the implementation section pushing further the limits of the theory by using a more complex posterior model – rather than a simple linear layer – and by testing on more challenging environments, where explicitly exploiting the structure of the non-linear model is essential to solve the task. Regarding the implementation, in Section 4.2 the model is parametrized as s* = W z + s, where W is sampled from the posterior and z is the output of a feed-forward network. Could you further comment on which data the feed-forward network is trained on? Does the theoretical framework remain valid if the feed-forward network is trained only on states from the environment on which the PSLR will later be tested? Wouldn*t that imply that the prior model distribution is already biased toward the particular environment?", "role": "assistant" } ]
[ "The paper suggests a practical implementation of Posterior Sampling for Reinforcement Learning (PSLR) using deep networks.", "The PSLR framework assumes that the model of the environment is sampled from a relatively lower dimensional distribution (for example because of some inherent structure they posses), and exploits this to reduce the sampling complexity to learn an optimal policy, so that the number of samples required is bounded by the complexity of the model distribution, and not by the dimensionality of the state or action space.", "The paper is well motivated and well written, and the research direction is indeed interesting.", "I would like to see the implementation section pushing further the limits of the theory by using a more complex posterior model – rather than a simple linear layer – and by testing on more challenging environments, where explicitly exploiting the structure of the non-linear model is essential to solve the task.", "Regarding the implementation, in Section 4.2 the model is parametrized as s* = W z + s, where W is sampled from the posterior and z is the output of a feed-forward network.", "Could you further comment on which data the feed-forward network is trained on?", "Does the theoretical framework remain valid if the feed-forward network is trained only on states from the environment on which the PSLR will later be tested?", "Wouldn*t that imply that the prior model distribution is already biased toward the particular environment?" ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 7, "praise": 1, "presentation_and_reporting": 1, "results_and_discussion": 5, "suggestion_and_solution": 2, "total": 8 }
0
0
0.125
0.875
0.125
0.125
0.625
0.25
8
0
0
1
1
1
1
1
1
2.125
1.730439
0.394561
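Editor's note: the review above refers to a model parametrized as s' = W z + s, with W drawn from a posterior over the last linear layer. As an illustration only, here is a minimal NumPy sketch of the standard Bayesian linear-regression posterior such a last layer could use; the function name, the Gaussian noise/prior assumptions, and the independent-output treatment are my own simplifications, not the paper's actual Neural-PSRL implementation.

```python
import numpy as np

def sample_last_layer(Z, S, S_next, noise_var=1.0, prior_var=1.0, rng=None):
    """One posterior sample of W for a hypothetical model s' = W z + s.
    Z: (N, d) features from a trained feed-forward network;
    S, S_next: (N, k) current and next states; targets are the residuals.
    Assumes a Gaussian likelihood and an isotropic Gaussian prior on W."""
    rng = np.random.default_rng() if rng is None else rng
    Y = S_next - S                                      # (N, k) residual targets
    d = Z.shape[1]
    precision = Z.T @ Z / noise_var + np.eye(d) / prior_var
    cov = np.linalg.inv(precision)                      # shared (d, d) covariance
    mean = cov @ Z.T @ Y / noise_var                    # (d, k) posterior mean
    L = np.linalg.cholesky(cov)
    W = (mean + L @ rng.standard_normal(mean.shape)).T  # posterior sample, (k, d)
    return W
```

A PSRL-style agent would then optimize a policy against the sampled dynamics model s' = W z(s, a) + s before collecting more data.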
Posterior Sampling: Make Reinforcement Learning Sample Efficient Again
Machine learning thrives on leveraging structure in data, and many breakthroughs (e.g. convolutional networks) have been made by designing algorithms which exploit the underlying structure of a distribution. Reinforcement Learning agents interact with worlds that are similarly full of structure. For example, no sequence of actions an agent takes will ever cause the laws of physics to change; therefore, an agent which learns to generalize such laws through time and space will have an advantage. Sample-efficient reinforcement learning can be accomplished by assuming that the world has structure and designing learning algorithms which exploit this assumption without knowing the actual structure beforehand. Posterior Sampling for Reinforcement Learning (PSRL) (Strens, 2000) is such a method which assumes structure in the world and exploits it for learning. A PSRL learning agent first samples models of the environment which conform to both prior assumptions on the world's structure and past observations, and then interacts with the true environment using a policy guided by the sampled model of the environment. While PSRL delivers theoretical Bayesian regret bounds, there are many open issues which must be addressed before PSRL can be applied to current benchmark continuous reinforcement learning tasks. In this work, we identify these issues and find practical solutions to them, leading to a novel algorithm we call Neural-PSRL. We validate the algorithm's effectiveness by achieving state-of-the-art results in the HalfCheetah-v3 and Hopper-v3 domains.
Posterior Sampling: Make Reinforcement Learning Sample Efficient Again =============================================================== This paper proposes an implementation of PSRL via Neural-PSRL: a neural network is trained to fit the transition dynamics, with a last-layer linear approximation used to give a posterior distribution. This model can be used in conjunction with a policy gradient algorithm to give an exploratory policy. Evaluated on MuJoCo benchmarks, this leads to state-of-the-art performance. There are several things to like about this paper: - The problem of extending "efficient exploration" to Deep RL is an important one, and PSRL seems like one of the most suitable general approaches to base an algorithm on. - The proposed algorithm seems reasonable, and seems like it also performs well. - The approach to writing and the accessible code release should be praised. However, there are some other places where this paper falls down: - Despite the focus on "incorporating prior knowledge" in the early part of the paper, it seems like the proposed method actually incorporates very little prior knowledge... for example, the "Eluder dimension" of the space represented by these neural networks is presumably huge! This seems like a bit of a bait and switch. - I don't think the experiments offer a huge amount of "insight" into what is really happening in the algorithm. It would be helpful to show "really good" performance even on a toy domain, so that you can really say that the exploration via PSRL is really working... maybe something like bsuite (https://github.com/deepmind/bsuite) and particularly the "deep sea" experiments could help? - It's a problem that the actual main paper doesn't give a clear and concise description of the algorithm used. I think this is quite important and it would be much better to put the older results on PSRL in the appendix instead. Minor points: - I'm not sure the title is a great idea... it might put off some people, but also I don't think it's really warranted from the results... it seems like Neural-PSRL is a potentially interesting avenue, but hardly making anything great (again?)! - You might want to connect / contrast this work with the line of research around "Randomized Value Functions" (http://jmlr.org/papers/v20/18-339.html), which is another approach that tries to extend something like PSRL to Deep RL. Overall, I do think this is an interesting paper, and a promising avenue for research. However, there are probably too many places where the paper could be tightened up: clarity of algorithm description, clarity of support/results, separation of actual claims vs claims of "making great" ;D.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPosterior Sampling: Make Reinforcement Learning Sample Efficient Again\nMachine learning thrives on leveraging structure in data, and many breakthroughs (e.g. convolutional networks) have been made by designing algorithms which exploit the underlying structure of a distribution. Reinforcement Learning agents interact with worlds that are similarly full of structure. For example, no sequence of actions an agent takes will ever cause the laws of physics to change, therefore an agent which learns to generalize such laws through time and space will have an advantage. Sample efficient reinforcement learning can be accomplished when assuming that the world has structure and designing learning algorithms which exploit this assumption without knowing the actual structure beforehand. Posterior Sampling for Reinforcement Learning (PSRL) citep{strens2000bayesian} is such a method which assumes structure in the world and exploits it for learning. A PSLR learning agent first samples models of the environment which conform to both prior assumptions on the world*s structure and past observations and then interacts with the true environment using a policy guided by the sampled model of the environment. While PSRL delivers theoretical Bayesian regret bounds, there are many open issues which must be addressed before PSRL can be applied to current benchmark continuous reinforcement reinforcement tasks. In this work, we identify these issues and find practical solutions to them leading to a novel algorithm we call Neural-PSRL. We validate the algorithm*s effectiveness by achieving state of the art results in the HalfCheetah-v3 and Hopper-v3 domains.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>\n\n\n\nPosterior Sampling: Make Reinforcement Learning Sample Efficient Again =============================================================== This paper proposes an implementation of PSRL via Neural-PSRL: a neural network is trained fit the transition dynamics, with a last-layer linear approximation used to give a posterior distribution. This model can be used in conjunction with a policy gradient algorithm to give an exploratory policy. Evaluated on Mujoco benchmarks this leads to state of the art performance. There are several things to like about this paper: - The problem of extending *efficient exploration* to Deep RL is an important one, and PSRL seems like one of the most suitable general approaches to base an algorithm on. - The proposed algorithm seems reasonable, and seems like it also performs well. - The approach to writing and accessible code release should be praised. However, there are some other places where this paper falls down: - Despite the focus on *incorporating prior knowledge* in the early part of the paper, it seems like the proposed method actually incorporates very little prior knowledge... for example, the *Eluder dimension* of the space represented by these neural networks is presumably huge! This seems like a bit of a bait and switch. - I don*t think the experiments offer a huge amount of *insight* into what is really happening in the algorithm. It would be helpful to show *really good* performance even on a toy domain, so that you can really say that the exploration via PSRL is really working... maybe something like bsuite = https://github.com/deepmind/bsuite and particularly the *deep sea* experiments could help? - It*s a problem that the actual main paper doesn*t give a clear and concise description of the algorithm used. I think this is quite important and it would be much better to put the older results on PSRL to the appendix instead. Minor points: - I*m not sure the title is a great idea... it might put off some people, but also I don*t think it*s really warranted from the results... it seems like Neural-PSRL is a potentially interesting avenue, but hardly making anything great (again?)! - You might want to connect / contrast this work vs the line of research around *Randomized Value Functions* http://jmlr.org/papers/v20/18-339.html... which is another approach that tries to extend something like PSRL to Deep RL. Overall, I do think this is an interesting paper, and a promising avenue for research. However there are probably too many places where the paper could be tightened up: clarity of algorithm description, clarity of support/results, separation of actual claims vs claims of *making great* ;D.", "role": "assistant" } ]
[ "Posterior Sampling: Make Reinforcement Learning Sample Efficient Again =============================================================== This paper proposes an implementation of PSRL via Neural-PSRL: a neural network is trained fit the transition dynamics, with a last-layer linear approximation used to give a posterior distribution.", "This model can be used in conjunction with a policy gradient algorithm to give an exploratory policy.", "Evaluated on Mujoco benchmarks this leads to state of the art performance.", "There are several things to like about this paper: - The problem of extending *efficient exploration* to Deep RL is an important one, and PSRL seems like one of the most suitable general approaches to base an algorithm on.", "- The proposed algorithm seems reasonable, and seems like it also performs well.", "- The approach to writing and accessible code release should be praised.", "However, there are some other places where this paper falls down: - Despite the focus on *incorporating prior knowledge* in the early part of the paper, it seems like the proposed method actually incorporates very little prior knowledge... for example, the *Eluder dimension* of the space represented by these neural networks is presumably huge!", "This seems like a bit of a bait and switch.", "- I don*t think the experiments offer a huge amount of *insight* into what is really happening in the algorithm.", "It would be helpful to show *really good* performance even on a toy domain, so that you can really say that the exploration via PSRL is really working... maybe something like bsuite = https://github.com/deepmind/bsuite and particularly the *deep sea* experiments could help?", "- It*s a problem that the actual main paper doesn*t give a clear and concise description of the algorithm used.", "I think this is quite important and it would be much better to put the older results on PSRL to the appendix instead.", "Minor points: - I*m not sure the title is a great idea... it might put off some people, but also I don*t think it*s really warranted from the results... it seems like Neural-PSRL is a potentially interesting avenue, but hardly making anything great (again?)!", "- You might want to connect / contrast this work vs the line of research around *Randomized Value Functions* http://jmlr.org/papers/v20/18-339.html... which is another approach that tries to extend something like PSRL to Deep RL.", "Overall, I do think this is an interesting paper, and a promising avenue for research.", "However there are probably too many places where the paper could be tightened up: clarity of algorithm description, clarity of support/results, separation of actual claims vs claims of *making great* ;D." ]
{ "criticism": 5, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 11, "praise": 3, "presentation_and_reporting": 3, "results_and_discussion": 4, "suggestion_and_solution": 6, "total": 16 }
0.3125
0
0.1875
0.6875
0.1875
0.1875
0.25
0.375
16
1
0
1
1
1
1
1
1
2.1875
2.045458
0.142042
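Editor's note: for context on the review's request for "a clear and concise description of the algorithm used", the generic episodic PSRL loop (Strens, 2000) is short. The sketch below is that textbook loop, not the paper's specific Neural-PSRL procedure; env, sample_posterior, plan, and update are placeholder callables, and the simplified env interface is an assumption.

```python
def psrl(env, prior, num_episodes, sample_posterior, plan, update):
    """Generic episodic PSRL loop (Strens, 2000). sample_posterior draws one
    model from the current posterior, plan returns a policy that is (near-)
    optimal for that model, and update performs the Bayesian posterior update
    from the observed transitions. All callables are placeholders."""
    data = []
    posterior = prior
    for _ in range(num_episodes):
        model = sample_posterior(posterior)           # model ~ posterior given data so far
        policy = plan(model)                          # act greedily w.r.t. the sampled model
        state, done = env.reset(), False
        while not done:
            action = policy(state)
            state_next, reward, done = env.step(action)   # simplified env interface
            data.append((state, action, reward, state_next))
            state = state_next
        posterior = update(prior, data)
    return posterior
```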
Training-Free Uncertainty Estimation for Neural Networks
Uncertainty estimation is an essential step in the evaluation of the robustness for deep learning models in computer vision, especially when applied in risk-sensitive areas. However, most state-of-the-art deep learning models either fail to obtain uncertainty estimation or need significant modification (e.g., formulating a proper Bayesian treatment) to obtain it. None of the previous methods are able to take an arbitrary model off the shelf and generate uncertainty estimation without retraining or redesigning it. To address this gap, we perform the first systematic exploration into training-free uncertainty estimation. We propose three simple and scalable methods to analyze the variance of output from a trained network under tolerable perturbations: infer-transformation, infer-noise, and infer-dropout. They operate solely during inference, without the need to re-train, re-design, or fine-tune the model, as typically required by other state-of-the-art uncertainty estimation methods. Surprisingly, even without involving such perturbations in training, our methods produce comparable or even better uncertainty estimation when compared to other training-required state-of-the-art methods. Last but not least, we demonstrate that the uncertainty from our proposed methods can be used to improve the neural network training.
This paper addresses the problem of deriving better predictive uncertainty estimates from pre-trained models. As the paper notes in the introduction, this is an important problem for machine learning models embedded in real-world decision-making systems, where the consequences of inaccurate predictions may be high, and better insight into model uncertainty can help avoid bad outcomes due to misplaced trust in model predictions. The paper focuses on training-free uncertainty estimation, in which uncertainty estimates are driven by input perturbations and modifications to ‘grey-box’ models (such as application of dropout to a trained model). These methods avoid the computational burden of applying uncertainty estimation during the training process. The paper evaluates the methods on two regression tasks: image super-resolution and monocular depth estimation. Evaluation metrics are based on the correlation between predictive variance and loss. The paper concludes that the training-free methods examined give comparable or better performance for uncertainty estimation, compared with state-of-the-art methods applied during training. The paper addresses an important topic, and is generally well-executed and well-written. However, it focuses on a narrow set regression tasks (with little indication that results generalize further), and uses evaluation metrics that are nonstandard and not well-substantiated. For these reasons, I recommend that the paper be rejected. A systematic study of training-free uncertainty estimation, which is the goal of the paper, would be a significant contribution. The paper chooses to focus on regression tasks, which is a valid narrowing of scope (although the implication that uncertainty estimation for classification is a solved problem, via entropy methods, I believe is incorrect). Image super-resolution and monocular depth estimation, the two regression tasks that the paper examines, are important but do not represent a broad enough set of tasks to justify claims about the general performance of the methods (though they enable the observations in equations (1) and (2)). The performance of the methods on these two tasks alone I don’t believe is a significant enough finding to warrant acceptance to ICLR. My second major concern with the paper is in the choice of evaluation metrics, which are based on the correlation between variance of predictions (over a sample of perturbations) and the value of the loss. While I understand how these metrics provide intuition on the quality of uncertainty estimates (i.e. when a model is “more wrong”, ideally it should be more uncertain), the paper does not give a good justification for the validity of the metrics beyond intuition (nor does the cited paper, Zhang et al. (2019)). The correlation metrics obscure a lot of information on the relationship between the predictive distribution and the true values, and without further justification, it is difficult to imagine trusting the resulting uncertainty estimates in a consequential downstream decision-making task. Proper scoring rules (Gneiting and Rafferty (2007), Lakshminarayanan et al. (2017)) could provide a more principled and useful basis for assessing the quality of uncertainty estimates. Smaller points: - The paper makes the implicit assumption that training/test sets are IID. 
While I understand that prediction on out-of-distribution (OOD) samples is beyond the paper’s scope, high uncertainty on OOD samples is a desirable property for uncertainty estimation methods, and some discussion of this or indication of the methods’ performance on OOD samples would strengthen the paper. - The paper implies that uncertainty estimation via input perturbation is a novel contribution; a similar method was used by Liang et al. (2018; https://arxiv.org/pdf/1706.02690.pdf) for the different but related task of out-of-distribution detection. - Is there a way to derive error bars for the results in Tables 1 and 2? Some of the correlation coefficients, reported to 4 decimal places, are very close and it’s unclear whether the results are significantly different.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTraining-Free Uncertainty Estimation for Neural Networks\nUncertainty estimation is an essential step in the evaluation of the robustness for deep learning models in computer vision, especially when applied in risk-sensitive areas. However, most state-of-the-art deep learning models either fail to obtain uncertainty estimation or need significant modification (e.g., formulating a proper Bayesian treatment) to obtain it. None of the previous methods are able to take an arbitrary model off the shelf and generate uncertainty estimation without retraining or redesigning it. To address this gap, we perform the first systematic exploration into training-free uncertainty estimation. We propose three simple and scalable methods to analyze the variance of output from a trained network under tolerable perturbations: infer-transformation, infer-noise, and infer-dropout. They operate solely during inference, without the need to re-train, re-design, or fine-tune the model, as typically required by other state-of-the-art uncertainty estimation methods. Surprisingly, even without involving such perturbations in training, our methods produce comparable or even better uncertainty estimation when compared to other training-required state-of-the-art methods. Last but not least, we demonstrate that the uncertainty from our proposed methods can be used to improve the neural network training.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper addresses the problem of deriving better predictive uncertainty estimates from pre-trained models. As the paper notes in the introduction, this is an important problem for machine learning models embedded in real-world decision-making systems, where the consequences of inaccurate predictions may be high, and better insight into model uncertainty can help avoid bad outcomes due to misplaced trust in model predictions. 
The paper focuses on training-free uncertainty estimation, in which uncertainty estimates are driven by input perturbations and modifications to ‘grey-box’ models (such as application of dropout to a trained model). These methods avoid the computational burden of applying uncertainty estimation during the training process. The paper evaluates the methods on two regression tasks: image super-resolution and monocular depth estimation. Evaluation metrics are based on the correlation between predictive variance and loss. The paper concludes that the training-free methods examined give comparable or better performance for uncertainty estimation, compared with state-of-the-art methods applied during training. The paper addresses an important topic, and is generally well-executed and well-written. However, it focuses on a narrow set regression tasks (with little indication that results generalize further), and uses evaluation metrics that are nonstandard and not well-substantiated. For these reasons, I recommend that the paper be rejected. A systematic study of training-free uncertainty estimation, which is the goal of the paper, would be a significant contribution. The paper chooses to focus on regression tasks, which is a valid narrowing of scope (although the implication that uncertainty estimation for classification is a solved problem, via entropy methods, I believe is incorrect). Image super-resolution and monocular depth estimation, the two regression tasks that the paper examines, are important but do not represent a broad enough set of tasks to justify claims about the general performance of the methods (though they enable the observations in equations (1) and (2)). The performance of the methods on these two tasks alone I don’t believe is a significant enough finding to warrant acceptance to ICLR. My second major concern with the paper is in the choice of evaluation metrics, which are based on the correlation between variance of predictions (over a sample of perturbations) and the value of the loss. While I understand how these metrics provide intuition on the quality of uncertainty estimates (i.e. when a model is “more wrong”, ideally it should be more uncertain), the paper does not give a good justification for the validity of the metrics beyond intuition (nor does the cited paper, Zhang et al. (2019)). The correlation metrics obscure a lot of information on the relationship between the predictive distribution and the true values, and without further justification, it is difficult to imagine trusting the resulting uncertainty estimates in a consequential downstream decision-making task. Proper scoring rules (Gneiting and Rafferty (2007), Lakshminarayanan et al. (2017)) could provide a more principled and useful basis for assessing the quality of uncertainty estimates. Smaller points: - The paper makes the implicit assumption that training/test sets are IID. While I understand that prediction on out-of-distribution (OOD) samples is beyond the paper’s scope, high uncertainty on OOD samples is a desirable property for uncertainty estimation methods, and some discussion of this or indication of the methods’ performance on OOD samples would strengthen the paper. - The paper implies that uncertainty estimation via input perturbation is a novel contribution; a similar method was used by Liang et al. (2018; https://arxiv.org/pdf/1706.02690.pdf) for the different but related task of out-of-distribution detection. - Is there a way to derive error bars for the results in Tables 1 and 2? 
Some of the correlation coefficients, reported to 4 decimal places, are very close and it’s unclear whether the results are significantly different.", "role": "assistant" } ]
[ "This paper addresses the problem of deriving better predictive uncertainty estimates from pre-trained models.", "As the paper notes in the introduction, this is an important problem for machine learning models embedded in real-world decision-making systems, where the consequences of inaccurate predictions may be high, and better insight into model uncertainty can help avoid bad outcomes due to misplaced trust in model predictions.", "The paper focuses on training-free uncertainty estimation, in which uncertainty estimates are driven by input perturbations and modifications to ‘grey-box’ models (such as application of dropout to a trained model).", "These methods avoid the computational burden of applying uncertainty estimation during the training process.", "The paper evaluates the methods on two regression tasks: image super-resolution and monocular depth estimation.", "Evaluation metrics are based on the correlation between predictive variance and loss.", "The paper concludes that the training-free methods examined give comparable or better performance for uncertainty estimation, compared with state-of-the-art methods applied during training.", "The paper addresses an important topic, and is generally well-executed and well-written.", "However, it focuses on a narrow set regression tasks (with little indication that results generalize further), and uses evaluation metrics that are nonstandard and not well-substantiated.", "For these reasons, I recommend that the paper be rejected.", "A systematic study of training-free uncertainty estimation, which is the goal of the paper, would be a significant contribution.", "The paper chooses to focus on regression tasks, which is a valid narrowing of scope (although the implication that uncertainty estimation for classification is a solved problem, via entropy methods, I believe is incorrect).", "Image super-resolution and monocular depth estimation, the two regression tasks that the paper examines, are important but do not represent a broad enough set of tasks to justify claims about the general performance of the methods (though they enable the observations in equations (1) and (2)).", "The performance of the methods on these two tasks alone I don’t believe is a significant enough finding to warrant acceptance to ICLR.", "My second major concern with the paper is in the choice of evaluation metrics, which are based on the correlation between variance of predictions (over a sample of perturbations) and the value of the loss.", "While I understand how these metrics provide intuition on the quality of uncertainty estimates (i.e.", "when a model is “more wrong”, ideally it should be more uncertain), the paper does not give a good justification for the validity of the metrics beyond intuition (nor does the cited paper, Zhang et al.", "(2019)).", "The correlation metrics obscure a lot of information on the relationship between the predictive distribution and the true values, and without further justification, it is difficult to imagine trusting the resulting uncertainty estimates in a consequential downstream decision-making task.", "Proper scoring rules (Gneiting and Rafferty (2007), Lakshminarayanan et al.", "(2017)) could provide a more principled and useful basis for assessing the quality of uncertainty estimates.", "Smaller points: - The paper makes the implicit assumption that training/test sets are IID.", "While I understand that prediction on out-of-distribution (OOD) samples is beyond the paper’s scope, high uncertainty on OOD samples is a 
desirable property for uncertainty estimation methods, and some discussion of this or indication of the methods’ performance on OOD samples would strengthen the paper.", "- The paper implies that uncertainty estimation via input perturbation is a novel contribution; a similar method was used by Liang et al.", "(2018; https://arxiv.org/pdf/1706.02690.pdf) for the different but related task of out-of-distribution detection.", "- Is there a way to derive error bars for the results in Tables 1 and 2?", "Some of the correlation coefficients, reported to 4 decimal places, are very close and it’s unclear whether the results are significantly different." ]
{ "criticism": 7, "example": 0, "importance_and_relevance": 6, "materials_and_methods": 23, "praise": 3, "presentation_and_reporting": 2, "results_and_discussion": 19, "suggestion_and_solution": 4, "total": 27 }
0.259259
0
0.222222
0.851852
0.111111
0.074074
0.703704
0.148148
27
1
0
1
1
1
1
1
1
2.37037
-0.722986
3.093356
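Editor's note: the abstract above describes estimating uncertainty from the variance of a trained network's outputs under inference-time perturbations (infer-noise / infer-dropout), and the review questions the variance-error correlation metric. Here is a minimal sketch of that general recipe, assuming a PyTorch regression model; model, the noise level, and the helper names are assumptions, not the paper's exact procedure.

```python
import torch
from scipy.stats import pearsonr

@torch.no_grad()
def infer_perturbation_uncertainty(model, x, n_passes=8, noise_std=0.01):
    """Run a trained model several times with small input noise at inference
    time and use the variance of the outputs as an uncertainty estimate.
    (For infer-dropout, one would instead keep dropout layers active.)"""
    outputs = []
    for _ in range(n_passes):
        outputs.append(model(x + noise_std * torch.randn_like(x)))
    outputs = torch.stack(outputs)       # (n_passes, batch, ...)
    return outputs.mean(dim=0), outputs.var(dim=0)

def variance_error_correlation(var, pred, target):
    """Pearson correlation between per-sample uncertainty and squared error,
    the kind of metric the review above discusses."""
    v = var.flatten(1).mean(dim=1)
    err = ((pred - target) ** 2).flatten(1).mean(dim=1)
    return pearsonr(v.cpu().numpy(), err.cpu().numpy())[0]
```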
Going Deeper with Lean Point Networks
We introduce three generic point cloud processing blocks that improve both accuracy and memory consumption of multiple state-of-the-art networks, thus allowing to design deeper and more accurate networks. The novel processing blocks that facilitate efficient information flow are a convolution-type operation block for point sets that blends neighborhood information in a memory-efficient manner; a multi-resolution point cloud processing block; and a crosslink block that efficiently shares information across low- and high-resolution processing branches. Combining these blocks, we design significantly wider and deeper architectures. We extensively evaluate the proposed architectures on multiple point segmentation benchmarks (ShapeNetPart, ScanNet, PartNet) and report systematic improvements in terms of both accuracy and memory consumption by using our generic modules in conjunction with multiple recent architectures (PointNet++, DGCNN, SpiderCNN, PointCNN). We report a 9.7% increase in IoU on the PartNet dataset, which is the most complex, while decreasing memory footprint by 57%.
The paper presents an approach to improve (i) the memory footprint of point processing networks, which has been a main bottleneck for their use, and (ii) their accuracy. Specifically, three point processing blocks are presented that enable wider and deeper architectures with improved information sharing. The approach is thoroughly evaluated on point segmentation benchmarks. The paper is very well written and a pleasure to read, even for a reader unfamiliar with point networks. The proposed blocks are clearly and intuitively presented and their impact convincingly demonstrated. For the sake of completeness, it would be important to include memory usage and inference time in Table 2, especially as it deals with the comparison against DeepGCN, so one can better assess the advantages/disadvantages of both approaches. Timing results would also be important to include in Table 3. Overall, it is a very interesting problem and the proposed approach could have broad applicability.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nGoing Deeper with Lean Point Networks\nWe introduce three generic point cloud processing blocks that improve both accuracy and memory consumption of multiple state-of-the-art networks, thus allowing to design deeper and more accurate networks. The novel processing blocks that facilitate efficient information flow are a convolution-type operation block for point sets that blends neighborhood information in a memory-efficient manner; a multi-resolution point cloud processing block; and a crosslink block that efficiently shares information across low- and high-resolution processing branches. Combining these blocks, we design significantly wider and deeper architectures. We extensively evaluate the proposed architectures on multiple point segmentation benchmarks (ShapeNetPart, ScanNet, PartNet) and report systematic improvements in terms of both accuracy and memory consumption by using our generic modules in conjunction with multiple recent architectures (PointNet++, DGCNN, SpiderCNN, PointCNN). We report a 9.7% increase in IoU on the PartNet dataset, which is the most complex, while decreasing memory footprint by 57%.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper presents an approach to improve (i) the memory footprint of point processing networks, which has been a main bottleneck for they use, and (ii) their accuracy. Specifically three point processing blocks are presented that enables wider and deeper architectures with improved information sharing. The approach is thoroughly evaluated on point segmentation benchmarks. The paper is very well written and a pleasure to read, even for a reader unfamiliar with point networks. The proposed blocks are clearly and intuitively presented and their impact convincingly demonstrated. For the sake completeness, it would be important to include memory usage and inference time in Table 2, especially as it deals with comparison against Deep CGN, so one can better assess the advantages/disadvantages of both approaches. 
Timing results would also be important to include in Table 3. Overall, it is a very interesting problem and the proposed approach could have broad applicability.", "role": "assistant" } ]
[ "The paper presents an approach to improve (i) the memory footprint of point processing networks, which has been a main bottleneck for they use, and (ii) their accuracy.", "Specifically three point processing blocks are presented that enables wider and deeper architectures with improved information sharing.", "The approach is thoroughly evaluated on point segmentation benchmarks.", "The paper is very well written and a pleasure to read, even for a reader unfamiliar with point networks.", "The proposed blocks are clearly and intuitively presented and their impact convincingly demonstrated.", "For the sake completeness, it would be important to include memory usage and inference time in Table 2, especially as it deals with comparison against Deep CGN, so one can better assess the advantages/disadvantages of both approaches.", "Timing results would also be important to include in Table 3.", "Overall, it is a very interesting problem and the proposed approach could have broad applicability." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 5, "praise": 5, "presentation_and_reporting": 4, "results_and_discussion": 2, "suggestion_and_solution": 2, "total": 8 }
0
0
0.25
0.625
0.625
0.5
0.25
0.25
8
0
0
1
1
1
1
1
1
2.5
2.105439
0.394561
Variational lower bounds on mutual information based on nonextensive statistical mechanics
This paper aims to address the limitations of mutual information estimators based on variational optimization. By redefining the cost using generalized functions from nonextensive statistical mechanics we raise the upper bound of previous estimators and enable the control of the bias variance trade off. Variational based estimators outperform previous methods especially in high dependence high dimensional scenarios found in machine learning setups. Despite their performance, these estimators either exhibit a high variance or are upper bounded by log(batch size). Our approach inspired by nonextensive statistical mechanics uses different generalizations for the logarithm and the exponential in the partition function. This enables the estimator to capture changes in mutual information over a wider range of dimensions and correlations of the input variables whereas previous estimators saturate them.
This paper tries to improve the limitations of previous variational bounds for mutual information estimation, where it uses generalized functions from nonextensive statistical mechanics to get generalized better bounds for the logarithm and the exponential in the partition function. Experiments on simple synthetic data are performed to show the effectiveness, but these experiments are too simple, more thorough experiments on real data sets should be conducted. There are some confusions in the writing. In Equation (4), the last term is a Monte Carlo sample average by K samples to approximate the partition function Z, so approximate symbol should be used for last term instead of = symbol. Similarly in Equation (5), the last inequality is not really an inequality since this is a Monte Carlo sample average by K samples to approximate the terms in the previous inequality. This is also true for equation or inequality of (6), (18)-(20). In fact this is a problem that appears in Poor and Oord*s papers. On page 2, second line K is the batch size, then the first line after equation (4), K is the number of samples. Please make it consistent. On page 3, the first and second paragraph, it is stated *I_{NWJ} estimator performs well in high dimensional high dependence scenarios but exhibits high variance due to the exponential function in the partition function which amplifies the critic errors. One way to reduce the variance is to make the critic depend on multiple samples and use a Monte Carlo estimator for the partition function.* Please explain why I_{NWJ} estimator exhibits high variance, is it observed by experiments, or it*s been proved mathematically? Similarly why making the critic depend on multiple samples and use a Monte Carlo estimator for the partition function can reduce the variance. I see these kind of statements often appear on papers, I know Poore*s paper provide experimental evidence of the validness, but I don*t see any mathematical proof. The way to cite the references is not right, for example, McAllester & Statos (2018), the correct one is (McAllester & Statos 2018).
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nVariational lower bounds on mutual information based on nonextensive statistical mechanics\nThis paper aims to address the limitations of mutual information estimators based on variational optimization. By redefining the cost using generalized functions from nonextensive statistical mechanics we raise the upper bound of previous estimators and enable the control of the bias variance trade off. Variational based estimators outperform previous methods especially in high dependence high dimensional scenarios found in machine learning setups. Despite their performance, these estimators either exhibit a high variance or are upper bounded by log(batch size). Our approach inspired by nonextensive statistical mechanics uses different generalizations for the logarithm and the exponential in the partition function. This enables the estimator to capture changes in mutual information over a wider range of dimensions and correlations of the input variables whereas previous estimators saturate them.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper tries to improve the limitations of previous variational bounds for mutual information estimation, where it uses generalized functions from nonextensive statistical mechanics to get generalized better bounds for the logarithm and the exponential in the partition function. Experiments on simple synthetic data are performed to show the effectiveness, but these experiments are too simple, more thorough experiments on real data sets should be conducted. There are some confusions in the writing. In Equation (4), the last term is a Monte Carlo sample average by K samples to approximate the partition function Z, so approximate symbol should be used for last term instead of = symbol. Similarly in Equation (5), the last inequality is not really an inequality since this is a Monte Carlo sample average by K samples to approximate the terms in the previous inequality. This is also true for equation or inequality of (6), (18)-(20). 
In fact this is a problem that appears in Poor and Oord*s papers. On page 2, second line K is the batch size, then the first line after equation (4), K is the number of samples. Please make it consistent. On page 3, the first and second paragraph, it is stated *I_{NWJ} estimator performs well in high dimensional high dependence scenarios but exhibits high variance due to the exponential function in the partition function which amplifies the critic errors. One way to reduce the variance is to make the critic depend on multiple samples and use a Monte Carlo estimator for the partition function.* Please explain why I_{NWJ} estimator exhibits high variance, is it observed by experiments, or it*s been proved mathematically? Similarly why making the critic depend on multiple samples and use a Monte Carlo estimator for the partition function can reduce the variance. I see these kind of statements often appear on papers, I know Poore*s paper provide experimental evidence of the validness, but I don*t see any mathematical proof. The way to cite the references is not right, for example, McAllester & Statos (2018), the correct one is (McAllester & Statos 2018).", "role": "assistant" } ]
[ "This paper tries to improve the limitations of previous variational bounds for mutual information estimation, where it uses generalized functions from nonextensive statistical mechanics to get generalized better bounds for the logarithm and the exponential in the partition function.", "Experiments on simple synthetic data are performed to show the effectiveness, but these experiments are too simple, more thorough experiments on real data sets should be conducted.", "There are some confusions in the writing.", "In Equation (4), the last term is a Monte Carlo sample average by K samples to approximate the partition function Z, so approximate symbol should be used for last term instead of = symbol.", "Similarly in Equation (5), the last inequality is not really an inequality since this is a Monte Carlo sample average by K samples to approximate the terms in the previous inequality.", "This is also true for equation or inequality of (6), (18)-(20).", "In fact this is a problem that appears in Poor and Oord*s papers.", "On page 2, second line K is the batch size, then the first line after equation (4), K is the number of samples.", "Please make it consistent.", "On page 3, the first and second paragraph, it is stated *I_{NWJ} estimator performs well in high dimensional high dependence scenarios but exhibits high variance due to the exponential function in the partition function which amplifies the critic errors.", "One way to reduce the variance is to make the critic depend on multiple samples and use a Monte Carlo estimator for the partition function.", "* Please explain why I_{NWJ} estimator exhibits high variance, is it observed by experiments, or it*s been proved mathematically?", "Similarly why making the critic depend on multiple samples and use a Monte Carlo estimator for the partition function can reduce the variance.", "I see these kind of statements often appear on papers, I know Poore*s paper provide experimental evidence of the validness, but I don*t see any mathematical proof.", "The way to cite the references is not right, for example, McAllester & Statos (2018), the correct one is (McAllester & Statos 2018)." ]
{ "criticism": 5, "example": 2, "importance_and_relevance": 0, "materials_and_methods": 9, "praise": 0, "presentation_and_reporting": 5, "results_and_discussion": 6, "suggestion_and_solution": 6, "total": 15 }
0.333333
0.133333
0
0.6
0
0.333333
0.4
0.4
15
1
1
0
1
0
1
1
1
2.2
2.13687
0.06313
Relevant-features based Auxiliary Cells for Robust and Energy Efficient Deep Learning
Deep neural networks are complex non-linear models used as predictive analytics tool and have demonstrated state-of-the-art performance on many classification tasks. However, they have no inherent capability to recognize when their predictions might go wrong. There have been several efforts in the recent past to detect natural errors i.e. misclassified inputs but these mechanisms pose additional energy requirements. To address this issue, we present a novel post-hoc framework to detect natural errors in an energy efficient way. We achieve this by appending relevant features based linear classifiers per class referred as Relevant features based Auxiliary Cells (RACs). The proposed technique makes use of the consensus between RACs appended at few selected hidden layers to distinguish the correctly classified inputs from misclassified inputs. The combined confidence of RACs is utilized to determine if classification should terminate at an early stage. We demonstrate the effectiveness of our technique on various image classification datasets such as CIFAR10, CIFAR100 and Tiny-ImageNet. Our results show that for CIFAR100 dataset trained on VGG16 network, RACs can detect 46% of the misclassified examples along with 12% reduction in energy compared to the baseline network while 69% of the examples are correctly classified.
The authors introduce a mechanism for exiting the inference process early, without doing any more processing, if the network determines that there should be a classification early. The proposed mechanism is: learn separate linear classifiers on the outputs of layers in the convnet and use a decision rule based on agreement of these classifiers (and their confidences) in sequence, sometimes combined with the final output (described in Algorithm 3). The paper is poorly written (there are many simple grammar mistakes, the methods are not well motivated, and the methods are described in a confusing way), and more seriously the evaluation of the method is not complete. Details: The evaluation is not complete. The threat model that the defense is evaluated under is not realistic: it assumes that the attacker does not have knowledge of the model: *we assume that the attacker has no knowledge about the detection mechanism but has complete knowledge about the baseline DNN (white-box attack with respect to baseline DNN)* (page 9). There are a number of mechanisms that are known that work well in this assumed threat model already, and other works have indicated why this is not a realistic threat model, see: [0]. As such the evaluation of adversarial robustness does not stand since there are no best effort, dynamic attacks on the defense presented. [0] https://arxiv.org/abs/1902.06705
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRelevant-features based Auxiliary Cells for Robust and Energy Efficient Deep Learning\nDeep neural networks are complex non-linear models used as predictive analytics tool and have demonstrated state-of-the-art performance on many classification tasks. However, they have no inherent capability to recognize when their predictions might go wrong. There have been several efforts in the recent past to detect natural errors i.e. misclassified inputs but these mechanisms pose additional energy requirements. To address this issue, we present a novel post-hoc framework to detect natural errors in an energy efficient way. We achieve this by appending relevant features based linear classifiers per class referred as Relevant features based Auxiliary Cells (RACs). The proposed technique makes use of the consensus between RACs appended at few selected hidden layers to distinguish the correctly classified inputs from misclassified inputs. The combined confidence of RACs is utilized to determine if classification should terminate at an early stage. We demonstrate the effectiveness of our technique on various image classification datasets such as CIFAR10, CIFAR100 and Tiny-ImageNet. Our results show that for CIFAR100 dataset trained on VGG16 network, RACs can detect 46% of the misclassified examples along with 12% reduction in energy compared to the baseline network while 69% of the examples are correctly classified.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors introduce a mechanism for exiting the inference process early, without doing any more processing, if the network determines that there should be a classification early. The proposed mechanism is: learn separate linear classifiers on the outputs of layers in the convnet and use a decision rule based on agreement of these classifiers (and their confidences) in sequence, sometimes combined with the final output (described in Algorithm 3). 
The paper is poorly written (there are many simple grammar mistakes, the methods are not well motivated, and the methods are described in a confusing way), and more seriously the evaluation of the method is not complete. Details: The evaluation is not complete. The threat model that the defense is evaluated under is not realistic: it assumes that the attacker does not have knowledge of the model: *we assume that the attacker has no knowledge about the detection mechanism but has complete knowledge about the baseline DNN (white-box attack with respect to baseline DNN)* (page 9). There are a number of mechanisms that are known that work well in this assumed threat model already, and other works have indicated why this is not a realistic threat model, see: [0]. As such the evaluation of adversarial robustness does not stand since there are no best effort, dynamic attacks on the defense presented. [0] https://arxiv.org/abs/1902.06705", "role": "assistant" } ]
[ "The authors introduce a mechanism for exiting the inference process early, without doing any more processing, if the network determines that there should be a classification early.", "The proposed mechanism is: learn separate linear classifiers on the outputs of layers in the convnet and use a decision rule based on agreement of these classifiers (and their confidences) in sequence, sometimes combined with the final output (described in Algorithm 3).", "The paper is poorly written (there are many simple grammar mistakes, the methods are not well motivated, and the methods are described in a confusing way), and more seriously the evaluation of the method is not complete.", "Details: The evaluation is not complete.", "The threat model that the defense is evaluated under is not realistic: it assumes that the attacker does not have knowledge of the model: *we assume that the attacker has no knowledge about the detection mechanism but has complete knowledge about the baseline DNN (white-box attack with respect to baseline DNN)* (page 9).", "There are a number of mechanisms that are known that work well in this assumed threat model already, and other works have indicated why this is not a realistic threat model, see: [0].", "As such the evaluation of adversarial robustness does not stand since there are no best effort, dynamic attacks on the defense presented.", "[0] https://arxiv.org/abs/1902.06705" ]
{ "criticism": 4, "example": 1, "importance_and_relevance": 1, "materials_and_methods": 7, "praise": 0, "presentation_and_reporting": 1, "results_and_discussion": 2, "suggestion_and_solution": 1, "total": 8 }
0.5
0.125
0.125
0.875
0
0.125
0.25
0.125
8
1
1
1
1
0
1
1
1
2.125
1.730439
0.394561
Context-Gated Convolution
As the basic building block of Convolutional Neural Networks (CNNs), the convolutional layer is designed to extract local patterns and lacks the ability to model global context in its nature. Many efforts have been recently made to complement CNNs with the global modeling ability, especially by a family of works on global feature interaction. In these works, the global context information is incorporated into local features before they are fed into convolutional layers. However, research on neuroscience reveals that, besides influences changing the inputs to our neurons, the neurons' ability of modifying their functions dynamically according to context is essential for perceptual tasks, which has been overlooked in most of CNNs. Motivated by this, we propose one novel Context-Gated Convolution (CGC) to explicitly modify the weights of convolutional layers adaptively under the guidance of global context. As such, being aware of the global context, the modulated convolution kernel of our proposed CGC can better extract representative local patterns and compose discriminative features. Moreover, our proposed CGC is lightweight, amenable to modern CNN architectures, and consistently improves the performance of CNNs according to extensive experiments on image classification, action recognition, and machine translation.
This paper proposes a novel and interesting idea of modulating convolutional kernels with global context. They use depthwise factorized channel-wise interaction to reduce complexity and parameters. While an interesting idea, I found it hard to see how the global context would usefully modulate the kernels. While Figure 1 illustrates the idea, it is not clear how the modulated kernel is better suited for the task relative to the original kernel. It would be nice if you could show a simple/toy example where your modulation clearly modifies the kernel in a way that is intuitively useful. Overall, I believe the motivation for the method could be made clearer - Why is this the best way to incorporate global context? It is also unclear to me how the proposed network relates to the motivating and cited Neuroscience literature. It is nice that the computational complexity is carefully tracked. The CGC network does seem to converge faster but the benefits to validation accuracy seem modest in Table 1/Fig 3. It would be interesting to further explore what is improving the convergence speed -i.e. to test your supposition that it *improves the model*s generalization ability and the gating mechanism reduces the norm of gradients back-propagated to he convolution kernels ...*. The experiments are well described and extensive and some of them do give some more substantial performance improvements, but overall I think the paper does not explain the motivation for the algorithm sufficiently for ICLR publication.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nContext-Gated Convolution\nAs the basic building block of Convolutional Neural Networks (CNNs), the convolutional layer is designed to extract local patterns and lacks the ability to model global context in its nature. Many efforts have been recently made to complement CNNs with the global modeling ability, especially by a family of works on global feature interaction. In these works, the global context information is incorporated into local features before they are fed into convolutional layers. However, research on neuroscience reveals that, besides influences changing the inputs to our neurons, the neurons* ability of modifying their functions dynamically according to context is essential for perceptual tasks, which has been overlooked in most of CNNs. Motivated by this, we propose one novel Context-Gated Convolution (CGC) to explicitly modify the weights of convolutional layers adaptively under the guidance of global context. As such, being aware of the global context, the modulated convolution kernel of our proposed CGC can better extract representative local patterns and compose discriminative features. Moreover, our proposed CGC is lightweight, amenable to modern CNN architectures, and consistently improves the performance of CNNs according to extensive experiments on image classification, action recognition, and machine translation.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a novel and interesting idea of modulating convolutional kernels with global context. They use depthwise factorized channel-wise interaction to reduce complexity and parameters. While an interesting idea, I found it hard to see how the global context would usefully modulate the kernels. While Figure 1 illustrates the idea, it is not clear how the modulated kernel is better suited for the task relative to the original kernel. It would be nice if you could show a simple/toy example where your modulation clearly modifies the kernel in a way that is intuitively useful. 
Overall, I believe the motivation for the method could be made clearer - Why is this the best way to incorporate global context? It is also unclear to me how the proposed network relates to the motivating and cited Neuroscience literature. It is nice that the computational complexity is carefully tracked. The CGC network does seem to converge faster but the benefits to validation accuracy seem modest in Table 1/Fig 3. It would be interesting to further explore what is improving the convergence speed -i.e. to test your supposition that it *improves the model*s generalization ability and the gating mechanism reduces the norm of gradients back-propagated to he convolution kernels ...*. The experiments are well described and extensive and some of them do give some more substantial performance improvements, but overall I think the paper does not explain the motivation for the algorithm sufficiently for ICLR publication.", "role": "assistant" } ]
[ "This paper proposes a novel and interesting idea of modulating convolutional kernels with global context.", "They use depthwise factorized channel-wise interaction to reduce complexity and parameters.", "While an interesting idea, I found it hard to see how the global context would usefully modulate the kernels.", "While Figure 1 illustrates the idea, it is not clear how the modulated kernel is better suited for the task relative to the original kernel.", "It would be nice if you could show a simple/toy example where your modulation clearly modifies the kernel in a way that is intuitively useful.", "Overall, I believe the motivation for the method could be made clearer - Why is this the best way to incorporate global context?", "It is also unclear to me how the proposed network relates to the motivating and cited Neuroscience literature.", "It is nice that the computational complexity is carefully tracked.", "The CGC network does seem to converge faster but the benefits to validation accuracy seem modest in Table 1/Fig 3.", "It would be interesting to further explore what is improving the convergence speed -i.e.", "to test your supposition that it *improves the model*s generalization ability and the gating mechanism reduces the norm of gradients back-propagated to he convolution kernels ...*.", "The experiments are well described and extensive and some of them do give some more substantial performance improvements, but overall I think the paper does not explain the motivation for the algorithm sufficiently for ICLR publication." ]
{ "criticism": 5, "example": 2, "importance_and_relevance": 5, "materials_and_methods": 7, "praise": 4, "presentation_and_reporting": 2, "results_and_discussion": 1, "suggestion_and_solution": 3, "total": 12 }
0.416667
0.166667
0.416667
0.583333
0.333333
0.166667
0.083333
0.25
12
1
1
1
1
1
1
1
1
2.416667
2.400884
0.015782
Robustness and/or Redundancy Emerge in Overparametrized Deep Neural Networks
Deep neural networks (DNNs) perform well on a variety of tasks despite the fact that most used in practice are vastly overparametrized and even capable of perfectly fitting randomly labeled data. Recent evidence suggests that developing "compressible" representations is key for adjusting the complexity of overparametrized networks to the task at hand and avoiding overfitting (Arora et al., 2018; Zhou et al., 2018). In this paper, we provide new empirical evidence that supports this hypothesis, identifying two independent mechanisms that emerge when the network’s width is increased: robustness (having units that can be removed without affecting accuracy) and redundancy (having units with similar activity). In a series of experiments with AlexNet, ResNet and Inception networks in the CIFAR-10 and ImageNet datasets, and also using shallow networks with synthetic data, we show that DNNs consistently increase either their robustness, their redundancy, or both at greater widths for a comprehensive set of hyperparameters. These results suggest that networks in the deep learning regime adjust their effective capacity by developing either robustness or redundancy.
The authors conduct an empirical study of the representations that deep neural networks learn as a function of model size, as measured by two different metrics: how quickly performance degrades when random neurons are removed (*robustness*), or how similar neurons are (*redundancy*). They show that deep neural networks tend to become more robust and more redundant as model size increases, though this is not necessarily true. In general, I believe that this sort of empirical approach is useful, and I appreciate the authors* efforts to study the phenomena of robustness and redundancy empirically. However, in my opinion, I do not believe that this paper is ready for publication in its current state. My central concern is the current imprecision of the paper, which undermines its scientific contribution. Many terms are ill-defined or fuzzy. For example, in section 2, 6 different capacity-constrained features are enumerated but not well-defined, including *semantically redundant units* that are *units which are not linearly correlated to others and which provide a different representation from the previous one, similarly related to the learning task.* What does it mean to *provide a different representation from the previous one* (and what previous one?), or to be *similarly related to the learning task*? Critically, there is a significant amount of speculation throughout the paper that is not clearly delineated from experimental observation. This speculation involves the above ill-defined terms and/or are not backed up. As non-exhaustive examples: 1) In page 3, robustness (as measured by neuron dropouts) is asserted to aid in distinguishing *redundant functional units* and *semantically redundant units* from other units because of differences in how robustness of those units change with model size; but this is not backed up and as far as I can tell, not experimentally investigated. Similar statements about redundancy and similarity are made in page 3. 2) In page 3, *L is more robust and equally or less redundant than S*: what is a holistic class representation? Why is that less robust than a *bag-like representation*? 3) Section 4 (results) has a lot of speculation as well. For example, *we take this discrepancy between robustness and redundancy trends as strong evidence that these models... are autoregularizing largely by forming qualitatively different representations... with units that are merely semantically redundant.* These terms are not clearly defined, and there is scant evidence to back up these conjectures. As it stands, it is difficult to accept the paper because of the many speculative claims it contains. The experimental findings are interesting and should be reported; and implications and conjectures should be discussed; but they should be carefully separated. I encourage the authors to critically examine the paper and remove or reword claims that are not directly supported by the results. On the experiments: a) For Figure 3, shouldn*t the inceptionv3 and inceptionv3-layer graphs be identical when the model size factor is 1x, since the models are identical? What explains the large discrepancies? b) The number of similar neurons per neuron (e.g., Figs 3c and 4c) should increase with model size just because there are more neurons, right? (Mathematically, if we consider a set of random vectors in , then as increases the number of vectors that correlate with each other should increase.) Is the increase that is experimentally observed any different from this? 
== Another comment (no need to respond): Experimental procedures should also be well-defined. For example, since neuron ablations are central to the experiments, the procedure should be explained in the main text instead of being buried in a citation.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRobustness and/or Redundancy Emerge in Overparametrized Deep Neural Networks\nDeep neural networks (DNNs) perform well on a variety of tasks despite the fact that most used in practice are vastly overparametrized and even capable of perfectly fitting randomly labeled data. Recent evidence suggests that developing *compressible* representations is key for adjusting the complexity of overparametrized networks to the task at hand and avoiding overfitting (Arora et al., 2018; Zhou et al., 2018). In this paper, we provide new empirical evidence that supports this hypothesis, identifying two independent mechanisms that emerge when the network’s width is increased: robustness (having units that can be removed without affecting accuracy) and redundancy (having units with similar activity). In a series of experiments with AlexNet, ResNet and Inception networks in the CIFAR-10 and ImageNet datasets, and also using shallow networks with synthetic data, we show that DNNs consistently increase either their robustness, their redundancy, or both at greater widths for a comprehensive set of hyperparameters. These results suggest that networks in the deep learning regime adjust their effective capacity by developing either robustness or redundancy.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors conduct an empirical study of the representations that deep neural networks learn as a function of model size, as measured by two different metrics: how quickly performance degrades when random neurons are removed (*robustness*), or how similar neurons are (*redundancy*). They show that deep neural networks tend to become more robust and more redundant as model size increases, though this is not necessarily true. In general, I believe that this sort of empirical approach is useful, and I appreciate the authors* efforts to study the phenomena of robustness and redundancy empirically. However, in my opinion, I do not believe that this paper is ready for publication in its current state. 
My central concern is the current imprecision of the paper, which undermines its scientific contribution. Many terms are ill-defined or fuzzy. For example, in section 2, 6 different capacity-constrained features are enumerated but not well-defined, including *semantically redundant units* that are *units which are not linearly correlated to others and which provide a different representation from the previous one, similarly related to the learning task.* What does it mean to *provide a different representation from the previous one* (and what previous one?), or to be *similarly related to the learning task*? Critically, there is a significant amount of speculation throughout the paper that is not clearly delineated from experimental observation. This speculation involves the above ill-defined terms and/or are not backed up. As non-exhaustive examples: 1) In page 3, robustness (as measured by neuron dropouts) is asserted to aid in distinguishing *redundant functional units* and *semantically redundant units* from other units because of differences in how robustness of those units change with model size; but this is not backed up and as far as I can tell, not experimentally investigated. Similar statements about redundancy and similarity are made in page 3. 2) In page 3, *L is more robust and equally or less redundant than S*: what is a holistic class representation? Why is that less robust than a *bag-like representation*? 3) Section 4 (results) has a lot of speculation as well. For example, *we take this discrepancy between robustness and redundancy trends as strong evidence that these models... are autoregularizing largely by forming qualitatively different representations... with units that are merely semantically redundant.* These terms are not clearly defined, and there is scant evidence to back up these conjectures. As it stands, it is difficult to accept the paper because of the many speculative claims it contains. The experimental findings are interesting and should be reported; and implications and conjectures should be discussed; but they should be carefully separated. I encourage the authors to critically examine the paper and remove or reword claims that are not directly supported by the results. On the experiments: a) For Figure 3, shouldn*t the inceptionv3 and inceptionv3-layer graphs be identical when the model size factor is 1x, since the models are identical? What explains the large discrepancies? b) The number of similar neurons per neuron (e.g., Figs 3c and 4c) should increase with model size just because there are more neurons, right? (Mathematically, if we consider a set of random vectors in , then as increases the number of vectors that correlate with each other should increase.) Is the increase that is experimentally observed any different from this? == Another comment (no need to respond): Experimental procedures should also be well-defined. For example, since neuron ablations are central to the experiments, the procedure should be explained in the main text instead of being buried in a citation.", "role": "assistant" } ]
[ "The authors conduct an empirical study of the representations that deep neural networks learn as a function of model size, as measured by two different metrics: how quickly performance degrades when random neurons are removed (*robustness*), or how similar neurons are (*redundancy*).", "They show that deep neural networks tend to become more robust and more redundant as model size increases, though this is not necessarily true.", "In general, I believe that this sort of empirical approach is useful, and I appreciate the authors* efforts to study the phenomena of robustness and redundancy empirically.", "However, in my opinion, I do not believe that this paper is ready for publication in its current state.", "My central concern is the current imprecision of the paper, which undermines its scientific contribution.", "Many terms are ill-defined or fuzzy.", "For example, in section 2, 6 different capacity-constrained features are enumerated but not well-defined, including *semantically redundant units* that are *units which are not linearly correlated to others and which provide a different representation from the previous one, similarly related to the learning task.", "* What does it mean to *provide a different representation from the previous one* (and what previous one?", "), or to be *similarly related to the learning task*?", "Critically, there is a significant amount of speculation throughout the paper that is not clearly delineated from experimental observation.", "This speculation involves the above ill-defined terms and/or are not backed up.", "As non-exhaustive examples: 1) In page 3, robustness (as measured by neuron dropouts) is asserted to aid in distinguishing *redundant functional units* and *semantically redundant units* from other units because of differences in how robustness of those units change with model size; but this is not backed up and as far as I can tell, not experimentally investigated.", "Similar statements about redundancy and similarity are made in page 3.", "2) In page 3, *L is more robust and equally or less redundant than S*: what is a holistic class representation?", "Why is that less robust than a *bag-like representation*?", "3) Section 4 (results) has a lot of speculation as well.", "For example, *we take this discrepancy between robustness and redundancy trends as strong evidence that these models... are autoregularizing largely by forming qualitatively different representations... 
with units that are merely semantically redundant.", "* These terms are not clearly defined, and there is scant evidence to back up these conjectures.", "As it stands, it is difficult to accept the paper because of the many speculative claims it contains.", "The experimental findings are interesting and should be reported; and implications and conjectures should be discussed; but they should be carefully separated.", "I encourage the authors to critically examine the paper and remove or reword claims that are not directly supported by the results.", "On the experiments: a) For Figure 3, shouldn*t the inceptionv3 and inceptionv3-layer graphs be identical when the model size factor is 1x, since the models are identical?", "What explains the large discrepancies?", "b) The number of similar neurons per neuron (e.g., Figs 3c and 4c) should increase with model size just because there are more neurons, right?", "(Mathematically, if we consider a set of random vectors in , then as increases the number of vectors that correlate with each other should increase.)", "Is the increase that is experimentally observed any different from this?", "== Another comment (no need to respond): Experimental procedures should also be well-defined.", "For example, since neuron ablations are central to the experiments, the procedure should be explained in the main text instead of being buried in a citation." ]
{ "criticism": 9, "example": 7, "importance_and_relevance": 3, "materials_and_methods": 15, "praise": 2, "presentation_and_reporting": 12, "results_and_discussion": 7, "suggestion_and_solution": 6, "total": 28 }
0.321429
0.25
0.107143
0.535714
0.071429
0.428571
0.25
0.214286
28
1
1
1
1
1
1
1
1
2.178571
-1.372475
3.551047
POLYNOMIAL ACTIVATION FUNCTIONS
Activation is a nonlinearity function that plays a predominant role in the convergence and performance of deep neural networks. While Rectified Linear Unit (ReLU) is the most successful activation function, its derivatives have shown superior performance on benchmark datasets. In this work, we explore the polynomials as activation functions (order ≥ 2) that can approximate continuous real valued function within a given interval. Leveraging this property, the main idea is to learn the nonlinearity, accepting that the ensuing function may not be monotonic. While having the ability to learn more suitable nonlinearity, we cannot ignore the fact that it is a challenge to achieve stable performance due to exploding gradients - which is prominent with the increase in order. To handle this issue, we introduce dynamic input scaling, output scaling, and lower learning rate for the polynomial weights. Moreover, lower learning rate will control the abrupt fluctuations of the polynomials between weight updates. In experiments on three public datasets, our proposed method matches the performance of prior activation functions, thus providing insight into a network’s nonlinearity preference.
Motivation: The authors propose the use of polynomials as the choice of activation function. Clarity: The authors does not give a very clear introduction why using polynomial functions as activation functions is a good idea. Rather they say, *, we believe that the nonlinearity learned by the deep networks can provide more insight on how they can be designed.* a bit more intuition in the beginning of the introduction would make the paper a bit easier to read. *We demonstrate the stability of polynomial of orders 2 to 9 by introducing scaling functions and initialization scheme that approximates well known activation functions* Generally, a good technique is to motivate your design decisions before actually stating what those design decisions are. Atleast to me, it was not clear, why you need to introduce scaling functions and initialization scheme in the beginning of the introduction. Scaling Functions: Authors note that the polynomials of order n can explode, to circumvent this authors introduce scaling functions to dynamically scale the inputs. *The purpose of our proposal is to explore newer nonlinearities by allowing the network to choose an appropriate nonlinearity for each layer and eventually, design better activation functions on the gathered intuitions* Having this somewhere in the introduction could help. In general, the paper does not motivate well why use of polynomial activation functions is a good idea.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPOLYNOMIAL ACTIVATION FUNCTIONS\nActivation is a nonlinearity function that plays a predominant role in the convergence and performance of deep neural networks. While Rectified Linear Unit (ReLU) is the most successful activation function, its derivatives have shown superior performance on benchmark datasets. In this work, we explore the polynomials as activation functions (order ? 2) that can approximate continuous real valued function within a given interval. Leveraging this property, the main idea is to learn the nonlinearity, accepting that the ensuing function may not be monotonic. While having the ability to learn more suitable nonlinearity, we cannot ignore the fact that it is a challenge to achieve stable performance due to exploding gradients - which is prominent with the increase in order. To handle this issue, we introduce dynamic input scaling, output scaling, and lower learning rate for the polynomial weights. Moreover, lower learning rate will control the abrupt fluctuations of the polynomials between weight updates. In experiments on three public datasets, our proposed method matches the performance of prior activation functions, thus providing insight into a network’s nonlinearity preference.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nMotivation: The authors propose the use of polynomials as the choice of activation function. Clarity: The authors does not give a very clear introduction why using polynomial functions as activation functions is a good idea. Rather they say, *, we believe that the nonlinearity learned by the deep networks can provide more insight on how they can be designed.* a bit more intuition in the beginning of the introduction would make the paper a bit easier to read. *We demonstrate the stability of polynomial of orders 2 to 9 by introducing scaling functions and initialization scheme that approximates well known activation functions* Generally, a good technique is to motivate your design decisions before actually stating what those design decisions are. 
Atleast to me, it was not clear, why you need to introduce scaling functions and initialization scheme in the beginning of the introduction. Scaling Functions: Authors note that the polynomials of order n can explode, to circumvent this authors introduce scaling functions to dynamically scale the inputs. *The purpose of our proposal is to explore newer nonlinearities by allowing the network to choose an appropriate nonlinearity for each layer and eventually, design better activation functions on the gathered intuitions* Having this somewhere in the introduction could help. In general, the paper does not motivate well why use of polynomial activation functions is a good idea.", "role": "assistant" } ]
[ "Motivation: The authors propose the use of polynomials as the choice of activation function.", "Clarity: The authors does not give a very clear introduction why using polynomial functions as activation functions is a good idea.", "Rather they say, *, we believe that the nonlinearity learned by the deep networks can provide more insight on how they can be designed.", "* a bit more intuition in the beginning of the introduction would make the paper a bit easier to read.", "*We demonstrate the stability of polynomial of orders 2 to 9 by introducing scaling functions and initialization scheme that approximates well known activation functions* Generally, a good technique is to motivate your design decisions before actually stating what those design decisions are.", "Atleast to me, it was not clear, why you need to introduce scaling functions and initialization scheme in the beginning of the introduction.", "Scaling Functions: Authors note that the polynomials of order n can explode, to circumvent this authors introduce scaling functions to dynamically scale the inputs.", "*The purpose of our proposal is to explore newer nonlinearities by allowing the network to choose an appropriate nonlinearity for each layer and eventually, design better activation functions on the gathered intuitions* Having this somewhere in the introduction could help.", "In general, the paper does not motivate well why use of polynomial activation functions is a good idea." ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 5, "praise": 2, "presentation_and_reporting": 3, "results_and_discussion": 3, "suggestion_and_solution": 4, "total": 9 }
0.333333
0
0.333333
0.555556
0.222222
0.333333
0.333333
0.444444
9
1
0
1
1
1
1
1
1
2.555556
2.303037
0.252519
Noisy $\ell^{0}$-Sparse Subspace Clustering on Dimensionality Reduced Data
High-dimensional data often lie in or close to low-dimensional subspaces. Sparse subspace clustering methods with sparsity induced by L0-norm, such as L0-Sparse Subspace Clustering (L0-SSC), are demonstrated to be more effective than its L1 counterpart such as Sparse Subspace Clustering (SSC). However, these L0-norm based subspace clustering methods are restricted to clean data that lie exactly in subspaces. Real data often suffer from noise and they may lie close to subspaces. We propose noisy L0-SSC to handle noisy data so as to improve the robustness. We show that the optimal solution to the optimization problem of noisy L0-SSC achieves subspace detection property (SDP), a key element with which data from different subspaces are separated, under deterministic and randomized models. Our results provide theoretical guarantee on the correctness of noisy L0-SSC in terms of SDP on noisy data. We further propose Noisy-DR-L0-SSC which provably recovers the subspaces on dimensionality reduced data. Noisy-DR-L0-SSC first projects the data onto a lower dimensional space by linear transformation, then performs noisy L0-SSC on the dimensionality reduced data so as to improve the efficiency. The experimental results demonstrate the effectiveness of noisy L0-SSC and Noisy-DR-L0-SSC.
This paper propose noisy L0-SSC and Noisy-DR-L0-SSC to handle noisy data and dimensionality reduced data so as to improve the robustness of sparse subspace clustering methods. The authors conduct a complete and detailed theoretical analysis, and the results provide theoretical guarantee on the correctness in terms of SDP on noisy data. Positive: (1) Under both deterministic model and randomized models, this paper presents noisy L0-SSC, which enhances L0-SSC by theoretical guarantee on the correctness of clustering on noisy data. (2) Moreover, this paper presents Noisy-DR-L0-SSC, the method improves the efficiency of the basic algorithm, and provably recovers the underlying subspace structure in the original data from the dimensionality reduced data under deterministic model. Negative: (1) The authors conduct a complete and detailed theoretical analysis, but the description of the algorithm is not clear and the optimization process is not detailed enough. It is very difficult to understand it. (2) This paper proposes a noise model based on Yang et.al (2016), the objective function and the optimization process are already available, therefore, its innovation is not clear in this paper. (3) The results of experiments demonstrate the effectiveness of noisy L0-SSC and Noisy-DR-L0-SSC, but this paper not only lacks the description of experimental data, but also lacks the large-scale experimental data. In addition, the experiment is inadequate in this paper, it is suggested to add experiments to analyze the influence of noise ratio on the experimental results. It is hoped that the author will give the parameter analysis and convergence analysis of the algorithm in an experimental way.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNoisy $ell^{0}$-Sparse Subspace Clustering on Dimensionality Reduced Data\nHigh-dimensional data often lie in or close to low-dimensional subspaces. Sparse subspace clustering methods with sparsity induced by L0-norm, such as L0-Sparse Subspace Clustering (L0-SSC), are demonstrated to be more effective than its L1 counterpart such as Sparse Subspace Clustering (SSC). However, these L0-norm based subspace clustering methods are restricted to clean data that lie exactly in subspaces. Real data often suffer from noise and they may lie close to subspaces. We propose noisy L0-SSC to handle noisy data so as to improve the robustness. We show that the optimal solution to the optimization problem of noisy L0-SSC achieves subspace detection property (SDP), a key element with which data from different subspaces are separated, under deterministic and randomized models. Our results provide theoretical guarantee on the correctness of noisy L0-SSC in terms of SDP on noisy data. We further propose Noisy-DR-L0-SSC which provably recovers the subspaces on dimensionality reduced data. Noisy-DR-L0-SSC first projects the data onto a lower dimensional space by linear transformation, then performs noisy L0-SSC on the dimensionality reduced data so as to improve the efficiency. The experimental results demonstrate the effectiveness of noisy L0-SSC and Noisy-DR-L0-SSC.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper propose noisy -SSC and Noisy-DR- -SSC to handle noisy data and dimensionality reduced data so as to improve the robustness of sparse subspace clustering methods. The authors conduct a complete and detailed theoretical analysis, and the results provide theoretical guarantee on the correctness in terms of SDP on noisy data. Positive: (1) Under both deterministic model and randomized models, this paper presents noisy -SSC , which enhances -SSC by theoretical guarantee on the correctness of clustering on noisy data. 
(2) Moreover, this paper presents Noisy-DR- -SSC, the method improves the efficiency of the basic algorithm, and provably recovers the underlying subspace structure in the original data from the dimensionality reduced data under deterministic model. Negative: (1) The authors conduct a complete and detailed theoretical analysis, but the description of the algorithm is not clear and the optimization process is not detailed enough. It is very difficult to understand it. (2) This paper proposes a noise model based on Yang et.al (2016), the objective function and the optimization process are already available, therefore, its innovation is not clear in this paper. (3) The results of experiments demonstrate the effectiveness of noisy -SSC and Noisy-DR- -SSC, but this paper not only lacks the description of experimental data, but also lacks the large-scale experimental data. In addition, the experiment is inadequate in this paper, it is suggested to add experiments to analyze the influence of noise ratio on the experimental results. It is hoped that the author will give the parameter analysis and convergence analysis of the algorithm in an experimental way.", "role": "assistant" } ]
[ "This paper propose noisy -SSC and Noisy-DR- -SSC to handle noisy data and dimensionality reduced data so as to improve the robustness of sparse subspace clustering methods.", "The authors conduct a complete and detailed theoretical analysis, and the results provide theoretical guarantee on the correctness in terms of SDP on noisy data.", "Positive: (1) Under both deterministic model and randomized models, this paper presents noisy -SSC , which enhances -SSC by theoretical guarantee on the correctness of clustering on noisy data.", "(2) Moreover, this paper presents Noisy-DR- -SSC, the method improves the efficiency of the basic algorithm, and provably recovers the underlying subspace structure in the original data from the dimensionality reduced data under deterministic model.", "Negative: (1) The authors conduct a complete and detailed theoretical analysis, but the description of the algorithm is not clear and the optimization process is not detailed enough.", "It is very difficult to understand it.", "(2) This paper proposes a noise model based on Yang et.al (2016), the objective function and the optimization process are already available, therefore, its innovation is not clear in this paper.", "(3) The results of experiments demonstrate the effectiveness of noisy -SSC and Noisy-DR- -SSC, but this paper not only lacks the description of experimental data, but also lacks the large-scale experimental data.", "In addition, the experiment is inadequate in this paper, it is suggested to add experiments to analyze the influence of noise ratio on the experimental results.", "It is hoped that the author will give the parameter analysis and convergence analysis of the algorithm in an experimental way." ]
{ "criticism": 5, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 9, "praise": 1, "presentation_and_reporting": 3, "results_and_discussion": 4, "suggestion_and_solution": 1, "total": 10 }
0.5
0
0.1
0.9
0.1
0.3
0.4
0.1
10
1
0
1
1
1
1
1
1
2.4
2.257958
0.142042
Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks
The point estimates of ReLU classification networks, arguably the most widely used neural network architecture, have recently been shown to have arbitrarily high confidence far away from the training data. This architecture is thus not robust, e.g., against out-of-distribution data. Approximate Bayesian posteriors on the weight space have been empirically demonstrated to improve predictive uncertainty in deep learning. The theoretical analysis of such Bayesian approximations is limited, including for ReLU classification networks. We present an analysis of approximate Gaussian posterior distributions on the weights of ReLU networks. We show that even a simplistic (thus cheap), non-Bayesian Gaussian distribution fixes the asymptotic overconfidence issue. Furthermore, when a Bayesian method, even if a simple one, is employed to obtain the Gaussian, the confidence becomes better calibrated. This theoretical result motivates a range of Laplace approximations along a fidelity-cost trade-off. We validate these findings empirically via experiments using common deep ReLU networks.
The paper presents the argument that ReLU networks do not have accurate estimates of confidence in areas that are far from the training data. They suggest explicitly estimating the Covariance of the final layer of a network as a way to more accurately estimate the confidence. They provide theoretical results that bound the logit predictions and use this to choose their covariance matrix. The clarity of the paper is not great. They do not discuss the relevance of many of their theoretical results and instead present them in a large list. The experiments are limited, but the results are promising, particularly Figure 3. Overall the paper feels like it has a lot of content without a lot of substance. This may be because they spent a lot of time listing all their theorems, but not as much motivating why these theoretical results add novelty and are important. Many of their theoretical results are rather obvious (and use well-studied techniques), but they have not been presented explicitly for neural networks.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nBeing Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks\nThe point estimates of ReLU classification networks, arguably the most widely used neural network architecture, have recently been shown to have arbitrarily high confidence far away from the training data. This architecture is thus not robust, e.g., against out-of-distribution data. Approximate Bayesian posteriors on the weight space have been empirically demonstrated to improve predictive uncertainty in deep learning. The theoretical analysis of such Bayesian approximations is limited, including for ReLU classification networks. We present an analysis of approximate Gaussian posterior distributions on the weights of ReLU networks. We show that even a simplistic (thus cheap), non-Bayesian Gaussian distribution fixes the asymptotic overconfidence issue. Furthermore, when a Bayesian method, even if a simple one, is employed to obtain the Gaussian, the confidence becomes better calibrated. This theoretical result motivates a range of Laplace approximations along a fidelity-cost trade-off. We validate these findings empirically via experiments using common deep ReLU networks.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper presents the argument that ReLU networks do not have accurate estimates of confidence in areas that are far from the training data. They suggest explicitly estimating the Covariance of the final layer of a network as a way to more accurately estimate the confidence. They provide theoretical results that bound the logit predictions and use this to choose their covariance matrix. The clarity of the paper is not great. They do not discuss the relevance of many of their theoretical results and instead present them in a large list. The experiments are limited, but the results are promising, particularly Figure 3. Overall the paper feels like it has a lot of content without a lot of substance. 
This may be because they spent a lot of time listing all their theorems, but not as much motivating why these theoretical results add novelty and are important. Many of their theoretical results are rather obvious (and use well-studied techniques), but they have not been presented explicitly for neural networks.", "role": "assistant" } ]
[ "The paper presents the argument that ReLU networks do not have accurate estimates of confidence in areas that are far from the training data.", "They suggest explicitly estimating the Covariance of the final layer of a network as a way to more accurately estimate the confidence.", "They provide theoretical results that bound the logit predictions and use this to choose their covariance matrix.", "The clarity of the paper is not great.", "They do not discuss the relevance of many of their theoretical results and instead present them in a large list.", "The experiments are limited, but the results are promising, particularly Figure 3.", "Overall the paper feels like it has a lot of content without a lot of substance.", "This may be because they spent a lot of time listing all their theorems, but not as much motivating why these theoretical results add novelty and are important.", "Many of their theoretical results are rather obvious (and use well-studied techniques), but they have not been presented explicitly for neural networks." ]
{ "criticism": 6, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 5, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 7, "suggestion_and_solution": 1, "total": 9 }
0.666667
0
0.222222
0.555556
0.222222
0.111111
0.777778
0.111111
9
1
0
1
1
1
1
1
1
2.666667
2.414148
0.252519
Residual EBMs: Does Real vs. Fake Text Discrimination Generalize?
Energy-based models (EBMs), a.k.a. un-normalized models, have had recent successes in continuous spaces. However, they have not been successfully applied to model text sequences. While decreasing the energy at training samples is straightforward, mining (negative) samples where the energy should be increased is difficult. In part, this is because standard gradient-based methods are not readily applicable when the input is high-dimensional and discrete. Here, we side-step this issue by generating negatives using pre-trained auto-regressive language models. The EBM then works in the {\em residual} of the language model; and is trained to discriminate real text from text generated by the auto-regressive models. We investigate the generalization ability of residual EBMs, a pre-requisite for using them in other applications. We extensively analyze generalization for the task of classifying whether an input is machine or human generated, a natural task given the training loss and how we mine negatives. Overall, we observe that EBMs can generalize remarkably well to changes in the architecture of the generators producing negatives. However, EBMs exhibit more sensitivity to the training set used by such generators.
The authors consider energy-based learning for discriminating real vs machine generated text. They explore the approach of using a generative text model to generate machine generated training examples. They then consider generalization when the generative model in test time is different from the generative model used for training, as well as if the training data changes to a different corpus. They observe that their model is reasonably able to generalize when the generative model or training data is changed. The experiments in this work seem fine. My main issue is with the theory, specifically if their approach to using an energy function is truly different from supervised learning. It is not really clear why an energy model is useful for the tasks they consider in the paper, as it essentially seems to be a binary classification problem. Additionally, is it fair to say that the BCE loss used is essentially the classic supervised loss function, modulo using the most offending negative? If we model the probability the text is real as exp(-E(x)) / (exp(-E(x)) + 1), then we get the classic supervised loss function (again modulo using the most offending negative). To me, an energy based approach would be to model P(x) = exp(-E(x)) / Z, where Z is a normalization constant. I would calculate the log likelihood as -E(x) - log(Z), which agrees with LeCun, 2006 and Du and Mordatch, 2019. The authors need to do a better job explaining why they consider this an energy based approach and relate it to more typical energy based approaches. Additionally, a bit of text describing why using the most offending negative would be helpful. An additional experiment that would be interesting is to train the model using many different generative text models and test on an unused generative text model. I would also like to see a more formal definition of *residual*.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nResidual EBMs: Does Real vs. Fake Text Discrimination Generalize?\nEnergy-based models (EBMs), a.k.a. un-normalized models, have had recent successes in continuous spaces. However, they have not been successfully applied to model text sequences. While decreasing the energy at training samples is straightforward, mining (negative) samples where the energy should be increased is difficult. In part, this is because standard gradient-based methods are not readily applicable when the input is high-dimensional and discrete. Here, we side-step this issue by generating negatives using pre-trained auto-regressive language models. The EBM then works in the {em residual} of the language model; and is trained to discriminate real text from text generated by the auto-regressive models. We investigate the generalization ability of residual EBMs, a pre-requisite for using them in other applications. We extensively analyze generalization for the task of classifying whether an input is machine or human generated, a natural task given the training loss and how we mine negatives. Overall, we observe that EBMs can generalize remarkably well to changes in the architecture of the generators producing negatives. However, EBMs exhibit more sensitivity to the training set used by such generators.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors consider energy-based learning for discriminating real vs machine generated text. They explore the approach of using a generative text model to generate machine generated training examples. They then consider generalization when the generative model in test time is different from the generative model used for training, as well as if the training data changes to a different corpus. They observe that their model is reasonably able to generalize when the generative model or training data is changed. The experiments in this work seem fine. My main issue is with the theory, specifically if their approach to using an energy function is truly different from supervised learning. 
It is not really clear why an energy model is useful for the tasks they consider in the paper, as it essentially seems to be a binary classification problem. Additionally, is it fair to say that the BCE loss used is essentially the classic supervised loss function, modulo using the most offending negative? If we model the probability the text is real as exp(-E(x)) / (exp(-E(x)) + 1), then we get the classic supervised loss function (again modulo using the most offending negative). To me, an energy based approach would be to model P(x) = exp(-E(x)) / Z, where Z is a normalization constant. I would calculate the log likelihood as -E(x) - log(Z), which agrees with LeCun, 2006 and Du and Mordatch, 2019. The authors need to do a better job explaining why they consider this an energy based approach and relate it to more typical energy based approaches. Additionally, a bit of text describing why using the most offending negative would be helpful. An additional experiment that would be interesting is to train the model using many different generative text models and test on an unused generative text model. I would also like to see a more formal definition of *residual*.", "role": "assistant" } ]
[ "The authors consider energy-based learning for discriminating real vs machine generated text.", "They explore the approach of using a generative text model to generate machine generated training examples.", "They then consider generalization when the generative model in test time is different from the generative model used for training, as well as if the training data changes to a different corpus.", "They observe that their model is reasonably able to generalize when the generative model or training data is changed.", "The experiments in this work seem fine.", "My main issue is with the theory, specifically if their approach to using an energy function is truly different from supervised learning.", "It is not really clear why an energy model is useful for the tasks they consider in the paper, as it essentially seems to be a binary classification problem.", "Additionally, is it fair to say that the BCE loss used is essentially the classic supervised loss function, modulo using the most offending negative?", "If we model the probability the text is real as exp(-E(x)) / (exp(-E(x)) + 1), then we get the classic supervised loss function (again modulo using the most offending negative).", "To me, an energy based approach would be to model P(x) = exp(-E(x)) / Z, where Z is a normalization constant.", "I would calculate the log likelihood as -E(x) - log(Z), which agrees with LeCun, 2006 and Du and Mordatch, 2019.", "The authors need to do a better job explaining why they consider this an energy based approach and relate it to more typical energy based approaches.", "Additionally, a bit of text describing why using the most offending negative would be helpful.", "An additional experiment that would be interesting is to train the model using many different generative text models and test on an unused generative text model.", "I would also like to see a more formal definition of *residual*." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 13, "praise": 1, "presentation_and_reporting": 7, "results_and_discussion": 2, "suggestion_and_solution": 5, "total": 15 }
0.066667
0
0.2
0.866667
0.066667
0.466667
0.133333
0.333333
15
1
0
1
1
1
1
1
1
2.133333
2.070204
0.06313
Residual EBMs: Does Real vs. Fake Text Discrimination Generalize?
Energy-based models (EBMs), a.k.a. un-normalized models, have had recent successes in continuous spaces. However, they have not been successfully applied to model text sequences. While decreasing the energy at training samples is straightforward, mining (negative) samples where the energy should be increased is difficult. In part, this is because standard gradient-based methods are not readily applicable when the input is high-dimensional and discrete. Here, we side-step this issue by generating negatives using pre-trained auto-regressive language models. The EBM then works in the {\em residual} of the language model; and is trained to discriminate real text from text generated by the auto-regressive models. We investigate the generalization ability of residual EBMs, a pre-requisite for using them in other applications. We extensively analyze generalization for the task of classifying whether an input is machine or human generated, a natural task given the training loss and how we mine negatives. Overall, we observe that EBMs can generalize remarkably well to changes in the architecture of the generators producing negatives. However, EBMs exhibit more sensitivity to the training set used by such generators.
The paper discusses training an energy-based discriminator to discriminates the generated examples from a language model from human-written sentences. To do this, it assigns lower energy value to the real text and higher energy values to the generated text, by optimizing binary cross-entropy. The motivation of the paper claims that the authors solve (or at least move toward solving) the problem of sampling from energy-based models for high-dimensional sequence problems; however, I do not see any contribution towards that. The importance of having a well-trained energy-based model is its modeling capability, which also enables us to sample from it (that is done using gradient descent and Langevin dynamic for modeling images). But here, the energy-based model has no effect on the quality of samples from the auto-regressive model, so it does not give us a better model for text. It would be similar to EBGAN (Zhao et al., 2017), but only training the energy discriminator and keeping the generator as is (here a pretrained LM). This work would become an interesting and influential work if you can update the generator parameters from the feedback of the energy-based model, so the generator works as a sampling process for the energy-based model (which was what I expected while I was reading the first part of the paper).
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nResidual EBMs: Does Real vs. Fake Text Discrimination Generalize?\nEnergy-based models (EBMs), a.k.a. un-normalized models, have had recent successes in continuous spaces. However, they have not been successfully applied to model text sequences. While decreasing the energy at training samples is straightforward, mining (negative) samples where the energy should be increased is difficult. In part, this is because standard gradient-based methods are not readily applicable when the input is high-dimensional and discrete. Here, we side-step this issue by generating negatives using pre-trained auto-regressive language models. The EBM then works in the {em residual} of the language model; and is trained to discriminate real text from text generated by the auto-regressive models. We investigate the generalization ability of residual EBMs, a pre-requisite for using them in other applications. We extensively analyze generalization for the task of classifying whether an input is machine or human generated, a natural task given the training loss and how we mine negatives. Overall, we observe that EBMs can generalize remarkably well to changes in the architecture of the generators producing negatives. However, EBMs exhibit more sensitivity to the training set used by such generators.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper discusses training an energy-based discriminator to discriminates the generated examples from a language model from human-written sentences. To do this, it assigns lower energy value to the real text and higher energy values to the generated text, by optimizing binary cross-entropy. The motivation of the paper claims that the authors solve (or at least move toward solving) the problem of sampling from energy-based models for high-dimensional sequence problems; however, I do not see any contribution towards that. 
The importance of having a well-trained energy-based model is its modeling capability, which also enables us to sample from it (that is done using gradient descent and Langevin dynamic for modeling images). But here, the energy-based model has no effect on the quality of samples from the auto-regressive model, so it does not give us a better model for text. It would be similar to EBGAN (Zhao et al., 2017), but only training the energy discriminator and keeping the generator as is (here a pretrained LM). This work would become an interesting and influential work if you can update the generator parameters from the feedback of the energy-based model, so the generator works as a sampling process for the energy-based model (which was what I expected while I was reading the first part of the paper).", "role": "assistant" } ]
[ "The paper discusses training an energy-based discriminator to discriminates the generated examples from a language model from human-written sentences.", "To do this, it assigns lower energy value to the real text and higher energy values to the generated text, by optimizing binary cross-entropy.", "The motivation of the paper claims that the authors solve (or at least move toward solving) the problem of sampling from energy-based models for high-dimensional sequence problems; however, I do not see any contribution towards that.", "The importance of having a well-trained energy-based model is its modeling capability, which also enables us to sample from it (that is done using gradient descent and Langevin dynamic for modeling images).", "But here, the energy-based model has no effect on the quality of samples from the auto-regressive model, so it does not give us a better model for text.", "It would be similar to EBGAN (Zhao et al., 2017), but only training the energy discriminator and keeping the generator as is (here a pretrained LM).", "This work would become an interesting and influential work if you can update the generator parameters from the feedback of the energy-based model, so the generator works as a sampling process for the energy-based model (which was what I expected while I was reading the first part of the paper)." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 6, "praise": 1, "presentation_and_reporting": 3, "results_and_discussion": 2, "suggestion_and_solution": 1, "total": 7 }
0.285714
0
0.285714
0.857143
0.142857
0.428571
0.285714
0.142857
7
1
0
1
1
1
1
1
1
2.428571
1.860404
0.568167
Recognizing Plans by Learning Embeddings from Observed Action Distributions
Plan recognition aims to look for target plans to best explain the observed actions based on plan libraries and/or domain models. Despite the success of previous approaches on plan recognition, they mostly rely on correct action observations. Recent advances in visual activity recognition have the potential of enabling applications such as automated video surveillance. Effective approaches for such problems would require the ability to recognize the plans of agents from video information. Traditional plan recognition algorithms rely on access to detailed planning domain models. One recent promising direction involves learning approximate (or shallow) domain models directly from the observed activity sequences. Such plan recognition approaches expect observed action sequences as inputs. However, visual inference results are often noisy and uncertain, typically represented as a distribution over possible actions. In this work, we develop a visual plan recognition framework that recognizes plans with an approximate domain model learned from uncertain visual data.
The paper presents an approach to unsupervised plan recognition from video, which can take into account the uncertainty of actions that have been recognized automatically from video, in this case the 50 salads database. This probabilistic approach seems plausible, but I would honestly be surprised if something like this has not been attempted before. The paper contains very little discussion of the relevance of the results to the state of the art and about other related techniques, making it hard for someone not from the field to judge the contribution of this work. The proposed approach seems straightforward and reasonable. I am not sure if the 50 salads database is not maybe considered a toy task, and how the results presented here compare to other work - this should definitely be addressed before publication. Finally, the paper is not well formatted, and most images have very small captions (font size), making it hard to read.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRecognizing Plans by Learning Embeddings from Observed Action Distributions\nPlan recognition aims to look for target plans to best explain the observed actions based on plan libraries and/or domain models. Despite the success of previous approaches on plan recognition, they mostly rely on correct action observations. Recent advances in visual activity recognition have the potential of enabling applications such as automated video surveillance. Effective approaches for such problems would require the ability to recognize the plans of agents from video information. Traditional plan recognition algorithms rely on access to detailed planning domain models. One recent promising direction involves learning approximate (or shallow) domain models directly from the observed activity sequences. Such plan recognition approaches expect observed action sequences as inputs. However, visual inference results are often noisy and uncertain, typically represented as a distribution over possible actions. In this work, we develop a visual plan recognition framework that recognizes plans with an approximate domain model learned from uncertain visual data.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper presents an approach to unsupervised plan recognition from video, which can take into account the uncertainty of actions that have been recognized automatically from video, in this case the 50 salads database. This probabilistic approach seems plausible, but I would honestly be surprised if something like this has not been attempted before. The paper contains very little discussion of the relevance of the results to the state of the art and about other related techniques, making it hard for someone not from the field to judge the contribution of this work. The proposed approach seems straightforward and reasonable. I am not sure if the 50 salads database is not maybe considered a toy task, and how the results presented here compare to other work - this should definitely be addressed before publication. 
Finally, the paper is not well formatted, and most images have very small captions (font size), making it hard to read.", "role": "assistant" } ]
[ "The paper presents an approach to unsupervised plan recognition from video, which can take into account the uncertainty of actions that have been recognized automatically from video, in this case the 50 salads database.", "This probabilistic approach seems plausible, but I would honestly be surprised if something like this has not been attempted before.", "The paper contains very little discussion of the relevance of the results to the state of the art and about other related techniques, making it hard for someone not from the field to judge the contribution of this work.", "The proposed approach seems straightforward and reasonable.", "I am not sure if the 50 salads database is not maybe considered a toy task, and how the results presented here compare to other work - this should definitely be addressed before publication.", "Finally, the paper is not well formatted, and most images have very small captions (font size), making it hard to read." ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 3, "praise": 1, "presentation_and_reporting": 1, "results_and_discussion": 2, "suggestion_and_solution": 1, "total": 6 }
0.5
0
0.333333
0.5
0.166667
0.166667
0.333333
0.166667
6
1
0
1
1
1
1
1
1
2.166667
1.393328
0.773339
Random Partition Relaxation for Training Binary and Ternary Weight Neural Network
We present Random Partition Relaxation (RPR), a method for strong quantization of the parameters of convolutional neural networks to binary (+1/-1) and ternary (+1/0/-1) values. Starting from a pretrained model, we first quantize the weights and then relax random partitions of them to their continuous values for retraining before quantizing them again and switching to another weight partition for further adaptation. We empirically evaluate the performance of RPR with ResNet-18, ResNet-50 and GoogLeNet on the ImageNet classification task for binary and ternary weight networks. We show accuracies beyond the state-of-the-art for binary- and ternary-weight GoogLeNet and competitive performance for ResNet-18 and ResNet-50 using a SGD-based training method that can easily be integrated into existing frameworks.
Authors propose RPR - a way to randomly partition and quantize weights and train the remaining parameters followed by relaxation in alternate cycles to train quantized models. The authors show the results on latest network architectures on ImageNet dataset - ResNet18, ResNet-50 and GoogLeNet and the results either compare to other quantization methodologies and in the case of GoogLeNet they surpass the state of the art ADMM method by 1.7% top-1 accuracy The current experimental section does not have any runtime or model size numbers, it would be great if authors could talk about that and maybe add that information to get a more apples to apples comparison on not just accuracy but other factors. Also, while Figure1 is a useful visualization of how weights change in the RPR method what is missing in the paper is a more technical/mathematical explanation of the procedure as well as detailed discussion on the hyper-parameters and how things change if we choose a different ratio for the partitioning. As it is, the paper currently is easy to read, seems like a good idea and has good results but I feel it is important to add more detail in both methodology and in experimental results especially have more ablations and understand what is truly happening to the model.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRandom Partition Relaxation for Training Binary and Ternary Weight Neural Network\nWe present Random Partition Relaxation (RPR), a method for strong quantization of the parameters of convolutional neural networks to binary (+1/-1) and ternary (+1/0/-1) values. Starting from a pretrained model, we first quantize the weights and then relax random partitions of them to their continuous values for retraining before quantizing them again and switching to another weight partition for further adaptation. We empirically evaluate the performance of RPR with ResNet-18, ResNet-50 and GoogLeNet on the ImageNet classification task for binary and ternary weight networks. We show accuracies beyond the state-of-the-art for binary- and ternary-weight GoogLeNet and competitive performance for ResNet-18 and ResNet-50 using a SGD-based training method that can easily be integrated into existing frameworks.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nAuthors propose RPR - a way to randomly partition and quantize weights and train the remaining parameters followed by relaxation in alternate cycles to train quantized models. The authors show the results on latest network architectures on ImageNet dataset - ResNet18, ResNet-50 and GoogLeNet and the results either compare to other quantization methodologies and in the case of GoogLeNet they surpass the state of the art ADMM method by 1.7% top-1 accuracy The current experimental section does not have any runtime or model size numbers, it would be great if authors could talk about that and maybe add that information to get a more apples to apples comparison on not just accuracy but other factors. Also, while Figure1 is a useful visualization of how weights change in the RPR method what is missing in the paper is a more technical/mathematical explanation of the procedure as well as detailed discussion on the hyper-parameters and how things change if we choose a different ratio for the partitioning. 
As it is, the paper currently is easy to read, seems like a good idea and has good results but I feel it is important to add more detail in both methodology and in experimental results especially have more ablations and understand what is truly happening to the model.", "role": "assistant" } ]
[ "Authors propose RPR - a way to randomly partition and quantize weights and train the remaining parameters followed by relaxation in alternate cycles to train quantized models.", "The authors show the results on latest network architectures on ImageNet dataset - ResNet18, ResNet-50 and GoogLeNet and the results either compare to other quantization methodologies and in the case of GoogLeNet they surpass the state of the art ADMM method by 1.7% top-1 accuracy The current experimental section does not have any runtime or model size numbers, it would be great if authors could talk about that and maybe add that information to get a more apples to apples comparison on not just accuracy but other factors.", "Also, while Figure1 is a useful visualization of how weights change in the RPR method what is missing in the paper is a more technical/mathematical explanation of the procedure as well as detailed discussion on the hyper-parameters and how things change if we choose a different ratio for the partitioning.", "As it is, the paper currently is easy to read, seems like a good idea and has good results but I feel it is important to add more detail in both methodology and in experimental results especially have more ablations and understand what is truly happening to the model." ]
{ "criticism": 0, "example": 1, "importance_and_relevance": 1, "materials_and_methods": 4, "praise": 2, "presentation_and_reporting": 3, "results_and_discussion": 3, "suggestion_and_solution": 3, "total": 4 }
0
0.25
0.25
1
0.5
0.75
0.75
0.75
4
0
1
1
1
1
1
1
1
4.25
2.971623
1.278377
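Illustrative aside on the Random Partition Relaxation record above: the abstract describes quantizing all weights to binary/ternary values and then relaxing a random partition of them back to continuous values for retraining before re-quantizing. The NumPy sketch below is a hedged reconstruction of one such quantize-then-relax step, not the authors' code; the ternary threshold, the relaxation fraction, and the helper names are assumptions.

```python
import numpy as np

def quantize_ternary(w, thresh=0.05):
    """Map weights to {-1, 0, +1}: zero out small weights, keep the sign of the rest (assumed rule)."""
    q = np.sign(w)
    q[np.abs(w) < thresh] = 0.0
    return q

def rpr_step(w_cont, relax_frac=0.1, rng=None):
    """One illustrative Random Partition Relaxation step.

    A random fraction of the weights is 'relaxed' (kept at its continuous value and
    would be trained by SGD), while the remaining partition stays quantized.
    """
    rng = np.random.default_rng() if rng is None else rng
    relaxed = rng.random(w_cont.shape) < relax_frac   # boolean partition mask
    w_mixed = np.where(relaxed, w_cont, quantize_ternary(w_cont))
    return w_mixed, relaxed

# Toy usage: pretend these are the continuous (shadow) weights of one layer.
w = np.random.default_rng(0).normal(scale=0.1, size=(4, 4))
w_mixed, relaxed_mask = rpr_step(w, relax_frac=0.25)
print(np.round(w_mixed, 2))
```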
Reinforcement Learning Based Graph-to-Sequence Model for Natural Question Generation
Natural question generation (QG) aims to generate questions from a passage and an answer. Previous works on QG either (i) ignore the rich structure information hidden in text, (ii) solely rely on cross-entropy loss that leads to issues like exposure bias and inconsistency between train/test measurement, or (iii) fail to fully exploit the answer information. To address these limitations, in this paper, we propose a reinforcement learning (RL) based graph-to sequence (Graph2Seq) model for QG. Our model consists of a Graph2Seq generator with a novel Bidirectional Gated Graph Neural Network based encoder to embed the passage, and a hybrid evaluator with a mixed objective function that combines both the cross-entropy and RL loss to ensure the generation of syntactically and semantically valid text. We also introduce an effective Deep Alignment Network for incorporating the answer information into the passage at both the word and contextual level. Our model is end-to-end trainable and achieves new state-of-the-art scores, outperforming existing methods by a significant margin on the standard SQuAD benchmark for QG.
This paper focuses on improving the performance on the task of natural language generation. To this end, they propose a graph-to-sequence (Graph2Seq) model for the task of question generation which exploits the rich structure information in the text as well as uses a reinforcement learning based policy gradient approach to address the exposure bias and inconsistency between test/train distributions in the cross-entropy optimization setup. The Graph2Seq model has a bidirectional gated graph neural network on the encoder side, which is an extension of the traditional gated graph neural network. To exploit the rich hidden structure information in the input text, they explore two different methods: (1) syntax-based static graph; (2) semantics-aware dynamic graph. The proposed model achieves state-of-the-art results on question generation, which are further validated with human evaluations. Overall, the paper should be rejected because it offers only minor extensions to each of its modules and lacks any major contribution. Some major concerns: 1) The bidirectional gated GNN doesn’t seem novel enough in comparison to previous work. 2) I believe adding RL to Graph2Seq is a minor extension from Seq2Seq, since RL mostly deals with the decoder part, which is common across both Graph2Seq and Seq2Seq. Arguments: 1) Adding the structure information to the encoder via the GNNs is an interesting angle for question generation. Compared to previous work, this paper proposes an additional deep alignment network on the encoder side to align paragraph and answer. However, the importance of this module is not well studied in the experiments section. I see that there is an ablation with/without this module, but it is not fairly compared with other alignment or simpler techniques like in Zhao et al. (2018). 2) The addition of the RL component to Graph2Seq is a minor extension from the Seq2Seq model, because both of these models have a similar decoder and RL mainly deals with it. Also, the importance of each reward component and the effect of each phrase-matching automatic metric are missing. 3) One part I am unclear about is the dataset: which dataset version did you use, sentence-level or paragraph-level? I see that the baselines correspond to sentence-level, but the Figure-1 alignment module takes a paragraph as input. Also, I couldn’t find the SeqCopyNet (Zhou et al., 2018) split-2 BLEU4 score=13.02 in the original paper! 4) Some of the latest papers which use BERT-based models and achieve state-of-the-art results are not discussed in the paper: “Addressing Semantic Drift in Question Generation for Semi-Supervised Question Answering”. 5) For the Table-2 results, are the differences in the scores for the two models statistically significant? 6) Table-3: First of all, evaluating only one metric is not sufficient. Please see the latest papers that have also introduced new metrics that are good for QG evaluation, e.g., Q-BLEU. The gap between G2Ssta+BERT vs. G2Ssta+BERT+RL seems negligible, and statistical significance is missing. 7) Minor comments: BLUE -> BLEU; please cite KNN-style graph sparsification; the color choices in Figure-3 are creating confusion in understanding the model.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nReinforcement Learning Based Graph-to-Sequence Model for Natural Question Generation\nNatural question generation (QG) aims to generate questions from a passage and an answer. Previous works on QG either (i) ignore the rich structure information hidden in text, (ii) solely rely on cross-entropy loss that leads to issues like exposure bias and inconsistency between train/test measurement, or (iii) fail to fully exploit the answer information. To address these limitations, in this paper, we propose a reinforcement learning (RL) based graph-to sequence (Graph2Seq) model for QG. Our model consists of a Graph2Seq generator with a novel Bidirectional Gated Graph Neural Network based encoder to embed the passage, and a hybrid evaluator with a mixed objective function that combines both the cross-entropy and RL loss to ensure the generation of syntactically and semantically valid text. We also introduce an effective Deep Alignment Network for incorporating the answer information into the passage at both the word and contextual level. Our model is end-to-end trainable and achieves new state-of-the-art scores, outperforming existing methods by a significant margin on the standard SQuAD benchmark for QG.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper focuses on improving the performance on the task of natural language generation. To this end, they propose a graph-to-sequence (Graph2Seq) model for the task of question generation which exploits the rich structure information in the text as well as use reinforcement learning based policy gradient approach to address the exposure bias and inconsistency between test/train distributions in cross-entropy optimization setup. The Graph2Seq model has a bidirectional gated graph neural network on the encoder side, which is an extension of traditional gated graph neural network. To exploit the rich hidden structure information in the input text, they explore two different methods: (1) syntax-based static graph; (2) semantics-aware dynamics graph. 
The proposed model achieves state-of-the-art results on question generation, which are further validated with human evaluations. Overall, The paper should be rejected because the paper have minor extensions to each of their modules but lacking any major important contribution. Some major concerns: 1) The bidirectional gated GNN doesn’t seem novel enough in comparison to previous work 2) I believe RL to Graph2Seq is a minor extension from Seq2Seq, since RL mostly deals with the decoder part which is common in across both Graph2Seq and Seq2Seq Arguments: 1) Adding the structure information to the encoder via the GNNs is an interesting angle for question generation. Compared to previous work, this paper proposes an additional deep alignment network on the encoder side to align paragraph and answer. However, the importance of this module is not well studied in the experiments section. I see that there is an ablation with/without this module but its not fairly compared with other aligning or simple techniques like in Zhao et al. (2018). 2) The addition of RL component to Graph2Seq is a minor extension from the Seq2Seq model, because both of these models have similar decoder and RL mainly deals with it. Also the importance of each reward component or the effect of each phrase-matching automatic metrics is missing. 3) Open part I am unclear about the dataset is which dataset version did you use sentence-level or paragraph level? I see that the baselines correspond to sentence-level, but the Figure-1 alignment module has input paragraph. Also I couldn’t find the SeqCopyNet (Zhou et al., 2018) split-2 BLEU4 score=13.02 in the original paper! 4) Some of the latest papers which use BERT based models are not discussed in the paper which achieve state-of-the-art-results: “Addressing Semantic Drift in Question Generation for Semi-Supervised Question Answering” 5) For Table-2 results are the differences in the scores for the two models statistically significant? 6) Table-3: First of all, evaluating only one metric is no sufficient. Please see latest papers that have also introduced new metrics that are good for QG evaluation, e.g., Q-BLEU. The gap between G2Ssta+BERT vs. G2Ssta+BERT+RL seems negligible, and missing statistical significance. 7) Minor comments: BLUE -> BLEU; please cite KNN-style graph sparsification; the color choices in Figure-3 are creating confusion in understanding the model.", "role": "assistant" } ]
[ "This paper focuses on improving the performance on the task of natural language generation.", "To this end, they propose a graph-to-sequence (Graph2Seq) model for the task of question generation which exploits the rich structure information in the text as well as use reinforcement learning based policy gradient approach to address the exposure bias and inconsistency between test/train distributions in cross-entropy optimization setup.", "The Graph2Seq model has a bidirectional gated graph neural network on the encoder side, which is an extension of traditional gated graph neural network.", "To exploit the rich hidden structure information in the input text, they explore two different methods: (1) syntax-based static graph; (2) semantics-aware dynamics graph.", "The proposed model achieves state-of-the-art results on question generation, which are further validated with human evaluations.", "Overall, The paper should be rejected because the paper have minor extensions to each of their modules but lacking any major important contribution.", "Some major concerns: 1) The bidirectional gated GNN doesn’t seem novel enough in comparison to previous work 2) I believe RL to Graph2Seq is a minor extension from Seq2Seq, since RL mostly deals with the decoder part which is common in across both Graph2Seq and Seq2Seq Arguments: 1) Adding the structure information to the encoder via the GNNs is an interesting angle for question generation.", "Compared to previous work, this paper proposes an additional deep alignment network on the encoder side to align paragraph and answer.", "However, the importance of this module is not well studied in the experiments section.", "I see that there is an ablation with/without this module but its not fairly compared with other aligning or simple techniques like in Zhao et al.", "(2018).", "2) The addition of RL component to Graph2Seq is a minor extension from the Seq2Seq model, because both of these models have similar decoder and RL mainly deals with it.", "Also the importance of each reward component or the effect of each phrase-matching automatic metrics is missing.", "3) Open part I am unclear about the dataset is which dataset version did you use sentence-level or paragraph level?", "I see that the baselines correspond to sentence-level, but the Figure-1 alignment module has input paragraph.", "Also I couldn’t find the SeqCopyNet (Zhou et al., 2018) split-2 BLEU4 score=13.02 in the original paper!", "4) Some of the latest papers which use BERT based models are not discussed in the paper which achieve state-of-the-art-results: “Addressing Semantic Drift in Question Generation for Semi-Supervised Question Answering” 5) For Table-2 results are the differences in the scores for the two models statistically significant?", "6) Table-3: First of all, evaluating only one metric is no sufficient.", "Please see latest papers that have also introduced new metrics that are good for QG evaluation, e.g., Q-BLEU.", "The gap between G2Ssta+BERT vs. G2Ssta+BERT+RL seems negligible, and missing statistical significance.", "7) Minor comments: BLUE -> BLEU; please cite KNN-style graph sparsification; the color choices in Figure-3 are creating confusion in understanding the model." ]
{ "criticism": 9, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 16, "praise": 3, "presentation_and_reporting": 6, "results_and_discussion": 4, "suggestion_and_solution": 5, "total": 21 }
0.428571
0
0.095238
0.761905
0.142857
0.285714
0.190476
0.238095
21
1
0
1
1
1
1
1
1
2.142857
1.132782
1.010076
Sharing Knowledge in Multi-Task Deep Reinforcement Learning
We study the benefit of sharing representations among tasks to enable the effective use of deep neural networks in Multi-Task Reinforcement Learning. We leverage the assumption that learning from different tasks, sharing common properties, is helpful to generalize the knowledge across them, resulting in more effective feature extraction compared to learning a single task. Intuitively, the resulting set of features offers performance benefits when used by Reinforcement Learning algorithms. We prove this by providing theoretical guarantees that highlight the conditions for which it is convenient to share representations among tasks, extending the well-known finite-time bounds of Approximate Value-Iteration to the multi-task setting. In addition, we complement our analysis by proposing multi-task extensions of three Reinforcement Learning algorithms that we empirically evaluate on widely used Reinforcement Learning benchmarks, showing significant improvements over the single-task counterparts in terms of sample efficiency and performance.
The submission derives bounds for approximate value and policy iteration for the multitask case in reinforcement learning. In addition, two common RL algorithms are adapted to demonstrate benefits of multitask RL given related tasks. The paper is mostly well written but sometimes introduces potentially unnecessarily complex mathematical notation, including missing and dual definitions, which slows down the reader. Examples of missing definitions are given by K and n at the top of page 3. Generally, the paper is quite self-consistent thanks to minor changes in existing algorithms to show experimental results for the theoretical insights, but instead of requiring the reader to find other papers it would be better to define all terminology in the appendix (see equation 6). The main contribution of the submission is the extension of an existing bound to the multitask case, as the architectures are common across existing work. The extension is relevant given growing interest in meta and multitask RL, but the changes in comparison to the single-task case are minor and the experiment’s value purely lies in supporting this extension.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSharing Knowledge in Multi-Task Deep Reinforcement Learning\nWe study the benefit of sharing representations among tasks to enable the effective use of deep neural networks in Multi-Task Reinforcement Learning. We leverage the assumption that learning from different tasks, sharing common properties, is helpful to generalize the knowledge of them resulting in a more effective feature extraction compared to learning a single task. Intuitively, the resulting set of features offers performance benefits when used by Reinforcement Learning algorithms. We prove this by providing theoretical guarantees that highlight the conditions for which is convenient to share representations among tasks, extending the well-known finite-time bounds of Approximate Value-Iteration to the multi-task setting. In addition, we complement our analysis by proposing multi-task extensions of three Reinforcement Learning algorithms that we empirically evaluate on widely used Reinforcement Learning benchmarks showing significant improvements over the single-task counterparts in terms of sample efficiency and performance.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe submission derives bounds for approximate value and policy iteration for the multitask case in reinforcement learning. In addition, two common RL algorithms are adapted to demonstrate benefits of multitask RL given related tasks. The paper is mostly well written but sometimes introduces potentially unnecessarily complex mathematical notation including missing and dual definitions which slow down the reader. Examples of missing definitions are given by K an n on top of page 3. Generelly, the paper is quite self consist thanks to minor changes in existing algorithms to show experimental results for the theoretical insights, but instead of requiring the reader to find other papers it would be better to define all terminology in the appendix (see equation 6). 
The main contribution of the submission is the extension of an existing bound to the multitask case as the architectures are common across existing work. The extension is relevant given growing interest in meta and multitask RL but the changes in comparison to the single task case are minor and the experiment’s value purely lies in supporting this extension.", "role": "assistant" } ]
[ "The submission derives bounds for approximate value and policy iteration for the multitask case in reinforcement learning.", "In addition, two common RL algorithms are adapted to demonstrate benefits of multitask RL given related tasks.", "The paper is mostly well written but sometimes introduces potentially unnecessarily complex mathematical notation including missing and dual definitions which slow down the reader.", "Examples of missing definitions are given by K an n on top of page 3.", "Generelly, the paper is quite self consist thanks to minor changes in existing algorithms to show experimental results for the theoretical insights, but instead of requiring the reader to find other papers it would be better to define all terminology in the appendix (see equation 6).", "The main contribution of the submission is the extension of an existing bound to the multitask case as the architectures are common across existing work.", "The extension is relevant given growing interest in meta and multitask RL but the changes in comparison to the single task case are minor and the experiment’s value purely lies in supporting this extension." ]
{ "criticism": 3, "example": 1, "importance_and_relevance": 2, "materials_and_methods": 5, "praise": 2, "presentation_and_reporting": 3, "results_and_discussion": 3, "suggestion_and_solution": 1, "total": 7 }
0.428571
0.142857
0.285714
0.714286
0.285714
0.428571
0.428571
0.142857
7
1
1
1
1
1
1
1
1
2.857143
2.288975
0.568167
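As a purely illustrative companion to the multi-task record above (which shares a representation across tasks while keeping task-specific outputs), a minimal PyTorch sketch of a shared torso with per-task heads might look as follows; the layer sizes, the naming, and the idea of indexing heads by task id are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SharedMultiTaskQNet(nn.Module):
    """Shared feature extractor with one Q-value head per task (illustrative only)."""
    def __init__(self, obs_dim, n_actions, n_tasks, hidden=64):
        super().__init__()
        # Shared torso: the representation the tasks have in common.
        self.shared = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        # One small head per task on top of the shared features.
        self.heads = nn.ModuleList([nn.Linear(hidden, n_actions) for _ in range(n_tasks)])

    def forward(self, obs, task_id):
        return self.heads[task_id](self.shared(obs))

net = SharedMultiTaskQNet(obs_dim=8, n_actions=4, n_tasks=3)
q_values = net(torch.randn(2, 8), task_id=1)   # batch of 2 observations for task 1
print(q_values.shape)                          # torch.Size([2, 4])
```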
Maxmin Q-learning: Controlling the Estimation Bias of Q-learning
Q-learning suffers from overestimation bias, because it approximates the maximum action value using the maximum estimated action value. Algorithms have been proposed to reduce overestimation bias, but we lack an understanding of how bias interacts with performance, and the extent to which existing algorithms mitigate bias. In this paper, we 1) highlight that the effect of overestimation bias on learning efficiency is environment-dependent; 2) propose a generalization of Q-learning, called Maxmin Q-learning, which provides a parameter to flexibly control bias; 3) show theoretically that there exists a parameter choice for Maxmin Q-learning that leads to unbiased estimation with a lower approximation variance than Q-learning; and 4) prove the convergence of our algorithm in the tabular case, as well as convergence of several previous Q-learning variants, using a novel Generalized Q-learning framework. We empirically verify that our algorithm better controls estimation bias in toy environments, and that it achieves superior performance on several benchmark problems.
This paper proposes a novel variant of Q-learning, called Maxmin Q-learning, to address the issue of overestimation bias Q-learning suffers from (variance of the reward of best action leading to overestimated reward). The idea is to keep a number of estimators each estimated using a different sub-sample, and taking the minimum of the (maximum) reward value of each. The paper gives theoretical analyses, in terms of the reduction in the overestimation bias, as well as the convergence of a class of generalized Q-learning methods including Maxmin Q-learning. The experiment section presents a thorough evaluation of the proposed method, including how the obtained rewards vary as a function of the variance of the reward function and as a function of learning steps, as compared to a number of existing methods such as the Double Q-learning method and its variants. The experimental results are quite convincing, and the theoretical analyses seem solid. Overall this is a well balanced paper which proposes a reasonable new idea, simple but effective, backed by sound theoretical analysis and well executed experimental evaluation.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMaxmin Q-learning: Controlling the Estimation Bias of Q-learning\nQ-learning suffers from overestimation bias, because it approximates the maximum action value using the maximum estimated action value. Algorithms have been proposed to reduce overestimation bias, but we lack an understanding of how bias interacts with performance, and the extent to which existing algorithms mitigate bias. In this paper, we 1) highlight that the effect of overestimation bias on learning efficiency is environment-dependent; 2) propose a generalization of Q-learning, called emph{Maxmin Q-learning}, which provides a parameter to flexibly control bias; 3) show theoretically that there exists a parameter choice for Maxmin Q-learning that leads to unbiased estimation with a lower approximation variance than Q-learning; and 4) prove the convergence of our algorithm in the tabular case, as well as convergence of several previous Q-learning variants, using a novel Generalized Q-learning framework. We empirically verify that our algorithm better controls estimation bias in toy environments, and that it achieves superior performance on several benchmark problems.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a novel variant of Q-learning, called Maxmin Q-learning, to address the issue of overestimation bias Q-learning suffers from (variance of the reward of best action leading to overestimated reward). The idea is to keep a number of estimators each estimated using a different sub-sample, and taking the minimum of the (maximum) reward value of each. The paper gives theoretical analyses, in terms of the reduction in the overestimation bias, as well as the convergence of a class of generalized Q-learning methods including Maxmin Q-learning. 
The experiment section presents a thorough evaluation of the proposed method, including how the obtained rewards vary as a function of the variance of the reward function and as a function of learning steps, as compared to a number of existing methods such as the Double Q-learning method and its variants. The experimental results are quite convincing, and the theoretical analyses seem solid. Overall this is a well balanced paper which proposes a reasonable new idea, simple but effective, backed by sound theoretical analysis and well executed experimental evaluation.", "role": "assistant" } ]
[ "This paper proposes a novel variant of Q-learning, called Maxmin Q-learning, to address the issue of overestimation bias Q-learning suffers from (variance of the reward of best action leading to overestimated reward).", "The idea is to keep a number of estimators each estimated using a different sub-sample, and taking the minimum of the (maximum) reward value of each.", "The paper gives theoretical analyses, in terms of the reduction in the overestimation bias, as well as the convergence of a class of generalized Q-learning methods including Maxmin Q-learning.", "The experiment section presents a thorough evaluation of the proposed method, including how the obtained rewards vary as a function of the variance of the reward function and as a function of learning steps, as compared to a number of existing methods such as the Double Q-learning method and its variants.", "The experimental results are quite convincing, and the theoretical analyses seem solid.", "Overall this is a well balanced paper which proposes a reasonable new idea, simple but effective, backed by sound theoretical analysis and well executed experimental evaluation." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 5, "praise": 3, "presentation_and_reporting": 0, "results_and_discussion": 4, "suggestion_and_solution": 0, "total": 6 }
0
0
0.166667
0.833333
0.5
0
0.666667
0
6
0
0
1
1
1
0
1
0
2.166667
1.393328
0.773339
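To make the Maxmin Q-learning record above concrete, here is a small tabular sketch of the update it describes: keep N action-value estimates, act and bootstrap from their element-wise minimum, and update one estimate per step. This is a reconstruction from the abstract and review, not the authors' code; the choice of updating a single randomly chosen estimator and the hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, N = 5, 3, 4          # N = number of Q estimators (the bias-control knob)
Q = np.zeros((N, n_states, n_actions))    # one table per estimator
alpha, gamma = 0.1, 0.99

def maxmin_update(s, a, r, s_next, done):
    """One Maxmin Q-learning step (tabular, illustrative)."""
    q_min = Q.min(axis=0)                             # element-wise min over estimators
    target = r + (0.0 if done else gamma * q_min[s_next].max())
    i = rng.integers(N)                               # update one randomly chosen estimator
    Q[i, s, a] += alpha * (target - Q[i, s, a])

def act(s, eps=0.1):
    """Epsilon-greedy with respect to the min-of-estimators values."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(Q.min(axis=0)[s].argmax())

# Toy transition just to exercise the update:
maxmin_update(s=0, a=act(0), r=1.0, s_next=1, done=False)
print(Q[:, 0, :])
```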
Depth-Adaptive Transformer
State of the art sequence-to-sequence models for large scale tasks perform a fixed number of computations for each input sequence regardless of whether it is easy or hard to process. In this paper, we train Transformer models which can make output predictions at different stages of the network and we investigate different ways to predict how much computation is required for a particular sequence. Unlike dynamic computation in Universal Transformers, which applies the same set of layers iteratively, we apply different layers at every step to adjust both the amount of computation as well as the model capacity. On IWSLT German-English translation our approach matches the accuracy of a well tuned baseline Transformer while using less than a quarter of the decoder layers.
The authors study depth-adaptive Transformer models and show that they can perform well even under fairly basic strategies to stop the Transformer early. The paper is well written and the results convincing. There is not a lot of novelty but the results hold well, so it is a clear contribution. One main issue that prevents this reviewer from increasing the rating is the measure of speed the authors use: counting exit at every token. This is a fine measure for inference time, but during training the model proceeds on the whole sequence, right? Could you provide speed numbers for the training step? Is there any improvement over a baseline Transformer or is this technique solely for inference? (Which is still a contribution, but it should be made clear in the paper.) I thank the authors for the response and clarification. I stand by my score in the light of it.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDepth-Adaptive Transformer\nState of the art sequence-to-sequence models for large scale tasks perform a fixed number of computations for each input sequence regardless of whether it is easy or hard to process. In this paper, we train Transformer models which can make output predictions at different stages of the network and we investigate different ways to predict how much computation is required for a particular sequence. Unlike dynamic computation in Universal Transformers, which applies the same set of layers iteratively, we apply different layers at every step to adjust both the amount of computation as well as the model capacity. On IWSLT German-English translation our approach matches the accuracy of a well tuned baseline Transformer while using less than a quarter of the decoder layers.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors study depth-adaptive Transformer models and show that they can perform well even under fairly basic strategies to stop the Transformer early. The paper is well written and the results convincing. There is not a lot of novelty but the results hold well, so it is a clear contribution. One main issue that prevents this reviewer from increasing the rating is the measure of speed the authors use: counting exit at every token. This is a fine measure for inference time, but during training the model proceeds on the whole sequence, right? Could you provide speed numbers for the training step? Is there any improvement over a baseline Transformer or is this technique solely for inference? (Which is still a contribution, but it should be made clear in the paper.) I thank the authors for the response and clarification. I stand by my score in the light of it.", "role": "assistant" } ]
[ "The authors study depth-adaptive Transformer models and show that they can perform well even under fairly basic strategies to stop the Transformer early.", "The paper is well written and the results convincing.", "There is not a lot of novelty but the results hold well, so it is a clear contribution.", "One main issue that prevents this reviewer from increasing the rating is the measure of speed the authors use: counting exit at every token.", "This is a fine measure for inference time, but during training the model proceeds on the whole sequence, right?", "Could you provide speed numbers for the training step?", "Is there any improvement over a baseline Transformer or is this technique solely for inference?", "(Which is still a contribution, but it should be made clear in the paper.)", "I thank the authors for the response and clarification.", "I stand by my score in the light of it." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 5, "praise": 3, "presentation_and_reporting": 4, "results_and_discussion": 5, "suggestion_and_solution": 2, "total": 10 }
0.1
0
0.2
0.5
0.3
0.4
0.5
0.2
10
1
0
1
1
1
1
1
1
2.2
2.057958
0.142042
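A rough sketch of the early-exit idea behind the Depth-Adaptive Transformer record above: run the stack layer by layer and stop once an attached output classifier is confident enough. The paper investigates several halting strategies; the confidence threshold, the use of generic encoder layers, and the per-depth heads below are placeholder assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

d_model, vocab, n_layers = 32, 100, 6
layers = nn.ModuleList([nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
                        for _ in range(n_layers)])
exit_heads = nn.ModuleList([nn.Linear(d_model, vocab) for _ in range(n_layers)])

def forward_with_early_exit(x, threshold=0.9):
    """Run layer by layer; stop as soon as the current head is confident enough."""
    for depth, (layer, head) in enumerate(zip(layers, exit_heads), start=1):
        x = layer(x)
        probs = head(x[:, -1]).softmax(-1)       # predict next token from the last position
        if probs.max().item() >= threshold:      # simple confidence-threshold halting
            return probs.argmax(-1), depth
    return probs.argmax(-1), n_layers            # fall through: use the final layer

tokens, depth_used = forward_with_early_exit(torch.randn(1, 5, d_model))
print(depth_used)
```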
Diverse Trajectory Forecasting with Determinantal Point Processes
The ability to forecast a set of likely yet diverse possible future behaviors of an agent (e.g., future trajectories of a pedestrian) is essential for safety-critical perception systems (e.g., autonomous vehicles). In particular, a set of possible future behaviors generated by the system must be diverse to account for all possible outcomes in order to take necessary safety precautions. It is not sufficient to maintain a set of the most likely future outcomes because the set may only contain perturbations of a dominating single outcome (major mode). While generative models such as variational autoencoders (VAEs) have been shown to be a powerful tool for learning a distribution over future trajectories, randomly drawn samples from the learned implicit likelihood model may not be diverse -- the likelihood model is derived from the training data distribution and the samples will concentrate around the major mode of the data. In this work, we propose to learn a diversity sampling function (DSF) that generates a diverse yet likely set of future trajectories. The DSF maps forecasting context features to a set of latent codes which can be decoded by a generative model (e.g., VAE) into a set of diverse trajectory samples. Concretely, the process of identifying the diverse set of samples is posed as DSF parameter estimation. To learn the parameters of the DSF, the diversity of the trajectory samples is evaluated by a diversity loss based on a determinantal point process (DPP). Gradient descent is performed over the DSF parameters, which in turn moves the latent codes of the sample set to find an optimal set of diverse yet likely trajectories. Our method is a novel application of DPPs to optimize a set of items (forecasted trajectories) in continuous space. We demonstrate the diversity of the trajectories produced by our approach on both low-dimensional 2D trajectory data and high-dimensional human motion data.
The authors propose a method to diversify samples generated from a VAE. The method is based on determinantal point processes in the latent space and relies on specifying a kernel in the sample space and a quality metric in the latent space. Increasing diversity of samples from VAE-like models is an important and common problem and the authors present a reasonably generic method for solving it. The experiments are sensible and show clear improvement over a reasonable selection of baselines. One improvement I would suggest is adding a more explicit discussion of how the proposed method avoids generating overly diverse, very improbable trajectories. It is also somewhat suspicious that the trajectories shown in Figure 3 are terminated so early.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDiverse Trajectory Forecasting with Determinantal Point Processes\nThe ability to forecast a set of likely yet diverse possible future behaviors of an agent (e.g., future trajectories of a pedestrian) is essential for safety-critical perception systems (e.g., autonomous vehicles). In particular, a set of possible future behaviors generated by the system must be diverse to account for all possible outcomes in order to take necessary safety precautions. It is not sufficient to maintain a set of the most likely future outcomes because the set may only contain perturbations of a dominating single outcome (major mode). While generative models such as variational autoencoders (VAEs) have been shown to be a powerful tool for learning a distribution over future trajectories, randomly drawn samples from the learned implicit likelihood model may not be diverse -- the likelihood model is derived from the training data distribution and the samples will concentrate around the major mode of the data. In this work, we propose to learn a diversity sampling function (DSF) that generates a diverse yet likely set of future trajectories. The DSF maps forecasting context features to a set of latent codes which can be decoded by a generative model (e.g., VAE) into a set of diverse trajectory samples. Concretely, the process of identifying the diverse set of samples is posed as DSF parameter estimation. To learn the parameters of the DSF, the diversity of the trajectory samples is evaluated by a diversity loss based on a determinantal point process (DPP). Gradient descent is performed over the DSF parameters, which in turn moves the latent codes of the sample set to find an optimal set of diverse yet likely trajectories. Our method is a novel application of DPPs to optimize a set of items (forecasted trajectories) in continuous space. We demonstrate the diversity of the trajectories produced by our approach on both low-dimensional 2D trajectory data and high-dimensional human motion data.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors propose a method to diversify samples generated from a VAE. The method is based on determinantal point processes in the latent space and relies on specifying a kernel in the sample space and a quality metric in the latent space. Increasing diversity of samples from VAE-like models is an important and common problem and the authors present a reasonably generic method for solving it. The experiments are sensible and show clear improvement over a reasonable selection of baselines. One improvement I would suggest is adding a more explicit discussion of how the proposed method avoids generating overly diverse, very improbable trajectories. It is also somewhat suspicious that the trajectories shown in Figure 3 are terminated so early.", "role": "assistant" } ]
[ "The authors propose a method to diversify samples generated from a VAE.", "The method is based on determinantal point processes in the latent space and relies on specifying a kernel in the sample space and a quality metric in the latent space.", "Increasing diversity of samples from VAE-like models is an important and common problem and the authors present a reasonably generic method for solving it.", "The experiments are sensible and show clear improvement over a reasonable selection of baselines.", "One improvement I would suggest is adding a more explicit discussion of how the proposed method avoids generating overly diverse, very improbable trajectories.", "It is also somewhat suspicious that the trajectories shown in Figure 3 are terminated so early." ]
{ "criticism": 1, "example": 1, "importance_and_relevance": 1, "materials_and_methods": 6, "praise": 2, "presentation_and_reporting": 2, "results_and_discussion": 1, "suggestion_and_solution": 1, "total": 6 }
0.166667
0.166667
0.166667
1
0.333333
0.333333
0.166667
0.166667
6
1
1
1
1
1
1
1
1
2.5
1.726661
0.773339
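To illustrate the determinantal-point-process idea in the record above, the toy function below scores a set of trajectory samples by the log-determinant of a quality/similarity L-ensemble kernel, which is larger for diverse sets and collapses for near-duplicates; the RBF similarity, the uniform quality terms, and the jitter constant are assumptions rather than the paper's exact diversity loss.

```python
import numpy as np

def dpp_diversity_score(trajs, quality=None, bandwidth=1.0, jitter=1e-6):
    """log det of an L-ensemble kernel over a set of trajectories (illustrative).

    trajs: array of shape (n_samples, horizon, 2); higher score = more diverse set.
    """
    n = trajs.shape[0]
    flat = trajs.reshape(n, -1)
    sq_dists = ((flat[:, None, :] - flat[None, :, :]) ** 2).sum(-1)
    S = np.exp(-sq_dists / (2.0 * bandwidth ** 2))        # RBF similarity between samples
    q = np.ones(n) if quality is None else quality        # per-sample quality terms
    L = q[:, None] * S * q[None, :] + jitter * np.eye(n)  # L-ensemble kernel
    return np.linalg.slogdet(L)[1]                        # training would maximize this

rng = np.random.default_rng(0)
spread_out = rng.normal(size=(5, 12, 2))
clumped = np.repeat(rng.normal(size=(1, 12, 2)), 5, axis=0) + 1e-3 * rng.normal(size=(5, 12, 2))
print(dpp_diversity_score(spread_out) > dpp_diversity_score(clumped))  # True
```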
Distance-Based Learning from Errors for Confidence Calibration
Deep neural networks (DNNs) are poorly-calibrated when trained in conventional ways. To improve confidence calibration of DNNs, we propose a novel training method, distance-based learning from errors (DBLE). DBLE bases its confidence estimation on distances in the representation space. We first adapt prototypical learning for training of a classification model for DBLE. It yields a representation space where a test sample’s distance to its ground-truth class center can calibrate the model’s performance. At inference, however, these distances are not available due to the lack of ground-truth label. To circumvent this by approximately inferring the distance for every test sample, we propose to train a confidence model jointly with the classification model, by merely learning from mis-classified training samples, which we show to be highly-beneficial for effective learning. On multiple data sets and DNN architectures, we demonstrate that DBLE outperforms alternative single-modal confidence calibration approaches. DBLE also achieves comparable performance with computationally-expensive ensemble approaches with lower computational cost and lower number of parameters.
The paper describes an approach to improve the confidence calibration of deep neural networks (DNNs). The proposed approach uses a distance-based approach (distance to prototypes) and trains a confidence model jointly with the classification model using mis-classified examples. It first learns using a loss function based on pre-computed centroids of each class evaluated from the training examples. At inference, it assigns the class with the closest center. To estimate a confidence level, it learns a new model to estimate the distance using only the misclassified examples. The authors experimentally showed the benefits of the proposed approach over several methods. Finding prototypes and training with distances, on one hand, and evaluating confidence with a Gaussian, on the other, assumes that the classes are *well defined*. Although the authors show promising results, it is not clear how well the proposed method will behave in more challenging classification tasks where the classes are mixed. Although the authors show that estimating the confidence with misclassified examples works better, I would like to see a further analysis in the paper.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDistance-Based Learning from Errors for Confidence Calibration\nDeep neural networks (DNNs) are poorly-calibrated when trained in conventional ways. To improve confidence calibration of DNNs, we propose a novel training method, distance-based learning from errors (DBLE). DBLE bases its confidence estimation on distances in the representation space. We first adapt prototypical learning for training of a classification model for DBLE. It yields a representation space where a test sample*s distance to its ground-truth class center can calibrate the model*s performance. At inference, however, these distances are not available due to the lack of ground-truth label. To circumvent this by approximately inferring the distance for every test sample, we propose to train a confidence model jointly with the classification model, by merely learning from mis-classified training samples, which we show to be highly-beneficial for effective learning. On multiple data sets and DNN architectures, we demonstrate that DBLE outperforms alternative single-modal confidence calibration approaches. DBLE also achieves comparable performance with computationally-expensive ensemble approaches with lower computational cost and lower number of parameters.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper describes an approach to improve the confidence on deep neural networks (DNN). The proposed approach uses a distance-based approach (distance to prototypes) and train using a confidence model with the classification model using mis-classified examples. It first learns using a loss function based on pre-computed centroids of each class evaluated from the training examples. At inference it assigns the class with the closest center. To estimate a confidence level, it learns a new model to estimate the distance using only the misclassified examples. The authors experimentally showed the benefits of the proposed approach over several methods. 
Finding prototypes and training with distances, on one hand, and evaluating confidence with Gaussian, on the other, assumes that the classes are *well defined*. Although the authors show promising results, it is not clear how well the proposed method will behave in more challenging classification tasks where the classes are mixed. Although the authors show that estimating the confidence with misclassified examples works better I will like to see a further analysis in the paper.", "role": "assistant" } ]
[ "The paper describes an approach to improve the confidence on deep neural networks (DNN).", "The proposed approach uses a distance-based approach (distance to prototypes) and train using a confidence model with the classification model using mis-classified examples.", "It first learns using a loss function based on pre-computed centroids of each class evaluated from the training examples.", "At inference it assigns the class with the closest center.", "To estimate a confidence level, it learns a new model to estimate the distance using only the misclassified examples.", "The authors experimentally showed the benefits of the proposed approach over several methods.", "Finding prototypes and training with distances, on one hand, and evaluating confidence with Gaussian, on the other, assumes that the classes are *well defined*.", "Although the authors show promising results, it is not clear how well the proposed method will behave in more challenging classification tasks where the classes are mixed.", "Although the authors show that estimating the confidence with misclassified examples works better I will like to see a further analysis in the paper." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 9, "praise": 2, "presentation_and_reporting": 0, "results_and_discussion": 6, "suggestion_and_solution": 1, "total": 9 }
criticism: 0.111111, example: 0, importance_and_relevance: 0.111111, materials_and_methods: 1, praise: 0.222222, presentation_and_reporting: 0, results_and_discussion: 0.666667, suggestion_and_solution: 0.111111
total: 9
has_criticism: 1, has_example: 0, has_importance_and_relevance: 1, has_materials_and_methods: 1, has_praise: 1, has_presentation_and_reporting: 0, has_results_and_discussion: 1, has_suggestion_and_solution: 1
reward_value: 2.222222, reward_value_length_adjusted: 1.969703, length_penalty: 0.252519
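The numeric columns attached to each row appear to be derived from its criteria_count field. Across the rows in this section they are consistent with the derivation sketched below; the has_* and reward_* labels are descriptive names inferred from that pattern rather than documented column definitions, so treat this as a reading aid, not a specification.

```python
# Reconstructing the derived columns for the row above from its criteria_count field.
criteria_count = {"criticism": 1, "example": 0, "importance_and_relevance": 1,
                  "materials_and_methods": 9, "praise": 2, "presentation_and_reporting": 0,
                  "results_and_discussion": 6, "suggestion_and_solution": 1, "total": 9}

total = criteria_count["total"]
categories = [k for k in criteria_count if k != "total"]

fractions = {k: criteria_count[k] / total for k in categories}   # criticism -> 0.111111, ...
flags = {"has_" + k: int(criteria_count[k] > 0) for k in categories}
reward_value = sum(fractions.values())                           # 2.222222 for this row
length_penalty = 0.252519                                        # taken directly from the row
reward_value_length_adjusted = reward_value - length_penalty     # 1.969703

print(round(reward_value, 6), round(reward_value_length_adjusted, 6))
```

The same arithmetic reproduces the corresponding numbers on the other rows below.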
Distance-Based Learning from Errors for Confidence Calibration
Deep neural networks (DNNs) are poorly-calibrated when trained in conventional ways. To improve confidence calibration of DNNs, we propose a novel training method, distance-based learning from errors (DBLE). DBLE bases its confidence estimation on distances in the representation space. We first adapt prototypical learning for training of a classification model for DBLE. It yields a representation space where a test sample*s distance to its ground-truth class center can calibrate the model*s performance. At inference, however, these distances are not available due to the lack of ground-truth label. To circumvent this by approximately inferring the distance for every test sample, we propose to train a confidence model jointly with the classification model, by merely learning from mis-classified training samples, which we show to be highly-beneficial for effective learning. On multiple data sets and DNN architectures, we demonstrate that DBLE outperforms alternative single-modal confidence calibration approaches. DBLE also achieves comparable performance with computationally-expensive ensemble approaches with lower computational cost and lower number of parameters.
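The abstract describes the mechanism only in words. A minimal sketch of the prototype-distance idea is given below; it is not the paper's full DBLE pipeline (which also trains a separate confidence model on misclassified samples), and the embeddings, the exp(-distance) confidence mapping, and all numbers are illustrative assumptions.

```python
import numpy as np

def class_prototypes(feats, labels):
    """Mean embedding (prototype) per class, computed from training features."""
    classes = np.unique(labels)
    return classes, np.stack([feats[labels == c].mean(axis=0) for c in classes])

def predict_with_distance(x, classes, protos):
    """Nearest-prototype prediction plus the distance that drives the confidence estimate."""
    d = np.linalg.norm(protos - x, axis=1)
    i = int(d.argmin())
    return classes[i], float(d[i])

# Toy 2-D embeddings standing in for a learned representation space.
feats = np.array([[0.0, 0.1], [0.2, -0.1], [2.0, 2.1], [1.9, 2.0]])
labels = np.array([0, 0, 1, 1])
classes, protos = class_prototypes(feats, labels)
pred, dist = predict_with_distance(np.array([0.1, 0.0]), classes, protos)
confidence = np.exp(-dist)   # one simple monotone mapping from distance to confidence
print(pred, round(float(confidence), 3))
```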
The paper addresses the (long-standing) problem of classification systems being able to output reliable confidence estimates on its own output. The selected approach is to use the distance to (previously computed) class centers in multi-class classification to help compute the confidence interval. The method is shown to have comparable results on well known datasets but higher efficiency than 2 rival methods: ensemble ANNs and Bayesian neural networks. I would point out that such confidence intervals are all intuitive and have no statistical basis or other independent means of empirical validation. Experienced practitioners are aware of this, but I see that the paper steers wisely clear of overambitious claims. The general intuition of hybrid supervised and unsupervised learning for C.I. (or ellipse) estimation is not new, but an effective and compact representation of this intuition in a DNN context is a valuable contribution, well-placed in the literature - I recommend acceptance.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDistance-Based Learning from Errors for Confidence Calibration\nDeep neural networks (DNNs) are poorly-calibrated when trained in conventional ways. To improve confidence calibration of DNNs, we propose a novel training method, distance-based learning from errors (DBLE). DBLE bases its confidence estimation on distances in the representation space. We first adapt prototypical learning for training of a classification model for DBLE. It yields a representation space where a test sample*s distance to its ground-truth class center can calibrate the model*s performance. At inference, however, these distances are not available due to the lack of ground-truth label. To circumvent this by approximately inferring the distance for every test sample, we propose to train a confidence model jointly with the classification model, by merely learning from mis-classified training samples, which we show to be highly-beneficial for effective learning. On multiple data sets and DNN architectures, we demonstrate that DBLE outperforms alternative single-modal confidence calibration approaches. DBLE also achieves comparable performance with computationally-expensive ensemble approaches with lower computational cost and lower number of parameters.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper addresses the (long-standing) problem of classification systems being able to output reliable confidence estimates on its own output. The selected approach is to use the distance to (previously computed) class centers in multi-class classification to help compute the confidence interval. The method is shown to have comparable results on well known datasets but higher efficiency than 2 rival methods: ensemble ANNs and Bayesian neural networks. I would point out that such confidence intervals are all intuitive and have no statistical basis or other independent means of empirical validation. Experienced practitioners are aware of this, but I see that the paper steers wisely clear of overambitious claims. 
The general intuition of hybrid supervised and unsupervised learning for C.I. (or ellipse) estimation is not new, but an effective and compact representation of this intuition in a DNN context is a valuable contribution, well-placed in the literature - I recommend acceptance.", "role": "assistant" } ]
[ "The paper addresses the (long-standing) problem of classification systems being able to output reliable confidence estimates on its own output.", "The selected approach is to use the distance to (previously computed) class centers in multi-class classification to help compute the confidence interval.", "The method is shown to have comparable results on well known datasets but higher efficiency than 2 rival methods: ensemble ANNs and Bayesian neural networks.", "I would point out that such confidence intervals are all intuitive and have no statistical basis or other independent means of empirical validation.", "Experienced practitioners are aware of this, but I see that the paper steers wisely clear of overambitious claims.", "The general intuition of hybrid supervised and unsupervised learning for C.I.", "(or ellipse) estimation is not new, but an effective and compact representation of this intuition in a DNN context is a valuable contribution, well-placed in the literature - I recommend acceptance." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 6, "praise": 2, "presentation_and_reporting": 0, "results_and_discussion": 5, "suggestion_and_solution": 1, "total": 7 }
criticism: 0.142857, example: 0, importance_and_relevance: 0.142857, materials_and_methods: 0.857143, praise: 0.285714, presentation_and_reporting: 0, results_and_discussion: 0.714286, suggestion_and_solution: 0.142857
total: 7
has_criticism: 1, has_example: 0, has_importance_and_relevance: 1, has_materials_and_methods: 1, has_praise: 1, has_presentation_and_reporting: 0, has_results_and_discussion: 1, has_suggestion_and_solution: 1
reward_value: 2.285714, reward_value_length_adjusted: 1.717547, length_penalty: 0.568167
To Relieve Your Headache of Training an MRF, Take AdVIL
We propose a black-box algorithm called Adversarial Variational Inference and Learning (AdVIL) to perform inference and learning on a general Markov random field (MRF). AdVIL employs two variational distributions to approximately infer the latent variables and estimate the partition function of an MRF, respectively. The two variational distributions provide an estimate of the negative log-likelihood of the MRF as a minimax optimization problem, which is solved by stochastic gradient descent. AdVIL is proven convergent under certain conditions. On one hand, compared with contrastive divergence, AdVIL requires a minimal assumption about the model structure and can deal with a broader family of MRFs. On the other hand, compared with existing black-box methods, AdVIL provides a tighter estimate of the log partition function and achieves much better empirical results.
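For orientation, the quantity AdVIL has to estimate can be written in generic MRF notation; this is the standard definition with visible variables $v$, latent variables $h$, energy $E$ and partition function $Z$, not the paper's specific variational bound.

$$
\log p(v) \;=\; \log \sum_{h} \exp\bigl(-E(v,h)\bigr) \;-\; \log Z,
\qquad
Z \;=\; \sum_{v',\,h} \exp\bigl(-E(v',h)\bigr).
$$

The negative log-likelihood therefore couples an intractable sum over latent variables with the intractable log partition function, which is what the two variational distributions in the abstract are introduced to approximate.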
This manuscript proposes a new approach to fitting Markov Random Fields (MRFs). The general structure of the algorithm is amenable to many MRF structures and can be fairly straightforwardly applied to learning on a wide variety of problems. The theoretical analysis supports that the algorithm is reasonable. Experimental results show strong results on several different MRF models, albeit on relatively small problems. I am giving this manuscript a weak accept. The approach, to me, seems novel in fitting MRFs. However, the theoretical claims and their limitations need to be more realistically discussed, and the empirical results need to be shown on larger and more complex datasets. First, on the empirical results, there is a large literature on fitting RBM models, including many on scaling to much larger models. Given that AdVIL actually diverges on the logZ estimation as the number of iterations goes up makes me worry about the efficacy of this approach on larger RBMs (Figure 2 uses v=64 and h=15 while a common RBM on MNIST is v=784 and h=500, a huge difference in scale). As a large literature shows that estimating partition functions or normalizing constants gets much harder as the dimensionality goes up, I worry about this strategy and I think the manuscript would be greatly enhanced by looking at more common model sizes from the ML literature. Also, to me, the classic MCMC+SGD is as much a black box as the proposed technique. I realize that many of the top performing MCMC adapt specifically to the problem, but much of this is transferable between systems and can be put in simple sampling schemes. Table 1 should be updated to include these typical techniques, because it is not clear that the proposed system actually outperforms the typical PCD-1 scheme (especially given in Figure 3). Or succinctly, it should be made clear why I should use this over an MCMC approach. To address these concerns, I would like the authors to answer how their proposed algorithm works in larger MRFs and do a more complete analysis compared to more traditional strategies. Second, the theoretical claims are nice, but the manuscript should be revised to address the limitations of the theory. In particular, Lemma 1 is extremely strong, and I disagree with the assessment that it is *much weaker* than the typical nonparametric assumption. It seems that as the optimization gets close to the solution, this is essentially the exact same condition. The authors need to clarify how exactly this is different, and dive into the practical implementations. This also seems like it would get increasingly difficult as the number of hidden and visible units increases, so they should address how this Lemma holds as the theory scales.
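The review leans on the classic MCMC+SGD baseline (CD-1/PCD-1 on an RBM) without spelling it out. A minimal CD-1 sketch for a binary RBM is shown below; it is the textbook formulation, not the paper's method, and the sizes and learning rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_v, n_h, lr=0.01):
        self.W = 0.01 * rng.standard_normal((n_v, n_h))
        self.a = np.zeros(n_v)   # visible bias
        self.b = np.zeros(n_h)   # hidden bias
        self.lr = lr

    def cd1_update(self, v0):
        """One contrastive-divergence (CD-1) parameter update on a batch of binary visibles."""
        ph0 = sigmoid(v0 @ self.W + self.b)                 # positive phase
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        pv1 = sigmoid(h0 @ self.W.T + self.a)               # one Gibbs step back to visibles
        v1 = (rng.random(pv1.shape) < pv1).astype(float)
        ph1 = sigmoid(v1 @ self.W + self.b)                 # negative phase
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / n
        self.a += self.lr * (v0 - v1).mean(axis=0)
        self.b += self.lr * (ph0 - ph1).mean(axis=0)

    def free_energy(self, v):
        """F(v) = -a.v - sum_j softplus(b_j + W_j.v); log p(v) = -F(v) - log Z."""
        return -(v @ self.a) - np.logaddexp(0.0, v @ self.W + self.b).sum(axis=1)

rbm = RBM(n_v=6, n_h=3)
batch = (rng.random((4, 6)) < 0.5).astype(float)
rbm.cd1_update(batch)
print(rbm.free_energy(batch))
```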
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTo Relieve Your Headache of Training an MRF, Take AdVIL\nWe propose a black-box algorithm called {it Adversarial Variational Inference and Learning} (AdVIL) to perform inference and learning on a general Markov random field (MRF). AdVIL employs two variational distributions to approximately infer the latent variables and estimate the partition function of an MRF, respectively. The two variational distributions provide an estimate of the negative log-likelihood of the MRF as a minimax optimization problem, which is solved by stochastic gradient descent. AdVIL is proven convergent under certain conditions. On one hand, compared with contrastive divergence, AdVIL requires a minimal assumption about the model structure and can deal with a broader family of MRFs. On the other hand, compared with existing black-box methods, AdVIL provides a tighter estimate of the log partition function and achieves much better empirical results.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis manuscript proposes a new approach to fitting Markov Random Fields (MRFs). The general structure of the algorithm is amenable to many MRF structures and can be fairly straightforwardly applied to learning on a wide variety of problems. The theoretical analysis supports that the algorithm is reasonable. Experimental results show strong results on several different MRF models, albeit on relatively small problems. I am giving this manuscript a weak accept. The approach, to me, seems novel in fitting MRFs. However, the theoretical claims and their limitations need to be more realistically discussed, and the empirical results need to be shown on larger and more complex datasets. First, on the empirical results, there is a large literature on fitting RBM models, including many on scaling to much larger models. 
Given that AdVIL actually diverges on the logZ estimation as the number of iterations goes up makes me worry about the efficacy of this approach on larger RBMs (Figure 2 uses v=64 and h=15 while a common RBM on MNIST is v=784 and h=500, a huge difference in scale). As a large literature shows that estimating partition functions or normalizing constants gets much harder as the dimensionality goes up, I worry about this strategy and I think the manuscript would be greatly enhanced by looking at more common model sizes from the ML literature. Also, to me, the classic MCMC+SGD is as much a black box as the proposed technique. I realize that many of the top performing MCMC adapt specifically to the problem, but much of this is transferable between systems and can be put in simple sampling schemes. Table 1 should be updated to include these typical techniques, because it is not clear that the proposed system actually outperforms the typical PCD-1 scheme (especially given in Figure 3). Or succinctly, it should be made clear why I should use this over an MCMC approach. To address these concerns, I would like the authors to answer how their proposed algorithm works in larger MRFs and do a more complete analysis compared to more traditional strategies. Second, the theoretical claims are nice, but the manuscript should be revised to address the limitations of the theory. In particular, Lemma 1 is extremely strong, and I disagree with the assessment that it is *much weaker* than the typical nonparametric assumption. It seems that as the optimization gets close to the solution, this is essentially the exact same condition. The authors need to clarify how exactly this is different, and dive into the practical implementations. This also seems like it would get increasingly difficult as the number of hidden and visible units increases, so they should address how this Lemma holds as the theory scales.", "role": "assistant" } ]
[ "This manuscript proposes a new approach to fitting Markov Random Fields (MRFs).", "The general structure of the algorithm is amenable to many MRF structures and can be fairly straightforwardly applied to learning on a wide variety of problems.", "The theoretical analysis supports that the algorithm is reasonable.", "Experimental results show strong results on several different MRF models, albeit on relatively small problems.", "I am giving this manuscript a weak accept.", "The approach, to me, seems novel in fitting MRFs.", "However, the theoretical claims and their limitations need to be more realistically discussed, and the empirical results need to be shown on larger and more complex datasets.", "First, on the empirical results, there is a large literature on fitting RBM models, including many on scaling to much larger models.", "Given that AdVIL actually diverges on the logZ estimation as the number of iterations goes up makes me worry about the efficacy of this approach on larger RBMs (Figure 2 uses v=64 and h=15 while a common RBM on MNIST is v=784 and h=500, a huge difference in scale).", "As a large literature shows that estimating partition functions or normalizing constants gets much harder as the dimensionality goes up, I worry about this strategy and I think the manuscript would be greatly enhanced by looking at more common model sizes from the ML literature.", "Also, to me, the classic MCMC+SGD is as much a black box as the proposed technique.", "I realize that many of the top performing MCMC adapt specifically to the problem, but much of this is transferable between systems and can be put in simple sampling schemes.", "Table 1 should be updated to include these typical techniques, because it is not clear that the proposed system actually outperforms the typical PCD-1 scheme (especially given in Figure 3).", "Or succinctly, it should be made clear why I should use this over an MCMC approach.", "To address these concerns, I would like the authors to answer how their proposed algorithm works in larger MRFs and do a more complete analysis compared to more traditional strategies.", "Second, the theoretical claims are nice, but the manuscript should be revised to address the limitations of the theory.", "In particular, Lemma 1 is extremely strong, and I disagree with the assessment that it is *much weaker* than the typical nonparametric assumption.", "It seems that as the optimization gets close to the solution, this is essentially the exact same condition.", "The authors need to clarify how exactly this is different, and dive into the practical implementations.", "This also seems like it would get increasingly difficult as the number of hidden and visible units increases, so they should address how this Lemma holds as the theory scales." ]
{ "criticism": 4, "example": 2, "importance_and_relevance": 3, "materials_and_methods": 16, "praise": 5, "presentation_and_reporting": 2, "results_and_discussion": 5, "suggestion_and_solution": 8, "total": 20 }
criticism: 0.2, example: 0.1, importance_and_relevance: 0.15, materials_and_methods: 0.8, praise: 0.25, presentation_and_reporting: 0.1, results_and_discussion: 0.25, suggestion_and_solution: 0.4
total: 20
has_criticism: 1, has_example: 1, has_importance_and_relevance: 1, has_materials_and_methods: 1, has_praise: 1, has_presentation_and_reporting: 1, has_results_and_discussion: 1, has_suggestion_and_solution: 1
reward_value: 2.25, reward_value_length_adjusted: 1.476661, length_penalty: 0.773339
Weakly Supervised Clustering by Exploiting Unique Class Count
A weakly supervised learning based clustering framework is proposed in this paper. As the core of this framework, we introduce a novel multiple instance learning task based on a bag level label called unique class count (ucc), which is the number of unique classes among all instances inside the bag. In this task, no annotations on individual instances inside the bag are needed during training of the models. We mathematically prove that with a perfect ucc classifier, perfect clustering of individual instances inside the bags is possible even when no annotations on individual instances are given during training. We have constructed a neural network based ucc classifier and experimentally shown that the clustering performance of our framework with our weakly supervised ucc classifier is comparable to that of fully supervised learning models where labels for all instances are known. Furthermore, we have tested the applicability of our framework to a real world task of semantic segmentation of breast cancer metastases in histological lymph node sections and shown that the performance of our weakly supervised framework is comparable to the performance of a fully supervised Unet model.
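Because the unusual part is the bag-level supervision, a small illustration of how ucc labels are formed may help; the bags and instance labels below are hypothetical, and in the actual training setting only the (bag, ucc) pairs would be visible to the model.

```python
def unique_class_count(instance_labels):
    """Bag-level ucc label: number of distinct instance classes present in the bag."""
    return len(set(instance_labels))

# Hypothetical bags of instance-level labels (never shown to the weakly supervised model).
bags = [
    [0, 0, 0, 0],        # pure bag            -> ucc = 1
    [0, 1, 0, 1, 1],     # two classes mixed   -> ucc = 2
    [2, 0, 1, 1],        # three classes mixed -> ucc = 3
]
ucc_labels = [unique_class_count(b) for b in bags]
print(ucc_labels)   # [1, 2, 3]

# Training data is then (bag_of_instance_features, ucc) pairs; the instance labels above
# exist only to construct this example.
```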
This paper proposes a new type of weakly supervised clustering / multiple instance learning (MIL) problem in which bags of instances (data points) are labeled with a *unique class count (UCC)*, rather than any bag-level or instance-level labels. For example, a histopathology slide (the bag), consisting of many individual pixels to be labeled (the instances) could be labeled at the bag level only with UCC = 1 (for only healthy or only metastatic) or UCC = 2 (for mixed / border case). The paper then proposes an approach for clustering instances based on the following two-step approach: (1) a UCC model is trained to predict the UCC given an input bag, and (2) the features of this learned UCC model are used in an unsupervised clustering algorithm to the get the instance-level clusters / labels. The paper also provides a theoretical argument for why this approach is feasible. Overall this paper proposes a (1) novel and creative approach that is well validated both (2) theoretically and (3) empirically, and relevant to a real-world problem, and thus I believe should be accepted. In slightly more detail: - (1) MIL where we are given bag-level labels only is a well-studied problem that occurs in many real world settings, such as the histopathology one used in this paper. However to this reader*s knowledge, this is a new variant that is both creative and motivated by an actual real-world study, which is exciting and alone warrants presentation at the conference in my opinion. - (2) The theoretical treatment is high-level, but still serves a clear purpose of establishing feasibility of the proposed method- this modest and appropriate purpose serves the paper well. - (3) The empirical results are thorough---e.g. the use of the two loss components for the UCC model are appropriately ablated, there are a range of baseline approaches compared to, multiple evaluation points are provided (i.e. both UCC prediction and final clustering metrics), and a real world use case is presented--and the results are impressive. Some comments to improve the paper: - The connection of the proposed UCC approach to the motivating histopath example should be explicitly stated upfront to help the reader understand how this method could be used and how it makes sense! - It seems that the KDE element of the UCC model was chosen in part to enable the theoretical analysis? If so, this should be clearly stated to help the reader understand the design rationale. - Either way, it seems that a different approach than the KDE layer could have been taken- this should be added to the ablation experiments
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nWeakly Supervised Clustering by Exploiting Unique Class Count\nA weakly supervised learning based clustering framework is proposed in this paper. As the core of this framework, we introduce a novel multiple instance learning task based on a bag level label called unique class count (ucc), which is the number of unique classes among all instances inside the bag. In this task, no annotations on individual instances inside the bag are needed during training of the models. We mathematically prove that with a perfect ucc classifier, perfect clustering of individual instances inside the bags is possible even when no annotations on individual instances are given during training. We have constructed a neural network based ucc classifier and experimentally shown that the clustering performance of our framework with our weakly supervised ucc classifier is comparable to that of fully supervised learning models where labels for all instances are known. Furthermore, we have tested the applicability of our framework to a real world task of semantic segmentation of breast cancer metastases in histological lymph node sections and shown that the performance of our weakly supervised framework is comparable to the performance of a fully supervised Unet model.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a new type of weakly supervised clustering / multiple instance learning (MIL) problem in which bags of instances (data points) are labeled with a *unique class count (UCC)*, rather than any bag-level or instance-level labels. For example, a histopathology slide (the bag), consisting of many individual pixels to be labeled (the instances) could be labeled at the bag level only with UCC = 1 (for only healthy or only metastatic) or UCC = 2 (for mixed / border case). 
The paper then proposes an approach for clustering instances based on the following two-step approach: (1) a UCC model is trained to predict the UCC given an input bag, and (2) the features of this learned UCC model are used in an unsupervised clustering algorithm to the get the instance-level clusters / labels. The paper also provides a theoretical argument for why this approach is feasible. Overall this paper proposes a (1) novel and creative approach that is well validated both (2) theoretically and (3) empirically, and relevant to a real-world problem, and thus I believe should be accepted. In slightly more detail: - (1) MIL where we are given bag-level labels only is a well-studied problem that occurs in many real world settings, such as the histopathology one used in this paper. However to this reader*s knowledge, this is a new variant that is both creative and motivated by an actual real-world study, which is exciting and alone warrants presentation at the conference in my opinion. - (2) The theoretical treatment is high-level, but still serves a clear purpose of establishing feasibility of the proposed method- this modest and appropriate purpose serves the paper well. - (3) The empirical results are thorough---e.g. the use of the two loss components for the UCC model are appropriately ablated, there are a range of baseline approaches compared to, multiple evaluation points are provided (i.e. both UCC prediction and final clustering metrics), and a real world use case is presented--and the results are impressive. Some comments to improve the paper: - The connection of the proposed UCC approach to the motivating histopath example should be explicitly stated upfront to help the reader understand how this method could be used and how it makes sense! - It seems that the KDE element of the UCC model was chosen in part to enable the theoretical analysis? If so, this should be clearly stated to help the reader understand the design rationale. - Either way, it seems that a different approach than the KDE layer could have been taken- this should be added to the ablation experiments", "role": "assistant" } ]
[ "This paper proposes a new type of weakly supervised clustering / multiple instance learning (MIL) problem in which bags of instances (data points) are labeled with a *unique class count (UCC)*, rather than any bag-level or instance-level labels.", "For example, a histopathology slide (the bag), consisting of many individual pixels to be labeled (the instances) could be labeled at the bag level only with UCC = 1 (for only healthy or only metastatic) or UCC = 2 (for mixed / border case).", "The paper then proposes an approach for clustering instances based on the following two-step approach: (1) a UCC model is trained to predict the UCC given an input bag, and (2) the features of this learned UCC model are used in an unsupervised clustering algorithm to the get the instance-level clusters / labels.", "The paper also provides a theoretical argument for why this approach is feasible.", "Overall this paper proposes a (1) novel and creative approach that is well validated both (2) theoretically and (3) empirically, and relevant to a real-world problem, and thus I believe should be accepted.", "In slightly more detail: - (1) MIL where we are given bag-level labels only is a well-studied problem that occurs in many real world settings, such as the histopathology one used in this paper.", "However to this reader*s knowledge, this is a new variant that is both creative and motivated by an actual real-world study, which is exciting and alone warrants presentation at the conference in my opinion.", "- (2) The theoretical treatment is high-level, but still serves a clear purpose of establishing feasibility of the proposed method- this modest and appropriate purpose serves the paper well.", "- (3) The empirical results are thorough---e.g.", "the use of the two loss components for the UCC model are appropriately ablated, there are a range of baseline approaches compared to, multiple evaluation points are provided (i.e.", "both UCC prediction and final clustering metrics), and a real world use case is presented--and the results are impressive.", "Some comments to improve the paper: - The connection of the proposed UCC approach to the motivating histopath example should be explicitly stated upfront to help the reader understand how this method could be used and how it makes sense!", "- It seems that the KDE element of the UCC model was chosen in part to enable the theoretical analysis?", "If so, this should be clearly stated to help the reader understand the design rationale.", "- Either way, it seems that a different approach than the KDE layer could have been taken- this should be added to the ablation experiments" ]
{ "criticism": 0, "example": 1, "importance_and_relevance": 4, "materials_and_methods": 13, "praise": 5, "presentation_and_reporting": 5, "results_and_discussion": 3, "suggestion_and_solution": 5, "total": 15 }
criticism: 0, example: 0.066667, importance_and_relevance: 0.266667, materials_and_methods: 0.866667, praise: 0.333333, presentation_and_reporting: 0.333333, results_and_discussion: 0.2, suggestion_and_solution: 0.333333
total: 15
has_criticism: 0, has_example: 1, has_importance_and_relevance: 1, has_materials_and_methods: 1, has_praise: 1, has_presentation_and_reporting: 1, has_results_and_discussion: 1, has_suggestion_and_solution: 1
reward_value: 2.4, reward_value_length_adjusted: 2.33687, length_penalty: 0.06313
Scalable and Order-robust Continual Learning with Additive Parameter Decomposition
While recent continual learning methods largely alleviate the catastrophic problem on toy-sized datasets, there are issues that remain to be tackled in order to apply them to real-world problem domains. First, a continual learning model should effectively handle catastrophic forgetting and be efficient to train even with a large number of tasks. Secondly, it needs to tackle the problem of order-sensitivity, where the performance of the tasks largely varies based on the order of the task arrival sequence, as it may cause serious problems where fairness plays a critical role (e.g. medical diagnosis). To tackle these practical challenges, we propose a novel continual learning method that is scalable as well as order-robust, which instead of learning a completely shared set of weights, represents the parameters for each task as a sum of task-shared and sparse task-adaptive parameters. With our Additive Parameter Decomposition (APD), the task-adaptive parameters for earlier tasks remain mostly unaffected, where we update them only to reflect the changes made to the task-shared parameters. This decomposition of parameters effectively prevents catastrophic forgetting and order-sensitivity, while being computation- and memory-efficient. Further, we can achieve even better scalability with APD using hierarchical knowledge consolidation, which clusters the task-adaptive parameters to obtain hierarchically shared parameters. We validate our network with APD, APD-Net, on multiple benchmark datasets against state-of-the-art continual learning methods, which it largely outperforms in accuracy, scalability, and order-robustness.
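One way to realize the shared-plus-sparse-task-adaptive decomposition in a single layer is sketched below. It is not the paper's exact parameterization (APD additionally uses task-adaptive masks and hierarchical knowledge consolidation); layer sizes, the L1 weight, and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DecomposedLinear(nn.Module):
    """Linear layer whose weight is a shared matrix plus a sparse per-task additive matrix."""
    def __init__(self, d_in, d_out, n_tasks):
        super().__init__()
        self.shared = nn.Parameter(0.01 * torch.randn(d_out, d_in))
        self.adaptive = nn.ParameterList(
            [nn.Parameter(torch.zeros(d_out, d_in)) for _ in range(n_tasks)]
        )
        self.bias = nn.Parameter(torch.zeros(d_out))

    def forward(self, x, task_id):
        w = self.shared + self.adaptive[task_id]
        return x @ w.t() + self.bias

    def sparsity_penalty(self, task_id):
        # L1 keeps the task-adaptive part sparse, so per-task memory grows slowly.
        return self.adaptive[task_id].abs().sum()

layer = DecomposedLinear(d_in=8, d_out=4, n_tasks=3)
x = torch.randn(2, 8)
loss = layer(x, task_id=1).pow(2).mean() + 1e-3 * layer.sparsity_penalty(1)
loss.backward()
```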
The paper introduces an additive parameter decomposition (APD) approach to continual learning in the sequential task classification setting and evaluates it across a number of dimensions, including task order robustness, which is comparatively less well researched. Extensive experiments show the novel approach has superior performance to relevant baselines, and provides important data about the order robustness of popular existing approaches. Pros: - Paper tackles the task-order sensitivity challenge in continual learning and introduces an effective order-robust approach. - Method is scalable, parameter growth is logarithmic, forgetting of irrelevant knowledge comes for free. - Baselines are relevant, although they only cover one of the families of approaches. Performance looks consistently better than baselines both in terms of classification accuracy as well as order robustness. - Although non-standard, the Omniglot experiment with 100 tasks is interesting w.r.t. both measures. Performance is on par with STL, also in terms of robustness to order. - Order robustness measures are introduced and motivated. This is particularly relevant for future research beyond simple accuracy comparisons. Cons: - Network architecture used for experiments as well as the exact details of the datasets are non-standard, making results very hard to compare with other papers, so one needs to rely on provided baselines only. Please post citations for papers where the experimental methodologies were adopted from if this is not the case, I many not be familiar with them! - Classification accuracies seems relatively low across the board, especially for CIFAR-100 results. Could you please report some results in the experimental setting used by one of your baselines in the original paper? I am inclined to recommend acceptance due to novelty of order robustness analyses and competitive properties of the method, but I would like clarifications to my experimental questions.
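The order-robustness measures the review praises are not defined in the abstract. One simple stand-in, not necessarily the paper's exact metric, is the per-task spread of final accuracy across different task-arrival orders; the numbers below are made up purely to show the computation.

```python
import numpy as np

# acc[o, t] = final accuracy on task t when the tasks arrive in order o (illustrative values).
acc = np.array([
    [0.71, 0.64, 0.69],   # order A
    [0.66, 0.70, 0.65],   # order B
    [0.73, 0.62, 0.68],   # order C
])

mean_accuracy = acc.mean()                            # overall performance
per_task_spread = acc.max(axis=0) - acc.min(axis=0)   # how much each task's result moves with order
order_sensitivity = per_task_spread.mean()            # lower means more order-robust
print(round(float(mean_accuracy), 3), round(float(order_sensitivity), 3))
```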
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nScalable and Order-robust Continual Learning with Additive Parameter Decomposition\nWhile recent continual learning methods largely alleviate the catastrophic problem on toy-sized datasets, there are issues that remain to be tackled in order to apply them to real-world problem domains. First, a continual learning model should effectively handle catastrophic forgetting and be efficient to train even with a large number of tasks. Secondly, it needs to tackle the problem of order-sensitivity, where the performance of the tasks largely varies based on the order of the task arrival sequence, as it may cause serious problems where fairness plays a critical role (e.g. medical diagnosis). To tackle these practical challenges, we propose a novel continual learning method that is scalable as well as order-robust, which instead of learning a completely shared set of weights, represents the parameters for each task as a sum of task-shared and sparse task-adaptive parameters. With our Additive Parameter Decomposition (APD), the task-adaptive parameters for earlier tasks remain mostly unaffected, where we update them only to reflect the changes made to the task-shared parameters. This decomposition of parameters effectively prevents catastrophic forgetting and order-sensitivity, while being computation- and memory-efficient. Further, we can achieve even better scalability with APD using hierarchical knowledge consolidation, which clusters the task-adaptive parameters to obtain hierarchically shared parameters. We validate our network with APD, APD-Net, on multiple benchmark datasets against state-of-the-art continual learning methods, which it largely outperforms in accuracy, scalability, and order-robustness.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper introduces an additive parameter decomposition (APD) approach to continual learning in the sequential task classification setting and evaluates it across a number of dimensions, including task order robustness, which is comparatively less well researched. Extensive experiments show the novel approach has superior performance to relevant baselines, and provides important data about the order robustness of popular existing approaches. Pros: - Paper tackles the task-order sensitivity challenge in continual learning and introduces an effective order-robust approach. - Method is scalable, parameter growth is logarithmic, forgetting of irrelevant knowledge comes for free. - Baselines are relevant, although they only cover one of the families of approaches. Performance looks consistently better than baselines both in terms of classification accuracy as well as order robustness. - Although non-standard, the Omniglot experiment with 100 tasks is interesting w.r.t. both measures. Performance is on par with STL, also in terms of robustness to order. - Order robustness measures are introduced and motivated. This is particularly relevant for future research beyond simple accuracy comparisons. Cons: - Network architecture used for experiments as well as the exact details of the datasets are non-standard, making results very hard to compare with other papers, so one needs to rely on provided baselines only. Please post citations for papers where the experimental methodologies were adopted from if this is not the case, I many not be familiar with them! - Classification accuracies seems relatively low across the board, especially for CIFAR-100 results. Could you please report some results in the experimental setting used by one of your baselines in the original paper? I am inclined to recommend acceptance due to novelty of order robustness analyses and competitive properties of the method, but I would like clarifications to my experimental questions.", "role": "assistant" } ]
[ "The paper introduces an additive parameter decomposition (APD) approach to continual learning in the sequential task classification setting and evaluates it across a number of dimensions, including task order robustness, which is comparatively less well researched.", "Extensive experiments show the novel approach has superior performance to relevant baselines, and provides important data about the order robustness of popular existing approaches.", "Pros: - Paper tackles the task-order sensitivity challenge in continual learning and introduces an effective order-robust approach.", "- Method is scalable, parameter growth is logarithmic, forgetting of irrelevant knowledge comes for free.", "- Baselines are relevant, although they only cover one of the families of approaches.", "Performance looks consistently better than baselines both in terms of classification accuracy as well as order robustness.", "- Although non-standard, the Omniglot experiment with 100 tasks is interesting w.r.t.", "both measures.", "Performance is on par with STL, also in terms of robustness to order.", "- Order robustness measures are introduced and motivated.", "This is particularly relevant for future research beyond simple accuracy comparisons.", "Cons: - Network architecture used for experiments as well as the exact details of the datasets are non-standard, making results very hard to compare with other papers, so one needs to rely on provided baselines only.", "Please post citations for papers where the experimental methodologies were adopted from if this is not the case, I many not be familiar with them!", "- Classification accuracies seems relatively low across the board, especially for CIFAR-100 results.", "Could you please report some results in the experimental setting used by one of your baselines in the original paper?", "I am inclined to recommend acceptance due to novelty of order robustness analyses and competitive properties of the method, but I would like clarifications to my experimental questions." ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 16, "praise": 6, "presentation_and_reporting": 3, "results_and_discussion": 5, "suggestion_and_solution": 4, "total": 16 }
criticism: 0.1875, example: 0, importance_and_relevance: 0.1875, materials_and_methods: 1, praise: 0.375, presentation_and_reporting: 0.1875, results_and_discussion: 0.3125, suggestion_and_solution: 0.25
total: 16
has_criticism: 1, has_example: 0, has_importance_and_relevance: 1, has_materials_and_methods: 1, has_praise: 1, has_presentation_and_reporting: 1, has_results_and_discussion: 1, has_suggestion_and_solution: 1
reward_value: 2.5, reward_value_length_adjusted: 2.357958, length_penalty: 0.142042
Poly-encoders: Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring
The use of deep pre-trained transformers has led to remarkable progress in a number of applications (Devlin et al., 2018). For tasks that make pairwise comparisons between sequences, matching a given input with a corresponding label, two approaches are common: Cross-encoders performing full self-attention over the pair and Bi-encoders encoding the pair separately. The former often performs better, but is too slow for practical use. In this work, we develop a new transformer architecture, the Poly-encoder, that learns global rather than token level self-attention features. We perform a detailed comparison of all three approaches, including what pre-training and fine-tuning strategies work best. We show our models achieve state-of-the-art results on four tasks; that Poly-encoders are faster than Cross-encoders and more accurate than Bi-encoders; and that the best results are obtained by pre-training on large datasets similar to the downstream tasks.
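A stripped-down sketch of the attention pattern the abstract describes is given below. Real poly-encoders use transformer encoders, scaled dot-product attention, and learned codes; here plain dot-product attention over random vectors only shows where the small set of global context features sits between the bi-encoder and cross-encoder extremes.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def poly_score(ctx_tokens, codes, cand_vec):
    """Score one (context, candidate) pair with poly-encoder-style attention.

    ctx_tokens: (n, d) token-level context embeddings
    codes:      (m, d) query codes extracting m global context features
    cand_vec:   (d,)   single encoded candidate vector
    """
    global_feats = softmax(codes @ ctx_tokens.T) @ ctx_tokens    # (m, d): codes attend to tokens
    ctx_vec = softmax(cand_vec @ global_feats.T) @ global_feats  # (d,): candidate attends to the m features
    return float(ctx_vec @ cand_vec)

rng = np.random.default_rng(0)
d, n, m = 16, 10, 4
score = poly_score(rng.standard_normal((n, d)), rng.standard_normal((m, d)), rng.standard_normal(d))
print(score)
```

Because the candidate only interacts with m precomputed context vectors, candidate representations can be cached as in a bi-encoder while retaining some input-dependent interaction.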
This paper describes an approach for scoring sentences based on pre-trained transformers. The paper describes two main existing approaches for this task, namely bi-encoders and cross-encoders and then proposes a new formulation called poly-encoders which aims to sit between the existing approaches offering high accuracy -- similarly to cross-encoders -- and high efficiency -- similarly to bi-encoders. The paper is well written and although this is not related to my research I enjoying reading it. The approach proposed seems reasonable to me, and of sufficient novelty while the results presented are impressive. Moreover the paper seems a good fit for ICLR.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPoly-encoders: Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring\nThe use of deep pre-trained transformers has led to remarkable progress in a number of applications (Devlin et al., 2018). For tasks that make pairwise comparisons between sequences, matching a given input with a corresponding label, two approaches are common: Cross-encoders performing full self-attention over the pair and Bi-encoders encoding the pair separately. The former often performs better, but is too slow for practical use. In this work, we develop a new transformer architecture, the Poly-encoder, that learns global rather than token level self-attention features. We perform a detailed comparison of all three approaches, including what pre-training and fine-tuning strategies work best. We show our models achieve state-of-the-art results on four tasks; that Poly-encoders are faster than Cross-encoders and more accurate than Bi-encoders; and that the best results are obtained by pre-training on large datasets similar to the downstream tasks.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper describes an approach for scoring sentences based on pre-trained transformers. The paper describes two main existing approaches for this task, namely bi-encoders and cross-encoders and then proposes a new formulation called poly-encoders which aims to sit between the existing approaches offering high accuracy -- similarly to cross-encoders -- and high efficiency -- similarly to bi-encoders. The paper is well written and although this is not related to my research I enjoying reading it. The approach proposed seems reasonable to me, and of sufficient novelty while the results presented are impressive. Moreover the paper seems a good fit for ICLR.", "role": "assistant" } ]
[ "This paper describes an approach for scoring sentences based on pre-trained transformers.", "The paper describes two main existing approaches for this task, namely bi-encoders and cross-encoders and then proposes a new formulation called poly-encoders which aims to sit between the existing approaches offering high accuracy -- similarly to cross-encoders -- and high efficiency -- similarly to bi-encoders.", "The paper is well written and although this is not related to my research I enjoying reading it.", "The approach proposed seems reasonable to me, and of sufficient novelty while the results presented are impressive.", "Moreover the paper seems a good fit for ICLR." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 3, "praise": 3, "presentation_and_reporting": 2, "results_and_discussion": 1, "suggestion_and_solution": 0, "total": 5 }
0.2
0
0.2
0.6
0.6
0.4
0.2
0
5
1
0
1
1
1
1
1
0
2.2
1.189924
1.010076
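The unlabeled numeric lines that follow each record's criteria_count are internally consistent across every record in this dump: dividing each category count by the total reproduces the first eight fractions, a count greater than zero reproduces the later 1/0 flags, the fractions sum to the first reward figure, and subtracting the final figure from that sum gives the middle one. The short Python sketch below is a hypothetical reconstruction of that arithmetic inferred only from the records shown here, not code shipped with the dataset; the function name and the treatment of length_penalty as a given input are assumptions.

def compute_derived_fields(criteria_count, length_penalty):
    # Hypothetical helper, inferred from the dump's numeric columns.
    categories = [
        "criticism", "example", "importance_and_relevance",
        "materials_and_methods", "praise", "presentation_and_reporting",
        "results_and_discussion", "suggestion_and_solution",
    ]
    total = criteria_count["total"]  # equals the number of review_sentences entries
    derived = {}
    for cat in categories:
        count = criteria_count[cat]
        derived[cat] = round(count / total, 6)   # fraction of sentences with this label
        derived["has_" + cat] = int(count > 0)   # presence flag (the later 1/0 lines)
    derived["total"] = total
    derived["reward_value"] = round(sum(derived[c] for c in categories), 6)
    derived["reward_value_length_adjusted"] = round(
        derived["reward_value"] - length_penalty, 6)
    derived["length_penalty"] = length_penalty
    return derived

# Check against the record directly above:
# compute_derived_fields({"criticism": 1, "example": 0, "importance_and_relevance": 1,
#                         "materials_and_methods": 3, "praise": 3,
#                         "presentation_and_reporting": 2, "results_and_discussion": 1,
#                         "suggestion_and_solution": 0, "total": 5}, 1.010076)
# yields criticism 0.2, materials_and_methods 0.6, reward_value 2.2 and
# reward_value_length_adjusted 1.189924, matching the values listed.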
Neural Stored-program Memory
Neural networks powered with external memory simulate computer behaviors. These models, which use the memory to store data for a neural controller, can learn algorithms and other complex tasks. In this paper, we introduce a new memory to store weights for the controller, analogous to the stored-program memory in modern computer architectures. The proposed model, dubbed Neural Stored-program Memory, augments current memory-augmented neural networks, creating differentiable machines that can switch programs through time, adapt to variable contexts and thus fully resemble the Universal Turing Machine or Von Neumann Architecture. A wide range of experiments demonstrate that the resulting machines not only excel in classical algorithmic problems, but also have potential for compositional, continual, few-shot learning and question-answering tasks.
## Update I am changing my score to 7 (not adjusting the discretised score in the openreview interface though) Summary: The authors present the Neural Stored-program Memory (NSM), an architectural addition which is suitable for a large class of existing MANN models (NTM, DNC, LRUA etc). The model contains a second memory of key value pairs, which on each timestep can be read via cosine distance similarity on the keys, and for which the values contain all the weights necessary for the Interface Network (IN). The IN also receives the recurrent controller output, and produces an interface vector which is used to read or write to the regular memory. The definition is sufficiently general to allow the computation between an interface vector and the regular memory to be done according to NTM/DNC/etc schemes. NSM allows a MANN to effectively switch programs at every timestep, more closely matching classical computers and the Universal Turing Machine. The authors include a wide range of experiments, both showing faster learning on standard NTM tasks, and also introduce *Sequencing tasks* for which a single episode contains multiple instances of the standard NTM tasks in sequence. In principle this should allow the whole system to learn different programs for the subproblems, and dynamically switch between them as soon as the task changes. In principle a standard NTM could learn to do many of these tasks in a row, but the authors show that even when an NTM combined with NSM (denoted NUTM) has fewer trainable weights in total than a plain NTM, for some task sequence combinations the NUTM learns much faster. Experiments on continual learning, few shot learning and text based question answering back up the wide applicability of this technique, with many strong results including new SOTA on bAbI. Decision: Accept. Key reasons are that this is a relatively straightforward application of hypernetworks within a recurrent controller, which both has appealing justifications in the context of both Von Neumann and Turing models of computation. This simplicity is a positive, and the authors make a convincing argument that NSM can be applied to any MANN solving a sufficiently complex problem. A secondary reason is the extensive evaluation & hyperparam details, and while I do have some minor points on which I think the paper could be clarified (see below) I think this is overall a very nice paper. With the clarifications below addressed, I would give this paper a 7 but from the options I have available to choose from, 6 is the best fit for the current manuscript. Supporting arguments: The visualization of *program distribution* shows extremely clear phase changes as the underlying task changes - both within a single task (the reading phase vs writing phase for repeat copy) and across task boundaries (copy -> associative recall). Combined with the learning curves / results in the various tables, it is to me clear that the model is performing as designed. Despite the name of the *Meta Network*, and having a vague flavour of metalearning, the model does not require any elaborate *backprop through the inner training loop* of MAML et al, which is a benefit in my opinion. Additional feedback: Some of the experiments could be slightly more convincing - particularly Figure 5 which is lacking error bars. In my experience these architectures can have relatively high variance in performance, compared to other supervised domains, as evidenced by the spicy learning curves even in Figure 2 a). 
Error bars across multiple runs for figure 5 would be good, particularly for the points where the lines are close (eg after training on PS, the performance for C and RC is close for both models). The formatting of some figures and graphs could be improved: * Figure 1 - the graph could use some more text labels, rather than mathematical notation which needs to be referred to below. The colours are also slightly confusing - the program memory has slightly different shades of orange for the keys, slightly different shades of green for the values, whereas the regular memory has a slightly wider variety of colours. I was not sure at first whether this should be interpreted as indicating something important. Additionally, the value read from the NSM is reshaped to a 2x4 shape with various shades of blue, which then becomes the weights for a pink network? I think the colour adds little and may confuse people. There are some other issues, such as the value which should really come from the main memory as a result of the interface vector. With the supporting text, understanding the system is not hard, but I feel another pass over the diagram would benefit the camera ready version - consider ditching colour entirely, unless there is going to be some consistent meaning to things being the same colour vs not. Text labels with the various arrows (eg *vector used to lookup in NSM*, *used as network weights for Interface Network*) may improve clarity. should also be labelled on the diagram. * The y axis scale in figure 5 is very confusing - it took me several looks before I noticed that it goes from 50% to 100%, due to each number having digits in different vertical positions and different font sizes * Figure 2 y axis scale is a bit too small to easily read. * Figure 3: both read and write locations are shown on a single plot, but the green line that separates them is lamost unreadable on a printout. The task dependent, presumably manually chosen, approach to picking where this visualisation toggle should be made is a bit arbitrary - I would prefer to see read and write locations as separate subplots, as in the appendix. I found the exact details of number of programs versus number of heads (and the type of those heads) a bit confusing. In www.github.com/deepmind/dnc the code has *number of read heads* and *number of write heads* being two independently set integer parameters. This paper refers to * control heads*, but I am not exactly clear on how these are divded between read and write duties. Algorithm 1 references that write heads return zero on line 8 but not other mention of the two mutually exclusive types is made. The text towards the end of section 4.1 refers to the *no-read* strategy being assigned mostly to one program - this makes it sound like each program can (softly?) interpolate between reading or writing (or both?). The start of appendix B shows program distribution for read and write heads separately, but this then begs the question of what is happening in the examples for which only one program distribution is shown (eg Figure 3) - clearly we need to read and write for all of these tasks, so is one head with two programs doing both simultaneously? In the interests of reproducability, clarification here is essential. 
Separately, the decision of having a different program memory per control head is interesting - it*s not obvious to me why this would be necessary, surely one program memory would be sufficient as long as thre is a different (alg 1 line 5) network to choose a different program for the head? It would be good to see a line added to the paper justifying this choice. It seems like not all training hyperparams are specified in the appendix - eg the settings specified in Table 9 only apply to the few shot learning task, additional hyperparams are specified for bAbI, but I cannot see the training hyperparams for the experiments in sections 4.1 - 4.4. Minor correction: *As Turing Machine is finite state automata* -> *As Turing Machines are finite state automata*
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNeural Stored-program Memory\nNeural networks powered with external memory simulate computer behaviors. These models, which use the memory to store data for a neural controller, can learn algorithms and other complex tasks. In this paper, we introduce a new memory to store weights for the controller, analogous to the stored-program memory in modern computer architectures. The proposed model, dubbed Neural Stored-program Memory, augments current memory-augmented neural networks, creating differentiable machines that can switch programs through time, adapt to variable contexts and thus fully resemble the Universal Turing Machine or Von Neumann Architecture. A wide range of experiments demonstrate that the resulting machines not only excel in classical algorithmic problems, but also have potential for compositional, continual, few-shot learning and question-answering tasks.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n## Update I am changing my score to 7 (not adjusting the discretised score in the openreview interface though) Summary: The authors present the Neural Stored-program Memory (NSM), an architectural addition which is suitable for a large class of existing MANN models (NTM, DNC, LRUA etc). The model contains a second memory of key value pairs, which on each timestep can be read via cosine distance similarity on the keys, and for which the values contain all the weights necessary for the Interface Network (IN). The IN also receives the recurrent controller output, and produces an interface vector which is used to read or write to the regular memory. The definition is sufficiently general to allow the computation between an interface vector and the regular memory to be done according to NTM/DNC/etc schemes. NSM allows a MANN to effectively switch programs at every timestep, more closely matching classical computers and the Universal Turing Machine. 
The authors include a wide range of experiments, both showing faster learning on standard NTM tasks, and also introduce *Sequencing tasks* for which a single episode contains multiple instances of the standard NTM tasks in sequence. In principle this should allow the whole system to learn different programs for the subproblems, and dynamically switch between them as soon as the task changes. In principle a standard NTM could learn to do many of these tasks in a row, but the authors show that even when an NTM combined with NSM (denoted NUTM) has fewer trainable weights in total than a plain NTM, for some task sequence combinations the NUTM learns much faster. Experiments on continual learning, few shot learning and text based question answering back up the wide applicability of this technique, with many strong results including new SOTA on bAbI. Decision: Accept. Key reasons are that this is a relatively straightforward application of hypernetworks within a recurrent controller, which both has appealing justifications in the context of both Von Neumann and Turing models of computation. This simplicity is a positive, and the authors make a convincing argument that NSM can be applied to any MANN solving a sufficiently complex problem. A secondary reason is the extensive evaluation & hyperparam details, and while I do have some minor points on which I think the paper could be clarified (see below) I think this is overall a very nice paper. With the clarifications below addressed, I would give this paper a 7 but from the options I have available to choose from, 6 is the best fit for the current manuscript. Supporting arguments: The visualization of *program distribution* shows extremely clear phase changes as the underlying task changes - both within a single task (the reading phase vs writing phase for repeat copy) and across task boundaries (copy -> associative recall). Combined with the learning curves / results in the various tables, it is to me clear that the model is performing as designed. Despite the name of the *Meta Network*, and having a vague flavour of metalearning, the model does not require any elaborate *backprop through the inner training loop* of MAML et al, which is a benefit in my opinion. Additional feedback: Some of the experiments could be slightly more convincing - particularly Figure 5 which is lacking error bars. In my experience these architectures can have relatively high variance in performance, compared to other supervised domains, as evidenced by the spicy learning curves even in Figure 2 a). Error bars across multiple runs for figure 5 would be good, particularly for the points where the lines are close (eg after training on PS, the performance for C and RC is close for both models). The formatting of some figures and graphs could be improved: * Figure 1 - the graph could use some more text labels, rather than mathematical notation which needs to be referred to below. The colours are also slightly confusing - the program memory has slightly different shades of orange for the keys, slightly different shades of green for the values, whereas the regular memory has a slightly wider variety of colours. I was not sure at first whether this should be interpreted as indicating something important. Additionally, the value read from the NSM is reshaped to a 2x4 shape with various shades of blue, which then becomes the weights for a pink network? I think the colour adds little and may confuse people. 
There are some other issues, such as the value which should really come from the main memory as a result of the interface vector. With the supporting text, understanding the system is not hard, but I feel another pass over the diagram would benefit the camera ready version - consider ditching colour entirely, unless there is going to be some consistent meaning to things being the same colour vs not. Text labels with the various arrows (eg *vector used to lookup in NSM*, *used as network weights for Interface Network*) may improve clarity. should also be labelled on the diagram. * The y axis scale in figure 5 is very confusing - it took me several looks before I noticed that it goes from 50% to 100%, due to each number having digits in different vertical positions and different font sizes * Figure 2 y axis scale is a bit too small to easily read. * Figure 3: both read and write locations are shown on a single plot, but the green line that separates them is lamost unreadable on a printout. The task dependent, presumably manually chosen, approach to picking where this visualisation toggle should be made is a bit arbitrary - I would prefer to see read and write locations as separate subplots, as in the appendix. I found the exact details of number of programs versus number of heads (and the type of those heads) a bit confusing. In www.github.com/deepmind/dnc the code has *number of read heads* and *number of write heads* being two independently set integer parameters. This paper refers to * control heads*, but I am not exactly clear on how these are divded between read and write duties. Algorithm 1 references that write heads return zero on line 8 but not other mention of the two mutually exclusive types is made. The text towards the end of section 4.1 refers to the *no-read* strategy being assigned mostly to one program - this makes it sound like each program can (softly?) interpolate between reading or writing (or both?). The start of appendix B shows program distribution for read and write heads separately, but this then begs the question of what is happening in the examples for which only one program distribution is shown (eg Figure 3) - clearly we need to read and write for all of these tasks, so is one head with two programs doing both simultaneously? In the interests of reproducability, clarification here is essential. Separately, the decision of having a different program memory per control head is interesting - it*s not obvious to me why this would be necessary, surely one program memory would be sufficient as long as thre is a different (alg 1 line 5) network to choose a different program for the head? It would be good to see a line added to the paper justifying this choice. It seems like not all training hyperparams are specified in the appendix - eg the settings specified in Table 9 only apply to the few shot learning task, additional hyperparams are specified for bAbI, but I cannot see the training hyperparams for the experiments in sections 4.1 - 4.4. Minor correction: *As Turing Machine is finite state automata* -> *As Turing Machines are finite state automata*", "role": "assistant" } ]
[ "## Update I am changing my score to 7 (not adjusting the discretised score in the openreview interface though) Summary: The authors present the Neural Stored-program Memory (NSM), an architectural addition which is suitable for a large class of existing MANN models (NTM, DNC, LRUA etc).", "The model contains a second memory of key value pairs, which on each timestep can be read via cosine distance similarity on the keys, and for which the values contain all the weights necessary for the Interface Network (IN).", "The IN also receives the recurrent controller output, and produces an interface vector which is used to read or write to the regular memory.", "The definition is sufficiently general to allow the computation between an interface vector and the regular memory to be done according to NTM/DNC/etc schemes.", "NSM allows a MANN to effectively switch programs at every timestep, more closely matching classical computers and the Universal Turing Machine.", "The authors include a wide range of experiments, both showing faster learning on standard NTM tasks, and also introduce *Sequencing tasks* for which a single episode contains multiple instances of the standard NTM tasks in sequence.", "In principle this should allow the whole system to learn different programs for the subproblems, and dynamically switch between them as soon as the task changes.", "In principle a standard NTM could learn to do many of these tasks in a row, but the authors show that even when an NTM combined with NSM (denoted NUTM) has fewer trainable weights in total than a plain NTM, for some task sequence combinations the NUTM learns much faster.", "Experiments on continual learning, few shot learning and text based question answering back up the wide applicability of this technique, with many strong results including new SOTA on bAbI.", "Decision: Accept.", "Key reasons are that this is a relatively straightforward application of hypernetworks within a recurrent controller, which both has appealing justifications in the context of both Von Neumann and Turing models of computation.", "This simplicity is a positive, and the authors make a convincing argument that NSM can be applied to any MANN solving a sufficiently complex problem.", "A secondary reason is the extensive evaluation & hyperparam details, and while I do have some minor points on which I think the paper could be clarified (see below) I think this is overall a very nice paper.", "With the clarifications below addressed, I would give this paper a 7 but from the options I have available to choose from, 6 is the best fit for the current manuscript.", "Supporting arguments: The visualization of *program distribution* shows extremely clear phase changes as the underlying task changes - both within a single task (the reading phase vs writing phase for repeat copy) and across task boundaries (copy -> associative recall).", "Combined with the learning curves / results in the various tables, it is to me clear that the model is performing as designed.", "Despite the name of the *Meta Network*, and having a vague flavour of metalearning, the model does not require any elaborate *backprop through the inner training loop* of MAML et al, which is a benefit in my opinion.", "Additional feedback: Some of the experiments could be slightly more convincing - particularly Figure 5 which is lacking error bars.", "In my experience these architectures can have relatively high variance in performance, compared to other supervised domains, as evidenced by the spicy learning 
curves even in Figure 2 a).", "Error bars across multiple runs for figure 5 would be good, particularly for the points where the lines are close (eg after training on PS, the performance for C and RC is close for both models).", "The formatting of some figures and graphs could be improved: * Figure 1 - the graph could use some more text labels, rather than mathematical notation which needs to be referred to below.", "The colours are also slightly confusing - the program memory has slightly different shades of orange for the keys, slightly different shades of green for the values, whereas the regular memory has a slightly wider variety of colours.", "I was not sure at first whether this should be interpreted as indicating something important.", "Additionally, the value read from the NSM is reshaped to a 2x4 shape with various shades of blue, which then becomes the weights for a pink network?", "I think the colour adds little and may confuse people.", "There are some other issues, such as the value which should really come from the main memory as a result of the interface vector.", "With the supporting text, understanding the system is not hard, but I feel another pass over the diagram would benefit the camera ready version - consider ditching colour entirely, unless there is going to be some consistent meaning to things being the same colour vs not.", "Text labels with the various arrows (eg *vector used to lookup in NSM*, *used as network weights for Interface Network*) may improve clarity.", "should also be labelled on the diagram.", "* The y axis scale in figure 5 is very confusing - it took me several looks before I noticed that it goes from 50% to 100%, due to each number having digits in different vertical positions and different font sizes * Figure 2 y axis scale is a bit too small to easily read.", "* Figure 3: both read and write locations are shown on a single plot, but the green line that separates them is lamost unreadable on a printout.", "The task dependent, presumably manually chosen, approach to picking where this visualisation toggle should be made is a bit arbitrary - I would prefer to see read and write locations as separate subplots, as in the appendix.", "I found the exact details of number of programs versus number of heads (and the type of those heads) a bit confusing.", "In www.github.com/deepmind/dnc the code has *number of read heads* and *number of write heads* being two independently set integer parameters.", "This paper refers to * control heads*, but I am not exactly clear on how these are divded between read and write duties.", "Algorithm 1 references that write heads return zero on line 8 but not other mention of the two mutually exclusive types is made.", "The text towards the end of section 4.1 refers to the *no-read* strategy being assigned mostly to one program - this makes it sound like each program can (softly?)", "interpolate between reading or writing (or both?).", "The start of appendix B shows program distribution for read and write heads separately, but this then begs the question of what is happening in the examples for which only one program distribution is shown (eg Figure 3) - clearly we need to read and write for all of these tasks, so is one head with two programs doing both simultaneously?", "In the interests of reproducability, clarification here is essential.", "Separately, the decision of having a different program memory per control head is interesting - it*s not obvious to me why this would be necessary, surely one program memory would 
be sufficient as long as thre is a different (alg 1 line 5) network to choose a different program for the head?", "It would be good to see a line added to the paper justifying this choice.", "It seems like not all training hyperparams are specified in the appendix - eg the settings specified in Table 9 only apply to the few shot learning task, additional hyperparams are specified for bAbI, but I cannot see the training hyperparams for the experiments in sections 4.1 - 4.4.", "Minor correction: *As Turing Machine is finite state automata* -> *As Turing Machines are finite state automata*" ]
{ "criticism": 10, "example": 11, "importance_and_relevance": 2, "materials_and_methods": 25, "praise": 4, "presentation_and_reporting": 23, "results_and_discussion": 5, "suggestion_and_solution": 14, "total": 44 }
0.227273
0.25
0.045455
0.568182
0.090909
0.522727
0.113636
0.318182
44
1
1
1
1
1
1
1
1
2.136364
-13.030551
15.166915
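The conversations field in each record appears to be assembled mechanically from the title, abstract, thinking_trace, and review columns, using the fixed reviewer prompt visible in the records themselves. The sketch below reconstructs that assembly under that assumption; build_conversations is an invented name and the template is copied from the records, not from any documented spec.

def build_conversations(title, abstract, thinking_trace, review):
    # Hypothetical reconstruction of the two-message format seen in this dump.
    user_msg = (
        "You are a member of the scientific community tasked with peer review. "
        "Review the following paper title and abstract.\n"
        "### Title + Abstract\n"
        + title + "\n" + abstract
    )
    # thinking_trace already carries its own <think>...</think> wrapper in these records
    assistant_msg = thinking_trace + "\n\n\n\n" + review
    return [
        {"content": user_msg, "role": "user"},
        {"content": assistant_msg, "role": "assistant"},
    ]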
Neural Stored-program Memory
Neural networks powered with external memory simulate computer behaviors. These models, which use the memory to store data for a neural controller, can learn algorithms and other complex tasks. In this paper, we introduce a new memory to store weights for the controller, analogous to the stored-program memory in modern computer architectures. The proposed model, dubbed Neural Stored-program Memory, augments current memory-augmented neural networks, creating differentiable machines that can switch programs through time, adapt to variable contexts and thus fully resemble the Universal Turing Machine or Von Neumann Architecture. A wide range of experiments demonstrate that the resulting machines not only excel in classical algorithmic problems, but also have potential for compositional, continual, few-shot learning and question-answering tasks.
= Summary A variation of Neural Turing Machines (and derived models) storing the configuration of the controller in a separate memory, which is then *softly* read during evaluation of the NTM. Experiments show moderate improvements on some simple multi-task problems. = Strong/Weak Points + The idea of generalising NTMs to *universal* TMs is interesting in itself ... - ... however, the presented solution seems to be only half-way there, as the memory used for the *program* is still separate from the memory the NUTM operates on. Hence, modifying the program itself is not possible, which UTMs can do (even though it*s never useful in practice...) - The core novelty relative to standard NTMs is that in principle, several separate programs can be stored, and that at each timestep, the *correct* one can be read. However this read mechanism is weak, and requires extra tuning with a specialized loss (Eq. (6)) ~ It remains unclear where this is leading - clearly NTMs and NUTMs (or their DNC siblings) are currently not useful for interesting tasks, and it remains unclear what is missing to get there. The current paper does not try show the way there. - The writing is oddly inconsistent, and important technical details (such as the memory read/write mechanism) are not documented. I would prefer the paper to be self-contained, to make it easier to understand the differences and commonalities between NTM memory reads and the proposed NSM mechanism. = Recommendation Overall, I don*t see clear, actionable insights in this submission, and thus believe that it will not provide great value to the ICLR audience; hence I would recommend rejecting the paper to allow the authors to clarify their writing and provide more experimental evidence of the usefulness of their contribution. = Minor Comments + Page 6: *As NUTM requires fewer training samples to converge, it generalizes better to unseen sequences that are longer than training sequences.* - I don*t understand the connecting between the first and second part of the sentence. This seems pure speculation, not a fact.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNeural Stored-program Memory\nNeural networks powered with external memory simulate computer behaviors. These models, which use the memory to store data for a neural controller, can learn algorithms and other complex tasks. In this paper, we introduce a new memory to store weights for the controller, analogous to the stored-program memory in modern computer architectures. The proposed model, dubbed Neural Stored-program Memory, augments current memory-augmented neural networks, creating differentiable machines that can switch programs through time, adapt to variable contexts and thus fully resemble the Universal Turing Machine or Von Neumann Architecture. A wide range of experiments demonstrate that the resulting machines not only excel in classical algorithmic problems, but also have potential for compositional, continual, few-shot learning and question-answering tasks.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n= Summary A variation of Neural Turing Machines (and derived models) storing the configuration of the controller in a separate memory, which is then *softly* read during evaluation of the NTM. Experiments show moderate improvements on some simple multi-task problems. = Strong/Weak Points + The idea of generalising NTMs to *universal* TMs is interesting in itself ... - ... however, the presented solution seems to be only half-way there, as the memory used for the *program* is still separate from the memory the NUTM operates on. Hence, modifying the program itself is not possible, which UTMs can do (even though it*s never useful in practice...) - The core novelty relative to standard NTMs is that in principle, several separate programs can be stored, and that at each timestep, the *correct* one can be read. However this read mechanism is weak, and requires extra tuning with a specialized loss (Eq. 
(6)) ~ It remains unclear where this is leading - clearly NTMs and NUTMs (or their DNC siblings) are currently not useful for interesting tasks, and it remains unclear what is missing to get there. The current paper does not try show the way there. - The writing is oddly inconsistent, and important technical details (such as the memory read/write mechanism) are not documented. I would prefer the paper to be self-contained, to make it easier to understand the differences and commonalities between NTM memory reads and the proposed NSM mechanism. = Recommendation Overall, I don*t see clear, actionable insights in this submission, and thus believe that it will not provide great value to the ICLR audience; hence I would recommend rejecting the paper to allow the authors to clarify their writing and provide more experimental evidence of the usefulness of their contribution. = Minor Comments + Page 6: *As NUTM requires fewer training samples to converge, it generalizes better to unseen sequences that are longer than training sequences.* - I don*t understand the connecting between the first and second part of the sentence. This seems pure speculation, not a fact.", "role": "assistant" } ]
[ "= Summary A variation of Neural Turing Machines (and derived models) storing the configuration of the controller in a separate memory, which is then *softly* read during evaluation of the NTM.", "Experiments show moderate improvements on some simple multi-task problems.", "= Strong/Weak Points + The idea of generalising NTMs to *universal* TMs is interesting in itself ... - ... however, the presented solution seems to be only half-way there, as the memory used for the *program* is still separate from the memory the NUTM operates on.", "Hence, modifying the program itself is not possible, which UTMs can do (even though it*s never useful in practice...) - The core novelty relative to standard NTMs is that in principle, several separate programs can be stored, and that at each timestep, the *correct* one can be read.", "However this read mechanism is weak, and requires extra tuning with a specialized loss (Eq.", "(6)) ~ It remains unclear where this is leading - clearly NTMs and NUTMs (or their DNC siblings) are currently not useful for interesting tasks, and it remains unclear what is missing to get there.", "The current paper does not try show the way there.", "- The writing is oddly inconsistent, and important technical details (such as the memory read/write mechanism) are not documented.", "I would prefer the paper to be self-contained, to make it easier to understand the differences and commonalities between NTM memory reads and the proposed NSM mechanism.", "= Recommendation Overall, I don*t see clear, actionable insights in this submission, and thus believe that it will not provide great value to the ICLR audience; hence I would recommend rejecting the paper to allow the authors to clarify their writing and provide more experimental evidence of the usefulness of their contribution.", "= Minor Comments + Page 6: *As NUTM requires fewer training samples to converge, it generalizes better to unseen sequences that are longer than training sequences.", "* - I don*t understand the connecting between the first and second part of the sentence.", "This seems pure speculation, not a fact." ]
{ "criticism": 8, "example": 2, "importance_and_relevance": 3, "materials_and_methods": 8, "praise": 1, "presentation_and_reporting": 5, "results_and_discussion": 0, "suggestion_and_solution": 3, "total": 13 }
0.615385
0.153846
0.230769
0.615385
0.076923
0.384615
0
0.230769
13
1
1
1
1
1
1
0
1
2.307692
2.307692
0
Measuring Compositional Generalization: A Comprehensive Method on Realistic Data
State-of-the-art machine learning methods exhibit limited compositional generalization. At the same time, there is a lack of realistic benchmarks that comprehensively measure this ability, which makes it challenging to find and evaluate improvements. We introduce a novel method to systematically construct such benchmarks by maximizing compound divergence while guaranteeing a small atom divergence between train and test sets, and we quantitatively compare this method to other approaches for creating compositional generalization benchmarks. We present a large and realistic natural language question answering dataset that is constructed according to this method, and we use it to analyze the compositional generalization ability of three machine learning architectures. We find that they fail to generalize compositionally and that there is a surprisingly strong negative correlation between compound divergence and accuracy. We also demonstrate how our method can be used to create new compositionality benchmarks on top of the existing SCAN dataset, which confirms these findings.
This paper introduces a method for generating training/test data for measuring the model*s ability of *compositional generalization* in complex compositional tasks/domains such as natural language understanding, visual understanding and other domains. The idea of the proposed method is to essentially keep the *atom* distribution unchanged between the training and test data, but maximize the divergence between the *compound* distributions between them. The authors have conducted a thorough and systematic experimentation, comparing the proposed approach with a number of other heuristic approaches for train test splits (such as random and input/output length, etc.) and using both a new large data set they generated (CFQ) and existing data set (SCAN). The experimental results verify that using their method they can obtain train test data sets with uniform atom distributions with large divergence in compound distributions, and they find that there is a surprisingly large negative correlation between the accuracy of existing state-of-the-art learning methods and the compound divergence. The data generation mechanism is systematic and involved, consisting of different categories of rules (logical form, grammar, rule application DAG*s, etc.) and it would seem that the generation method/system and the generated data would be useful as benchmark data for the community. The paper lacks technical novelty other than the training and test data generation approach, but having one available to the community with these apparently desirable characteristics as benchmark data for measuring complex, compositional generalization capabilities, and that could be invaluable to the research community.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMeasuring Compositional Generalization: A Comprehensive Method on Realistic Data\nState-of-the-art machine learning methods exhibit limited compositional generalization. At the same time, there is a lack of realistic benchmarks that comprehensively measure this ability, which makes it challenging to find and evaluate improvements. We introduce a novel method to systematically construct such benchmarks by maximizing compound divergence while guaranteeing a small atom divergence between train and test sets, and we quantitatively compare this method to other approaches for creating compositional generalization benchmarks. We present a large and realistic natural language question answering dataset that is constructed according to this method, and we use it to analyze the compositional generalization ability of three machine learning architectures. We find that they fail to generalize compositionally and that there is a surprisingly strong negative correlation between compound divergence and accuracy. We also demonstrate how our method can be used to create new compositionality benchmarks on top of the existing SCAN dataset, which confirms these findings.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper introduces a method for generating training/test data for measuring the model*s ability of *compositional generalization* in complex compositional tasks/domains such as natural language understanding, visual understanding and other domains. The idea of the proposed method is to essentially keep the *atom* distribution unchanged between the training and test data, but maximize the divergence between the *compound* distributions between them. The authors have conducted a thorough and systematic experimentation, comparing the proposed approach with a number of other heuristic approaches for train test splits (such as random and input/output length, etc.) and using both a new large data set they generated (CFQ) and existing data set (SCAN). 
The experimental results verify that using their method they can obtain train test data sets with uniform atom distributions with large divergence in compound distributions, and they find that there is a surprisingly large negative correlation between the accuracy of existing state-of-the-art learning methods and the compound divergence. The data generation mechanism is systematic and involved, consisting of different categories of rules (logical form, grammar, rule application DAG*s, etc.) and it would seem that the generation method/system and the generated data would be useful as benchmark data for the community. The paper lacks technical novelty other than the training and test data generation approach, but having one available to the community with these apparently desirable characteristics as benchmark data for measuring complex, compositional generalization capabilities, and that could be invaluable to the research community.", "role": "assistant" } ]
[ "This paper introduces a method for generating training/test data for measuring the model*s ability of *compositional generalization* in complex compositional tasks/domains such as natural language understanding, visual understanding and other domains.", "The idea of the proposed method is to essentially keep the *atom* distribution unchanged between the training and test data, but maximize the divergence between the *compound* distributions between them.", "The authors have conducted a thorough and systematic experimentation, comparing the proposed approach with a number of other heuristic approaches for train test splits (such as random and input/output length, etc.)", "and using both a new large data set they generated (CFQ) and existing data set (SCAN).", "The experimental results verify that using their method they can obtain train test data sets with uniform atom distributions with large divergence in compound distributions, and they find that there is a surprisingly large negative correlation between the accuracy of existing state-of-the-art learning methods and the compound divergence.", "The data generation mechanism is systematic and involved, consisting of different categories of rules (logical form, grammar, rule application DAG*s, etc.)", "and it would seem that the generation method/system and the generated data would be useful as benchmark data for the community.", "The paper lacks technical novelty other than the training and test data generation approach, but having one available to the community with these apparently desirable characteristics as benchmark data for measuring complex, compositional generalization capabilities, and that could be invaluable to the research community." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 8, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 2, "suggestion_and_solution": 1, "total": 8 }
0.125
0
0.25
1
0.25
0.125
0.25
0.125
8
1
0
1
1
1
1
1
1
2.125
1.730439
0.394561
Analysis of Video Feature Learning in Two-Stream CNNs on the Example of Zebrafish Swim Bout Classification
Semmelhack et al. (2014) have achieved high classification accuracy in distinguishing swim bouts of zebrafish using a Support Vector Machine (SVM). Convolutional Neural Networks (CNNs) have reached superior performance in various image recognition tasks over SVMs, but these powerful networks remain a black box. Reaching better transparency helps to build trust in their classifications and makes learned features interpretable to experts. Using a recently developed technique called Deep Taylor Decomposition, we generated heatmaps to highlight input regions of high relevance for predictions. We find that our CNN makes predictions by analyzing the steadiness of the tail*s trunk, which markedly differs from the manually extracted features used by Semmelhack et al. (2014). We further uncovered that the network paid attention to experimental artifacts. Removing these artifacts ensured the validity of predictions. After correction, our best CNN beats the SVM by 6.12%, achieving a classification accuracy of 96.32%. Our work thus demonstrates the utility of AI explainability for CNNs.
The paper presents a case study of training a video classifier using convolutional networks, and on how the learned features related to previous, hand-designed ones. The particular domain considered is of importance for biologists/medical doctors/neuroscientists: zebra fish swim bout classification. In order to identify which particular features the neural networks are paying attention to, the paper used Deep Taylor Decomposition, which allowed the authors to identify *clever-hans*-type phenomena (network attending to meaningless experimental setup differences that actually gave a way the ground truth classes). This allowed for the authors to mask out such features and make the network attend to more meaningful ones. In particular observations like * looking for salient features in the trunk of the tail while largely disregarding the tip*, are typically absent from most deep learning studies and it*s quite interesting. Overall the paper is well written, the experiments are well designed; everything seems very rigorous and well executed. It makes for a very good quality practitioner-level case study on video understanding, which may also be useful for people studying zebra fish or related simple life forms. My main concern with the paper is whether ICLR is an appropriate venue, as it does not provide pure machine learning contributions in the form of new techniques of generally applicable insights.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAnalysis of Video Feature Learning in Two-Stream CNNs on the Example of Zebrafish Swim Bout Classification\nSemmelhack et al. (2014) have achieved high classification accuracy in distinguishing swim bouts of zebrafish using a Support Vector Machine (SVM). Convolutional Neural Networks (CNNs) have reached superior performance in various image recognition tasks over SVMs, but these powerful networks remain a black box. Reaching better transparency helps to build trust in their classifications and makes learned features interpretable to experts. Using a recently developed technique called Deep Taylor Decomposition, we generated heatmaps to highlight input regions of high relevance for predictions. We find that our CNN makes predictions by analyzing the steadiness of the tail*s trunk, which markedly differs from the manually extracted features used by Semmelhack et al. (2014). We further uncovered that the network paid attention to experimental artifacts. Removing these artifacts ensured the validity of predictions. After correction, our best CNN beats the SVM by 6.12%, achieving a classification accuracy of 96.32%. Our work thus demonstrates the utility of AI explainability for CNNs.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper presents a case study of training a video classifier using convolutional networks, and on how the learned features related to previous, hand-designed ones. The particular domain considered is of importance for biologists/medical doctors/neuroscientists: zebra fish swim bout classification. In order to identify which particular features the neural networks are paying attention to, the paper used Deep Taylor Decomposition, which allowed the authors to identify *clever-hans*-type phenomena (network attending to meaningless experimental setup differences that actually gave a way the ground truth classes). This allowed for the authors to mask out such features and make the network attend to more meaningful ones. 
In particular observations like * looking for salient features in the trunk of the tail while largely disregarding the tip*, are typically absent from most deep learning studies and it*s quite interesting. Overall the paper is well written, the experiments are well designed; everything seems very rigorous and well executed. It makes for a very good quality practitioner-level case study on video understanding, which may also be useful for people studying zebra fish or related simple life forms. My main concern with the paper is whether ICLR is an appropriate venue, as it does not provide pure machine learning contributions in the form of new techniques of generally applicable insights.", "role": "assistant" } ]
[ "The paper presents a case study of training a video classifier using convolutional networks, and on how the learned features related to previous, hand-designed ones.", "The particular domain considered is of importance for biologists/medical doctors/neuroscientists: zebra fish swim bout classification.", "In order to identify which particular features the neural networks are paying attention to, the paper used Deep Taylor Decomposition, which allowed the authors to identify *clever-hans*-type phenomena (network attending to meaningless experimental setup differences that actually gave a way the ground truth classes).", "This allowed for the authors to mask out such features and make the network attend to more meaningful ones.", "In particular observations like * looking for salient features in the trunk of the tail while largely disregarding the tip*, are typically absent from most deep learning studies and it*s quite interesting.", "Overall the paper is well written, the experiments are well designed; everything seems very rigorous and well executed.", "It makes for a very good quality practitioner-level case study on video understanding, which may also be useful for people studying zebra fish or related simple life forms.", "My main concern with the paper is whether ICLR is an appropriate venue, as it does not provide pure machine learning contributions in the form of new techniques of generally applicable insights." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 4, "materials_and_methods": 5, "praise": 4, "presentation_and_reporting": 1, "results_and_discussion": 1, "suggestion_and_solution": 1, "total": 8 }
0.125
0
0.5
0.625
0.5
0.125
0.125
0.125
8
1
0
1
1
1
1
1
1
2.125
1.730439
0.394561
Watch, Try, Learn: Meta-Learning from Demonstrations and Rewards
Imitation learning allows agents to learn complex behaviors from demonstrations. However, learning a complex vision-based task may require an impractical number of demonstrations. Meta-imitation learning is a promising approach towards enabling agents to learn a new task from one or a few demonstrations by leveraging experience from learning similar tasks. In the presence of task ambiguity or unobserved dynamics, demonstrations alone may not provide enough information; an agent must also try the task to successfully infer a policy. In this work, we propose a method that can learn to learn from both demonstrations and trial-and-error experience with sparse reward feedback. In comparison to meta-imitation, this approach enables the agent to effectively and efficiently improve itself autonomously beyond the demonstration data. In comparison to meta-reinforcement learning, we can scale to substantially broader distributions of tasks, as the demonstration reduces the burden of exploration. Our experiments show that our method significantly outperforms prior approaches on a set of challenging, vision-based control tasks.
The paper proposes an approach for combining meta-imitation learning and learning from trial-and-error with sparse reward feedback. The paper is well-written and the experiments are convincing. I found the idea of having separate networks for the two phases of the algorithm (instead of recollecting on-policy trial trajectories) interesting and possibly applicable in other similar settings. The paper has a comprehensive analysis of the advantages of the method over other reasonable baselines which do not have the trial-and-error element. However, in terms of the limitation of the method, the paper only has a small failure analysis section in the appendix; adding a more detailed discussion on the limitations can improve the impact of the paper further.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nWatch, Try, Learn: Meta-Learning from Demonstrations and Rewards\nImitation learning allows agents to learn complex behaviors from demonstrations. However, learning a complex vision-based task may require an impractical number of demonstrations. Meta-imitation learning is a promising approach towards enabling agents to learn a new task from one or a few demonstrations by leveraging experience from learning similar tasks. In the presence of task ambiguity or unobserved dynamics, demonstrations alone may not provide enough information; an agent must also try the task to successfully infer a policy. In this work, we propose a method that can learn to learn from both demonstrations and trial-and-error experience with sparse reward feedback. In comparison to meta-imitation, this approach enables the agent to effectively and efficiently improve itself autonomously beyond the demonstration data. In comparison to meta-reinforcement learning, we can scale to substantially broader distributions of tasks, as the demonstration reduces the burden of exploration. Our experiments show that our method significantly outperforms prior approaches on a set of challenging, vision-based control tasks.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes an approach for combining meta-imitation learning and learning from trial-and-error with sparse reward feedback. The paper is well-written and the experiments are convincing. I found the idea of having separate networks for the two phases of the algorithm (instead of recollecting on-policy trial trajectories) interesting and possibly applicable in other similar settings. The paper has a comprehensive analysis of the advantages of the method over other reasonable baselines which do not have the trial-and-error element. However, in terms of the limitation of the method, the paper only has a small failure analysis section in the appendix; adding a more detailed discussion on the limitations can improve the impact of the paper further.", "role": "assistant" } ]
[ "The paper proposes an approach for combining meta-imitation learning and learning from trial-and-error with sparse reward feedback.", "The paper is well-written and the experiments are convincing.", "I found the idea of having separate networks for the two phases of the algorithm (instead of recollecting on-policy trial trajectories) interesting and possibly applicable in other similar settings.", "The paper has a comprehensive analysis of the advantages of the method over other reasonable baselines which do not have the trial-and-error element.", "However, in terms of the limitation of the method, the paper only has a small failure analysis section in the appendix; adding a more detailed discussion on the limitations can improve the impact of the paper further." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 5, "praise": 3, "presentation_and_reporting": 1, "results_and_discussion": 2, "suggestion_and_solution": 1, "total": 5 }
0
0
0
1
0.6
0.2
0.4
0.2
5
0
0
0
1
1
1
1
1
2.4
1.389924
1.010076
Accelerating SGD with momentum for over-parameterized learning
Nesterov SGD is widely used for training modern neural networks and other machine learning models. Yet, its advantages over SGD have not been theoretically clarified. Indeed, as we show in this paper, both theoretically and empirically, Nesterov SGD with any parameter selection does not in general provide acceleration over ordinary SGD. Furthermore, Nesterov SGD may diverge for step sizes that ensure convergence of ordinary SGD. This is in contrast to the classical results in the deterministic setting, where the same step size ensures accelerated convergence of Nesterov's method over optimal gradient descent. To address the non-acceleration issue, we introduce a compensation term to Nesterov SGD. The resulting algorithm, which we call MaSS, converges for the same step sizes as SGD. We prove that MaSS obtains an accelerated convergence rate over SGD for any mini-batch size in the linear setting. For full batch, the convergence rate of MaSS matches the well-known accelerated rate of Nesterov's method. We also analyze the practically important question of the dependence of the convergence rate and optimal hyper-parameters on the mini-batch size, demonstrating three distinct regimes: linear scaling, diminishing returns and saturation. Experimental evaluation of MaSS for several standard architectures of deep networks, including ResNet and convolutional networks, shows improved performance over SGD, Nesterov SGD and Adam.
The authors present a new first-order optimization method that adds a corrective term to Nesterov SGD. They demonstrate that this adjustment is necessary and sufficient to benefit from the faster convergence of Nesterov gradient descent in the stochastic case. In the full-batch (non-stochastic) setting, their algorithm boils down to the classical formulation of Nesterov GD. Their approach is justified by a well-conducted theoretical analysis and some empirical work on toy datasets. Positive points: - The approach is elegant and thoroughly justified. The convergence to Nesterov GD when the batch size increases is comforting. - The empirical evaluation, even if it is still preliminary and larger-scale experiments will have to be conducted before the method could be widely adopted, is suitable and convincing. - Some interesting observations regarding the convergence regimes (with respect to the batch size) are made. It would have been interesting to see how the results from Fig. 3 generalize to the non-convex problems considered in the paper. Possible improvements: - In H2, it is mentioned that the algorithm is restarted (the momentum is reset) when the learning rate is annealed. Was this also done for SGD+Nesterov? Also, I think it is an important implementation detail that should be mentioned outside of the appendix. - Adam didn’t get the same hyper-parameter tuning as MaSS did. It is a bit disappointing, as I think the superior performance (in generalization) of non-adaptive methods would still hold and the experiment would have been more convincing. Rate of convergence is also not reported for Adam in Fig. 5. I think this is definitely a good paper that should be accepted. I’m looking forward to seeing how it performs on non-toy models and whether the community adopts it.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAccelerating SGD with momentum for over-parameterized learning\nNesterov SGD is widely used for training modern neural networks and other machine learning models. Yet, its advantages over SGD have not been theoretically clarified. Indeed, as we show in this paper, both theoretically and empirically, Nesterov SGD with any parameter selection does not in general provide acceleration over ordinary SGD. Furthermore, Nesterov SGD may diverge for step sizes that ensure convergence of ordinary SGD. This is in contrast to the classical results in the deterministic setting, where the same step size ensures accelerated convergence of the Nesterov*s method over optimal gradient descent. To address the non-acceleration issue, we introduce a compensation term to Nesterov SGD. The resulting algorithm, which we call MaSS, converges for same step sizes as SGD. We prove that MaSS obtains an accelerated convergence rates over SGD for any mini-batch size in the linear setting. For full batch, the convergence rate of MaSS matches the well-known accelerated rate of the Nesterov*s method. We also analyze the practically important question of the dependence of the convergence rate and optimal hyper-parameters on the mini-batch size, demonstrating three distinct regimes: linear scaling, diminishing returns and saturation. Experimental evaluation of MaSS for several standard architectures of deep networks, including ResNet and convolutional networks, shows improved performance over SGD, Nesterov SGD and Adam.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors present a new first order optimization method that adds a corrective term to Nesterov SGD. They demonstrate that this adjustment is necessary and sufficient to benefit from the faster convergence of Nesterov gradient descent in the stochastic case. In the full-batch (non deterministic) setting, their algorithm boils down to the classical formulation of Nesterov GD. 
Their approach is justified by a well conducted theoretical analysis and some empirical work on toy datasets. Positive points: - The approach is elegant and thoroughly justified. The convergence to Nesterov GD when the batch size increase is comforting. - The empirical evaluation, even if it is still preliminary and larger scale experiments will have to be conducted before the method could be widely adopted, are suitable and convincing. - Some interesting observations regarding the convergence regimes (in respect to the batch size) are made. It would have been interesting to see how the results from fig3 generalize to the non convex problems considered in the paper. Possible improvements: - In H2, it is mentioned that the algorithm is restarted (the momentum is reset) when the learning rate is annealed. Was this also done for SGD+nesterov? Also, I think it is an important implementation detail that should be mentioned outside of the appendix - Adam didn’t get the same hyper-parameter tuning as MaSS did. It is a bit disappointing, as I think the superior performance (in generalization) of non-adaptive methods would still hold and the experiment would have been more convincing. Rate of convergence is also not reported for Adam in fig 5. I think this is definitely a good paper that should be accepted. I’m looking forward to see how it performs on non-toy models and if the community adopt it.", "role": "assistant" } ]
[ "The authors present a new first order optimization method that adds a corrective term to Nesterov SGD.", "They demonstrate that this adjustment is necessary and sufficient to benefit from the faster convergence of Nesterov gradient descent in the stochastic case.", "In the full-batch (non deterministic) setting, their algorithm boils down to the classical formulation of Nesterov GD.", "Their approach is justified by a well conducted theoretical analysis and some empirical work on toy datasets.", "Positive points: - The approach is elegant and thoroughly justified.", "The convergence to Nesterov GD when the batch size increase is comforting.", "- The empirical evaluation, even if it is still preliminary and larger scale experiments will have to be conducted before the method could be widely adopted, are suitable and convincing.", "- Some interesting observations regarding the convergence regimes (in respect to the batch size) are made.", "It would have been interesting to see how the results from fig3 generalize to the non convex problems considered in the paper.", "Possible improvements: - In H2, it is mentioned that the algorithm is restarted (the momentum is reset) when the learning rate is annealed.", "Was this also done for SGD+nesterov?", "Also, I think it is an important implementation detail that should be mentioned outside of the appendix - Adam didn’t get the same hyper-parameter tuning as MaSS did.", "It is a bit disappointing, as I think the superior performance (in generalization) of non-adaptive methods would still hold and the experiment would have been more convincing.", "Rate of convergence is also not reported for Adam in fig 5.", "I think this is definitely a good paper that should be accepted.", "I’m looking forward to see how it performs on non-toy models and if the community adopt it." ]
{ "criticism": 2, "example": 2, "importance_and_relevance": 3, "materials_and_methods": 13, "praise": 5, "presentation_and_reporting": 2, "results_and_discussion": 3, "suggestion_and_solution": 4, "total": 16 }
0.125
0.125
0.1875
0.8125
0.3125
0.125
0.1875
0.25
16
1
1
1
1
1
1
1
1
2.125
1.982958
0.142042
Disentangling Factors of Variations Using Few Labels
Learning disentangled representations is considered a cornerstone problem in representation learning. Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible and that existing inductive biases and unsupervised methods do not allow to consistently learn disentangled representations. However, in many practical settings, one might have access to a limited amount of supervision, for example through manual labeling of (some) factors of variation in a few training examples. In this paper, we investigate the impact of such supervision on state-of-the-art disentanglement methods and perform a large scale study, training over 52000 models under well-defined and reproducible experimental conditions. We observe that a small number of labeled examples (0.01--0.5% of the data set), with potentially imprecise and incomplete labels, is sufficient to perform model selection on state-of-the-art unsupervised models. Further, we investigate the benefit of incorporating supervision into the training process. Overall, we empirically validate that with little and imprecise supervision it is possible to reliably learn disentangled representations.
The paper presents evidence that even a tiny bit of supervision over the factors of variation in a dataset, presented in the form of semi-supervised training labels or used for unsupervised model selection, can result in models that learn disentangled representations. The authors perform a thorough sweep over multiple datasets, different model classes and ways to provide labeled information. Overall, this work is a well-executed and rigorous empirical study on the state of disentangled representation learning. I think the experimental protocol and trained models will prove extremely useful for future work and advocate for accepting this paper. Comments: 1) Would it be possible to use the few labeled factors of variation in a meta-learning setup rather than as a regularizer? 2) The paper provides high-level conclusions about the impact of having supervised model selection or semi-supervised learning in models in general, but doesn’t offer much discussion of their behavior under specific settings; i.e., it seems to be hard to pick a winner amongst the presented models. Some are better with 100 labeled examples but don’t scale as well as others when an order of magnitude more labeled data is available. It is certainly hard to discuss all the thousands of experimental observations, but the paper can benefit from some more fine-grained analysis. Minor: Figure 4 is hard to comprehend without a model-to-index mapping similar to Figure 3.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDisentangling Factors of Variations Using Few Labels\nLearning disentangled representations is considered a cornerstone problem in representation learning. Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible and that existing inductive biases and unsupervised methods do not allow to consistently learn disentangled representations. However, in many practical settings, one might have access to a limited amount of supervision, for example through manual labeling of (some) factors of variation in a few training examples. In this paper, we investigate the impact of such supervision on state-of-the-art disentanglement methods and perform a large scale study, training over 52000 models under well-defined and reproducible experimental conditions. We observe that a small number of labeled examples (0.01--0.5% of the data set), with potentially imprecise and incomplete labels, is sufficient to perform model selection on state-of-the-art unsupervised models. Further, we investigate the benefit of incorporating supervision into the training process. Overall, we empirically validate that with little and imprecise supervision it is possible to reliably learn disentangled representations.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper presents evidence that even a tiny bit of supervision over the factors of variation in a dataset presented in the form of semi-supervised training labels or for unsupervised model selection, can result in models that learn disentangled representations. The authors perform a thorough sweep over multiple datasets, different models classes and ways to provide labeled information. Overall, this work is a well executed and rigorous empirical study on the state of disentangled representation learning. I think experimental protocol and models trained models will prove extremely useful for future work and advocate for accepting this paper. 
Comments 1) Would it be possible to use the few labeled factors of variation in a meta-learning setup rather than as a regularizer? 2) The paper provides high level conclusions about the impact of having supervised model selection or semi-supervised learning in models in general, but doesn’t offer much discussion into their behavior under specific settings (i.e.) it seems to be hard to pick a winner amongst presented model. Some are better with 100 labeled examples but don’t scale as well as others when an order of magnitude more labeled data is available. It is certainly hard to discuss all the thousands of experimental observations, but the paper can benefit from some more fine-grained analysis. Minor Figure 4 is hard to comprehend without a model to index mapping similar to Figure 3", "role": "assistant" } ]
[ "The paper presents evidence that even a tiny bit of supervision over the factors of variation in a dataset presented in the form of semi-supervised training labels or for unsupervised model selection, can result in models that learn disentangled representations.", "The authors perform a thorough sweep over multiple datasets, different models classes and ways to provide labeled information.", "Overall, this work is a well executed and rigorous empirical study on the state of disentangled representation learning.", "I think experimental protocol and models trained models will prove extremely useful for future work and advocate for accepting this paper.", "Comments 1) Would it be possible to use the few labeled factors of variation in a meta-learning setup rather than as a regularizer?", "2) The paper provides high level conclusions about the impact of having supervised model selection or semi-supervised learning in models in general, but doesn’t offer much discussion into their behavior under specific settings (i.e.)", "it seems to be hard to pick a winner amongst presented model.", "Some are better with 100 labeled examples but don’t scale as well as others when an order of magnitude more labeled data is available.", "It is certainly hard to discuss all the thousands of experimental observations, but the paper can benefit from some more fine-grained analysis.", "Minor Figure 4 is hard to comprehend without a model to index mapping similar to Figure 3" ]
{ "criticism": 3, "example": 1, "importance_and_relevance": 2, "materials_and_methods": 8, "praise": 2, "presentation_and_reporting": 3, "results_and_discussion": 2, "suggestion_and_solution": 2, "total": 10 }
0.3
0.1
0.2
0.8
0.2
0.3
0.2
0.2
10
1
1
1
1
1
1
1
1
2.3
2.157958
0.142042
Disentangling Factors of Variations Using Few Labels
Learning disentangled representations is considered a cornerstone problem in representation learning. Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible and that existing inductive biases and unsupervised methods do not allow to consistently learn disentangled representations. However, in many practical settings, one might have access to a limited amount of supervision, for example through manual labeling of (some) factors of variation in a few training examples. In this paper, we investigate the impact of such supervision on state-of-the-art disentanglement methods and perform a large scale study, training over 52000 models under well-defined and reproducible experimental conditions. We observe that a small number of labeled examples (0.01--0.5% of the data set), with potentially imprecise and incomplete labels, is sufficient to perform model selection on state-of-the-art unsupervised models. Further, we investigate the benefit of incorporating supervision into the training process. Overall, we empirically validate that with little and imprecise supervision it is possible to reliably learn disentangled representations.
After rebuttal edit: No clarifications were made, so I keep my score as is. ------------------------------------------------------ Claims: Explicitly requiring a small number of labels allows for successful learning of disentangled features by either using them as a validation set for hyperparameter tuning, or using them as a supervised loss. Decision: Reject. This paper needs a substantial rewrite to make clear what specific contributions are from the multitude of experiments run in this study. As is, the two contributions stated in the introduction are both obvious and not particularly significant -- that having some labels of the type of disentanglement desired helps when used as a validation set and as a small number of labels for learning a disentangled representation space. There are no obviously stated conclusions about which types of labels are better than others (4.2). Section 3.2 seems to have some interesting findings that small-scale supervision can help significantly and fine-grained labeling is not necessarily needed, but I don't understand why that finding is presented there when Fig. 4 seems to perform a similar experiment on types of labels with no conclusion based on its results. The conclusion sentence of 4.3 is hard to decipher, but I assume it is just saying S^2/S beats U/S even when S^2/S is subject to noisy labels. Overall, I find it very difficult to absorb the huge amount of results and find the analysis not well presented.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDisentangling Factors of Variations Using Few Labels\nLearning disentangled representations is considered a cornerstone problem in representation learning. Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible and that existing inductive biases and unsupervised methods do not allow to consistently learn disentangled representations. However, in many practical settings, one might have access to a limited amount of supervision, for example through manual labeling of (some) factors of variation in a few training examples. In this paper, we investigate the impact of such supervision on state-of-the-art disentanglement methods and perform a large scale study, training over 52000 models under well-defined and reproducible experimental conditions. We observe that a small number of labeled examples (0.01--0.5% of the data set), with potentially imprecise and incomplete labels, is sufficient to perform model selection on state-of-the-art unsupervised models. Further, we investigate the benefit of incorporating supervision into the training process. Overall, we empirically validate that with little and imprecise supervision it is possible to reliably learn disentangled representations.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nAfter rebuttal edit: No clarifications were made, so I keep my score as is. ------------------------------------------------------ Claims: Explicitly requiring a small number of labels allows for successful learning of disentangled features by either using them as a validation set for hyper parameter tuning, or using them as a supervised loss. Decision: Reject. This paper needs a substantial rewrite to make clear what specific contributions are from the multitude of experiments run in this study. 
As is, the two contributions stated in the introduction are both obvious and not particularly significant -- that having some labels of the type of disentanglement desired helps when used as a validation set and as a small number of labels for learning a disentangled representation space. There are no obviously stated conclusions about which types of labels are better than others (4.2). Section 3.2 seems to have some interesting findings that small scale supervision can help significantly and fine-grained labeling is not necessarily needed, but I don*t understand why that finding is presented there when Fig. 4 seems to perform a similar experiment on types of labels with no conclusion based on its results. Conclusion sentence of 4.3 is hard to decipher, but I assume is just saying S^2/S beats U/S even when S^2/S is subject to noisy labels. Overall, I find it very difficult to absorb the huge amount of results and find the analysis not well presented.", "role": "assistant" } ]
[ "After rebuttal edit: No clarifications were made, so I keep my score as is.", "------------------------------------------------------ Claims: Explicitly requiring a small number of labels allows for successful learning of disentangled features by either using them as a validation set for hyper parameter tuning, or using them as a supervised loss.", "Decision: Reject.", "This paper needs a substantial rewrite to make clear what specific contributions are from the multitude of experiments run in this study.", "As is, the two contributions stated in the introduction are both obvious and not particularly significant -- that having some labels of the type of disentanglement desired helps when used as a validation set and as a small number of labels for learning a disentangled representation space.", "There are no obviously stated conclusions about which types of labels are better than others (4.2).", "Section 3.2 seems to have some interesting findings that small scale supervision can help significantly and fine-grained labeling is not necessarily needed, but I don*t understand why that finding is presented there when Fig.", "4 seems to perform a similar experiment on types of labels with no conclusion based on its results.", "Conclusion sentence of 4.3 is hard to decipher, but I assume is just saying S^2/S beats U/S even when S^2/S is subject to noisy labels.", "Overall, I find it very difficult to absorb the huge amount of results and find the analysis not well presented." ]
{ "criticism": 4, "example": 2, "importance_and_relevance": 2, "materials_and_methods": 6, "praise": 1, "presentation_and_reporting": 6, "results_and_discussion": 4, "suggestion_and_solution": 1, "total": 10 }
0.4
0.2
0.2
0.6
0.1
0.6
0.4
0.1
10
1
1
1
1
1
1
1
1
2.6
2.457958
0.142042
Input Complexity and Out-of-distribution Detection with Likelihood-based Generative Models
Likelihood-based generative models are a promising resource to detect out-of-distribution (OOD) inputs which could compromise the robustness or reliability of a machine learning system. However, likelihoods derived from such models have been shown to be problematic for detecting certain types of inputs that significantly differ from training data. In this paper, we pose that this problem is due to the excessive influence that input complexity has in generative models' likelihoods. We report a set of experiments supporting this hypothesis, and use an estimate of input complexity to derive an efficient and parameter-free OOD score, which can be seen as a likelihood-ratio test akin to Bayesian model comparison. We find such score to perform comparably to, or even better than, existing OOD detection approaches under a wide range of data sets, models, and complexity estimates.
This paper analyzes the peculiar case that deep generative models often assign a higher likelihood to datasets other than the one they were trained on. The running hypothesis here is that input complexity plays a central role. Measuring a proxy for input complexity shows that it is tightly anticorrelated with likelihood and therefore seems to describe the trend well. A new OOD detection score is introduced based on these insights. The failure of CIFAR-10 generative models to detect SVHN as OOD via thresholded likelihood is an interesting question. I also think that the conclusion of this work, that this behavior may be natural and a consequence of likelihood itself, seems very sensible. The overall message of the paper is very important and might help researchers avoid wasting time on improving likelihood models for OOD detection. Instead, it seems a more sensible way to deal with this problem is to consider likelihood ratios. One concern is the lack of a formal discussion of whether the likelihood is supposed to behave the way the authors describe, or whether a better model might solve this eventually. I think it is already mentioned that empirically there is no reason to believe a better model would solve this; it would just be nice to have a theoretical statement here as well. Another concern is that even though the empirical results look quite promising, it would be good to stress-test the proposed score on more common OOD detection benchmarks against state-of-the-art methods to see if likelihood generative models with the proposed criterion are competitive there. All in all, I think this is a great paper; additional theoretical analysis and stronger empirical results would be helpful to increase the significance of the work. -------------- Post Rebuttal -------------- I would like to thank the authors for their interesting response. However, I will not change my score as the response did not change my opinion on the need for a more theoretical treatment of the statement, which is a serious (and potentially hard to resolve) shortcoming of the paper in my opinion.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nInput Complexity and Out-of-distribution Detection with Likelihood-based Generative Models\nLikelihood-based generative models are a promising resource to detect out-of-distribution (OOD) inputs which could compromise the robustness or reliability of a machine learning system. However, likelihoods derived from such models have been shown to be problematic for detecting certain types of inputs that significantly differ from training data. In this paper, we pose that this problem is due to the excessive influence that input complexity has in generative models* likelihoods. We report a set of experiments supporting this hypothesis, and use an estimate of input complexity to derive an efficient and parameter-free OOD score, which can be seen as a likelihood-ratio test akin to Bayesian model comparison. We find such score to perform comparably to, or even better than, existing OOD detection approaches under a wide range of data sets, models, and complexity estimates.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper analyzes the peculiar case that deep generative models often assign a higher likelihood to other datasets than they were trained on. The running hypothesis here is, that input complexity plays a central role. Measuring a proxy for input complexity shows that it is tightly anticorrelated with likelihood and therefore seems to describe the trend well. A new OOD detection score is introduced based on these insights. The failure of Cifar-10 generative models to detect SVHN as OOD via thresholded likelihood is an interesting question. I also think that the conclusion of this work, that this behavior may be natural and a consequence of likelihood itself, seems very sensible. The overall message of the paper is very important and might help to avoid researchers wasting time on improving likelihood models for OOD detection. Instead, it seems a more sensible way to deal with this problem is to consider likelihood ratios. 
One concern is that a formal discussion if the likelihood is supposed to behave the way the authors describe, or if a better model might solve this eventually. I think it is already mentioned that empirically there is no reason to believe a better model would solve this, it would just be nice to have a theoretical statement here as well. Another concern is that even though the empirical results look quite promising, it would be good to stress-test the proposed score on more common OOD detection benchmarks against state-of-the-art methods to see if likelihood generative models with the proposed criterion are competitive there. All in all, I think this is a great paper, additional theoretical analysis and stronger empirical results would be helpful to increase the significance of the work. -------------- Post Rebuttal -------------- I would like to thank the authors for their interesting response. However, I will not change my score as the response did not change my opinion on the need for a more theoretical treatment of the statement, which is a serious (and potentially hard to resolve) shortcoming of the paper in my opinion.", "role": "assistant" } ]
[ "This paper analyzes the peculiar case that deep generative models often assign a higher likelihood to other datasets than they were trained on.", "The running hypothesis here is, that input complexity plays a central role.", "Measuring a proxy for input complexity shows that it is tightly anticorrelated with likelihood and therefore seems to describe the trend well.", "A new OOD detection score is introduced based on these insights.", "The failure of Cifar-10 generative models to detect SVHN as OOD via thresholded likelihood is an interesting question.", "I also think that the conclusion of this work, that this behavior may be natural and a consequence of likelihood itself, seems very sensible.", "The overall message of the paper is very important and might help to avoid researchers wasting time on improving likelihood models for OOD detection.", "Instead, it seems a more sensible way to deal with this problem is to consider likelihood ratios.", "One concern is that a formal discussion if the likelihood is supposed to behave the way the authors describe, or if a better model might solve this eventually.", "I think it is already mentioned that empirically there is no reason to believe a better model would solve this, it would just be nice to have a theoretical statement here as well.", "Another concern is that even though the empirical results look quite promising, it would be good to stress-test the proposed score on more common OOD detection benchmarks against state-of-the-art methods to see if likelihood generative models with the proposed criterion are competitive there.", "All in all, I think this is a great paper, additional theoretical analysis and stronger empirical results would be helpful to increase the significance of the work.", "-------------- Post Rebuttal -------------- I would like to thank the authors for their interesting response.", "However, I will not change my score as the response did not change my opinion on the need for a more theoretical treatment of the statement, which is a serious (and potentially hard to resolve) shortcoming of the paper in my opinion." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 6, "materials_and_methods": 11, "praise": 5, "presentation_and_reporting": 0, "results_and_discussion": 9, "suggestion_and_solution": 5, "total": 14 }
0.071429
0
0.428571
0.785714
0.357143
0
0.642857
0.357143
14
1
0
1
1
1
0
1
1
2.642857
2.627075
0.015782
Learning Self-Correctable Policies and Value Functions from Demonstrations with Negative Sampling
Imitation learning, followed by reinforcement learning algorithms, is a promising paradigm to solve complex control tasks sample-efficiently. However, learning from demonstrations often suffers from the covariate shift problem, which results in cascading errors of the learned policy. We introduce a notion of conservatively extrapolated value functions, which provably lead to policies with self-correction. We design an algorithm Value Iteration with Negative Sampling (VINS) that practically learns such value functions with conservative extrapolation. We show that VINS can correct mistakes of the behavioral cloning policy on simulated robotics benchmark tasks. We also propose the algorithm of using VINS to initialize a reinforcement learning algorithm, which is shown to outperform prior works in sample efficiency.
This work presents the value iteration with negative sampling (VINS) algorithm, a method for accelerating reinforcement learning using expert demonstrations. In addition to learning an expert policy through behavioral cloning, VINS learns an initial value function which is biased to assign smaller expected values to states not encountered during demonstrations. This is done by augmenting the demonstration data with states that have been randomly perturbed, and penalizing the value targets for these states by a factor proportional to their Euclidean distance to the original state. In addition to the policy and value function, VINS also learns a one-step dynamics model used to select actions against the learned value function. As the value function learned in VINS is only defined with respect to the current state, action values are estimated by sampling future states using the learned model, and computing the value of these sampled states. Empirical results on a set of robotic control tasks demonstrate that VINS requires significantly less interaction with the environment to learn a good policy than existing, state-of-the-art approaches given the same set of demonstrations. While the paper presents a novel and highly effective approach, there are some apparent limitations to the algorithm which should be highlighted, and there is room for improvement in the empirical evaluations. It is unclear that VINS would generalize well beyond robotic control domains. For one, its theoretical guarantees depend on the local reversibility of the dynamics, that is, for small deviations from the desired state, it is possible to return to that state in a single step. This isn't too significant a restriction, as the ability to recover quickly from small mistakes would seem to be necessary for any method to be able to provide similar guarantees about its behavior. The bigger issue is the use of the Euclidean metric (or any fixed metric) in the definition of conservative extrapolation. Basically, a state is said to be similar to the states observed during the demonstrations if it is close, under the Euclidean metric, to at least one demonstrated state. This is a reasonable approach in robotic control tasks, where Euclidean distance is a good measure of how similar two configurations are to one another, but it would seem to be unsuitable for domains where the observation space consists of images or other high-dimensional representations. In those cases, a useful notion of similarity would likely have to be learned from the data. In such domains, one might imagine that the conservative value function would simply learn to distinguish between real observations and those that have been perturbed by random noise, which would never be observed in the actual task. While experimental results demonstrate a very significant advantage for VINS both in terms of sample complexity and final performance, results are presented for only two tasks, "pick-and-place" and "push". While VINS outperforms the alternatives on these tasks, it is worth noting that its initial performance (without additional environment interaction) is not dramatically superior to pure behavioral cloning. It would be helpful to see how well VINS compares against the alternatives for a much smaller number of demonstrations, say 5-20, a regime where we would expect initial performance to be poor.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning Self-Correctable Policies and Value Functions from Demonstrations with Negative Sampling\nImitation learning, followed by reinforcement learning algorithms, is a promising paradigm to solve complex control tasks sample-efficiently. However, learning from demonstrations often suffers from the covariate shift problem, which results in cascading errors of the learned policy. We introduce a notion of conservatively extrapolated value functions, which provably lead to policies with self-correction. We design an algorithm Value Iteration with Negative Sampling (VINS) that practically learns such value functions with conservative extrapolation. We show that VINS can correct mistakes of the behavioral cloning policy on simulated robotics benchmark tasks. We also propose the algorithm of using VINS to initialize a reinforcement learning algorithm, which is shown to outperform prior works in sample efficiency.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis work presents the value iteration with negative sampling (VINS) algorithm, a method for accelerating reinforcement learning using expert demonstrations. In addition to learning an expert policy through behavioral cloning, VINS learns an initial value function which is biased to assign smaller expected values to states not encountered during demonstrations. This is done by augmenting the demonstration data with states that have been randomly perturbed, and penalizing the value targets for these states by a factor proportional to their Euclidean distance to the original state. In addition to the policy and value function, VINS also learns a one-step dynamics model used to select actions against the learned value function. As the value function learned in VINS is only defined with respect to the current state, action values are estimated by sampling future states using the learned model, and computing the value of these sampled states. 
Empirical results on a set of robotic control tasks demonstrate that VINS requires significantly less interaction with the environment to learn a good policy than existing, state of the art approaches given the same set of demonstrations. While the paper presents a novel and highly effective approach, there are some apparent limitations to the algorithms which should be highlighted, and there is room for improvement in the empirical evaluations. It is unclear that VINS would generalize well beyond robotic control domains. For one, its theoretical guarantees depend on the local reversibility of the dynamics, that is, for small deviations from the desired state, it is possible to return to that state in a single step. This isn*t too significant a restriction, as the ability to recover quickly from small mistakes would seem to be a necessary for any method to be able to provide similar guarantees about its behavior. The bigger issue is the use of the Euclidean metric (or any fixed metric) in the definition of conservative extrapolation. Basically, a state is said to be similar to the states observed during the demonstrations if it is close, under the Euclidean metric, to at least one demonstrated state. This is a reasonable approach in robotic control tasks, where Euclidean distance is a good measure of how similar two configurations are to one another, but it would seem to be unsuitable for domains where the observation space consists of images or other high-dimensional representations. In those cases, a useful notion of similarity would likely have to be learned from the data. In such domains, one might imagine that the conservative value function would simply learn to distinguish between real observations, and those that have been perturbed by random noise, which would never be observed in the actual task. While experimental results demonstrate a very significant advantage for VINS both in terms of sample complexity and final performance, results are presented for only two tasks, *pick-and-place* and *push*, while VINS outperforms the alternatives on these tasks, it is worth noting that its initial performance (without additional environment interact) is not dramatically superior to pure behavioral cloning. It would be helpful to see how well VINS compares against the alternatives for a much smaller number of demonstrations, say 5-20, a regime where we would expect initial performance to be poor.", "role": "assistant" } ]
[ "This work presents the value iteration with negative sampling (VINS) algorithm, a method for accelerating reinforcement learning using expert demonstrations.", "In addition to learning an expert policy through behavioral cloning, VINS learns an initial value function which is biased to assign smaller expected values to states not encountered during demonstrations.", "This is done by augmenting the demonstration data with states that have been randomly perturbed, and penalizing the value targets for these states by a factor proportional to their Euclidean distance to the original state.", "In addition to the policy and value function, VINS also learns a one-step dynamics model used to select actions against the learned value function.", "As the value function learned in VINS is only defined with respect to the current state, action values are estimated by sampling future states using the learned model, and computing the value of these sampled states.", "Empirical results on a set of robotic control tasks demonstrate that VINS requires significantly less interaction with the environment to learn a good policy than existing, state of the art approaches given the same set of demonstrations.", "While the paper presents a novel and highly effective approach, there are some apparent limitations to the algorithms which should be highlighted, and there is room for improvement in the empirical evaluations.", "It is unclear that VINS would generalize well beyond robotic control domains.", "For one, its theoretical guarantees depend on the local reversibility of the dynamics, that is, for small deviations from the desired state, it is possible to return to that state in a single step.", "This isn*t too significant a restriction, as the ability to recover quickly from small mistakes would seem to be a necessary for any method to be able to provide similar guarantees about its behavior.", "The bigger issue is the use of the Euclidean metric (or any fixed metric) in the definition of conservative extrapolation.", "Basically, a state is said to be similar to the states observed during the demonstrations if it is close, under the Euclidean metric, to at least one demonstrated state.", "This is a reasonable approach in robotic control tasks, where Euclidean distance is a good measure of how similar two configurations are to one another, but it would seem to be unsuitable for domains where the observation space consists of images or other high-dimensional representations.", "In those cases, a useful notion of similarity would likely have to be learned from the data.", "In such domains, one might imagine that the conservative value function would simply learn to distinguish between real observations, and those that have been perturbed by random noise, which would never be observed in the actual task.", "While experimental results demonstrate a very significant advantage for VINS both in terms of sample complexity and final performance, results are presented for only two tasks, *pick-and-place* and *push*, while VINS outperforms the alternatives on these tasks, it is worth noting that its initial performance (without additional environment interact) is not dramatically superior to pure behavioral cloning.", "It would be helpful to see how well VINS compares against the alternatives for a much smaller number of demonstrations, say 5-20, a regime where we would expect initial performance to be poor." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 16, "praise": 4, "presentation_and_reporting": 1, "results_and_discussion": 10, "suggestion_and_solution": 3, "total": 17 }
0.117647
0
0.117647
0.941176
0.235294
0.058824
0.588235
0.176471
17
1
0
1
1
1
1
1
1
2.235294
1.982775
0.252519
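The unlabeled numeric lines above appear to follow directly from the criteria-count record: each ratio equals a category count divided by the total, each 0/1 flag indicates whether that category count is nonzero, the first larger value equals the sum of the ratios, and the last value subtracted from it gives the middle one. A minimal sketch of that reconstruction is below, assuming those interpretations hold; the names used for the final three fields (a reward, a length-adjusted reward, and a length penalty) are guesses, not documented column names.

```python
# Sketch (not from the source): rebuilding the unlabeled numeric fields of a
# record from its criteria-count dict. Verified against the first record above:
# each ratio equals count/total, each flag equals count > 0, and 2.235294 is
# the sum of the ratios. "reward" and "length_penalty" are assumed names.
counts = {
    "criticism": 2, "example": 0, "importance_and_relevance": 2,
    "materials_and_methods": 16, "praise": 4,
    "presentation_and_reporting": 1, "results_and_discussion": 10,
    "suggestion_and_solution": 3, "total": 17,
}

total = counts["total"]
categories = [k for k in counts if k != "total"]

ratios = {k: round(counts[k] / total, 6) for k in categories}   # 0.117647, 0.0, ...
flags = {k: int(counts[k] > 0) for k in categories}             # 1, 0, 1, ...
reward = round(sum(counts[k] / total for k in categories), 6)   # 2.235294

length_penalty = 0.252519            # given in the record; derivation unknown
adjusted = round(reward - length_penalty, 6)                    # 1.982775

print(ratios, flags, reward, adjusted)
```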
Few-shot Text Classification with Distributional Signatures
In this paper, we explore meta-learning for few-shot text classification. Meta-learning has shown strong performance in computer vision, where low-level patterns are transferable across learning tasks. However, directly applying this approach to text is challenging–lexical features highly informative for one task maybe insignificant for another. Thus, rather than learning solely from words, our model also leverages their distributional signatures, which encode pertinent word occurrence patterns. Our model is trained within a meta-learning framework to map these signatures into attention scores, which are then used to weight the lexical representations of words. We demonstrate that our model consistently outperforms prototypical networks learned on lexical knowledge (Snell et al., 2017) in both few-shot text classification and relation classification by a significant margin across six benchmark datasets (19.96% on average in 1-shot classification).
UPDATE: Based on the extensive improvements by the authors, I have updated my rating. However, I still have doubts about the potential of this approach to reach practically useful levels of accuracy. This paper introduces a simple method to weight pretrained lexical features for use in meta-learning of few-shot text classification. The method boils down to weighting word input features, in the form of pretrained word-embeddings, by attention computed from inverse document frequency and class local mutual information. The idea is that this measure of feature informativeness transfers between tasks, whereas lexical features themselves are highly task-specific. The approach is well motivated and is empirically shown to outperform existing approaches to few-shot text classification by a significant margin. While the improvement over existing approaches is quite substantial, I believe the paper should not be accepted to ICLR for the following reasons. First, the contribution is quite limited and not particularly novel. While two weight functions are proposed, the majority of improvement comes simply from normalizing IDF with attention. Based on existing work on delexicalized features for NLP tasks such as parsing, this is quite a straightforward extension. Given the limited contribution, a short paper seems a better fit. Second, the approach simply trades variance for bias. This brings us to the question of how likely the approach is to be a building block in bringing us towards a practically useful few-shot classification method. Given the weak representational power of the model, I believe this is unlikely. I see a situation similar to syntactic parsing for low-resource languages, where a collection of simple techniques similar in spirit to the current approach, like delexicalization, brought results far above the naive baselines, but never approached practically useful results. I think this is a crucial point to address in meta-learning research in general to make sure we’re not just solving a toy problem with tailored heuristics. Additional notes: What is the motivation for using a BiLSTM to combine inverse document frequency and inverse class entropy? Is the sequence information at all useful, or would a simple projection and nonlinearity give the same result? The theoretical analysis is completely self-evident from the definition of the feature space. Replacing a feature with an equivalent feature of course gives the same result and I don’t see the need to “mathematize” this. The effect of approximating logistic regression with linear regression + calibration is not analyzed and it is not clear what the effect of this approximation is in the text classification scenario. I would suggest comparing to differentiating through a direct optimization of the logistic formulation, for example with Newton’s method, or plain SGD, as in Bertinetto et al. (2019). Table 1: Why not run the attention-based feature aggregator together with all algorithms? The main contribution is at the input representation level, and this should be applicable across algorithms. In fact, if we remove the BiLSTM, which seems to have a very small effect, the representation function does not contain learnable parameters in itself. Please provide the average across datasets in Table 1.
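A minimal sketch of the core weighting idea discussed above (score tokens by a transferable statistic such as inverse document frequency, turn the scores into softmax attention, and average the pretrained embeddings) follows; it is a simplified stand-in that omits the class-local statistics and the learned BiLSTM, and all names and numbers in it are illustrative.

```python
import math
import numpy as np

def idf_attention_representation(tokens, embeddings, doc_freq, num_docs):
    """Hypothetical sketch: score each token by inverse document frequency,
    convert the scores into softmax attention, and return the attention-weighted
    average of the pretrained word embeddings. The paper additionally uses
    class-local statistics and a small learned network, omitted here."""
    scores = np.array([math.log(num_docs / (1 + doc_freq.get(t, 0))) for t in tokens])
    attn = np.exp(scores - scores.max())
    attn = attn / attn.sum()
    vecs = np.stack([embeddings[t] for t in tokens])   # (len(tokens), dim)
    return attn @ vecs                                  # (dim,)

# toy usage with 4-dimensional embeddings and made-up document frequencies
emb = {"the": np.zeros(4), "aurora": np.ones(4)}
rep = idf_attention_representation(["the", "aurora"], emb,
                                   {"the": 90, "aurora": 2}, num_docs=100)
```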
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nFew-shot Text Classification with Distributional Signatures\nIn this paper, we explore meta-learning for few-shot text classification. Meta-learning has shown strong performance in computer vision, where low-level patterns are transferable across learning tasks. However, directly applying this approach to text is challenging–lexical features highly informative for one task maybe insignificant for another. Thus, rather than learning solely from words, our model also leverages their distributional signatures, which encode pertinent word occurrence patterns. Our model is trained within a meta-learning framework to map these signatures into attention scores, which are then used to weight the lexical representations of words. We demonstrate that our model consistently outperforms prototypical networks learned on lexical knowledge (Snell et al., 2017) in both few-shot text classification and relation classification by a significant margin across six benchmark datasets (19.96% on average in 1-shot classification).", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nUPDATE: Based on the extensive improvements by the authors, I have updated my rating. However, I still have doubts about the potential of this approach to reach practically useful levels of accuracy. This paper introduces a simple method to weight pretrained lexical features for use in meta learning of few-shot text classification. The method boils down to weighting word inut features, in the form of pretrained word-embeddings, by attention computed from inverse document frequency and class local mutual information. The idea is that this measure of feature informativeness transfers between tasks, whereas lexical features themselves are highly task-specific. The approach is well motivated and is empirically shown to outperform existing approaches to few-shot text classification with a significant margin. While the improvement over existing approaches is quite substantial, I believe the paper should not be accepted to ICLR for the following reasons. 
First, the contribution is quite limited and not particularly novel. While two weight functions are proposed, the majority of improvement comes simply from normalizing IDF with attention. Based on existing work on delexicalized features for NLP tasks such as parsing, this is quite a straightforward extension. Given the limited contribution, a short paper seems a better fit. Second, the approach simply trades variance for bias. This brings us to the question of how likely the approach is to be a building block in bringing us towards a pratically useful few-shot classification method. Given the weak representational power of the model, I believe this is unlikely. I see a situation similar to syntactic parsing for low-resource languages, where a collection of simple techniques similar in spirit to the current approach, like delexicalization, brought results far above the naive baselines, but never approached practically useful results. I think this is a crucial point to address in meta-learning research in general to make sure we’re not just solving a toy problem with tailored heuristics. Additional notes: What is the motivation for using a BiLSTM to combine inverse document frequency and inverse class entropy? Is the sequence information at all useful, or would a simple projection and nonlinearity give the same result? The theoretical analysis is completely self-evident from the definition of the feature space. Replaing a feature with an equivalent feature of course gives the same result and I don’t see the need to “mathematize” this. The effect of approximating logistic regression with linear regression + calibration is not analyzed and it is not clear what the effect of this approximation is in the text classification scenario. I would suggest to compare to differentiating through a direct optimization of the logistic formulation, for example with Newton’s method, or plain SGD, as in Bertinetto et al. (2019). Table 1. Why not run the attention-based feature aggregator together with all algorithms. The main contribution is at the input representation level, and this should be applicable across algorithms. In fact, if we remove the BiLSTM which seems to have a very small effect the representation function does not contain learnable parameters in itself. Please provide the average across datasets in Table 1.", "role": "assistant" } ]
[ "UPDATE: Based on the extensive improvements by the authors, I have updated my rating.", "However, I still have doubts about the potential of this approach to reach practically useful levels of accuracy.", "This paper introduces a simple method to weight pretrained lexical features for use in meta learning of few-shot text classification.", "The method boils down to weighting word inut features, in the form of pretrained word-embeddings, by attention computed from inverse document frequency and class local mutual information.", "The idea is that this measure of feature informativeness transfers between tasks, whereas lexical features themselves are highly task-specific.", "The approach is well motivated and is empirically shown to outperform existing approaches to few-shot text classification with a significant margin.", "While the improvement over existing approaches is quite substantial, I believe the paper should not be accepted to ICLR for the following reasons.", "First, the contribution is quite limited and not particularly novel.", "While two weight functions are proposed, the majority of improvement comes simply from normalizing IDF with attention.", "Based on existing work on delexicalized features for NLP tasks such as parsing, this is quite a straightforward extension.", "Given the limited contribution, a short paper seems a better fit.", "Second, the approach simply trades variance for bias.", "This brings us to the question of how likely the approach is to be a building block in bringing us towards a pratically useful few-shot classification method.", "Given the weak representational power of the model, I believe this is unlikely.", "I see a situation similar to syntactic parsing for low-resource languages, where a collection of simple techniques similar in spirit to the current approach, like delexicalization, brought results far above the naive baselines, but never approached practically useful results.", "I think this is a crucial point to address in meta-learning research in general to make sure we’re not just solving a toy problem with tailored heuristics.", "Additional notes: What is the motivation for using a BiLSTM to combine inverse document frequency and inverse class entropy?", "Is the sequence information at all useful, or would a simple projection and nonlinearity give the same result?", "The theoretical analysis is completely self-evident from the definition of the feature space.", "Replaing a feature with an equivalent feature of course gives the same result and I don’t see the need to “mathematize” this.", "The effect of approximating logistic regression with linear regression + calibration is not analyzed and it is not clear what the effect of this approximation is in the text classification scenario.", "I would suggest to compare to differentiating through a direct optimization of the logistic formulation, for example with Newton’s method, or plain SGD, as in Bertinetto et al.", "(2019).", "Table 1.", "Why not run the attention-based feature aggregator together with all algorithms.", "The main contribution is at the input representation level, and this should be applicable across algorithms.", "In fact, if we remove the BiLSTM which seems to have a very small effect the representation function does not contain learnable parameters in itself.", "Please provide the average across datasets in Table 1." ]
{ "criticism": 7, "example": 0, "importance_and_relevance": 7, "materials_and_methods": 22, "praise": 3, "presentation_and_reporting": 9, "results_and_discussion": 7, "suggestion_and_solution": 6, "total": 28 }
0.25
0
0.25
0.785714
0.107143
0.321429
0.25
0.214286
28
1
0
1
1
1
1
1
1
2.178571
-1.372475
3.551047
Escaping Saddle Points Faster with Stochastic Momentum
Stochastic gradient descent (SGD) with stochastic momentum is popular in nonconvex stochastic optimization and particularly for the training of deep neural networks. In standard SGD, parameters are updated by improving along the path of the gradient at the current iterate on a batch of examples, where the addition of a "momentum" term biases the update in the direction of the previous change in parameters. In non-stochastic convex optimization one can show that a momentum adjustment provably reduces convergence time in many settings, yet such results have been elusive in the stochastic and non-convex settings. At the same time, a widely-observed empirical phenomenon is that in training deep networks stochastic momentum appears to significantly improve convergence time, and variants of it have flourished in the development of other popular update methods, e.g. ADAM, AMSGrad, etc. Yet theoretical justification for the use of stochastic momentum has remained a significant open question. In this paper we propose an answer: stochastic momentum improves deep network training because it modifies SGD to escape saddle points faster and, consequently, to more quickly find a second-order stationary point. Our theoretical results also shed light on the related question of how to choose the ideal momentum parameter--our analysis suggests that it should be large (close to 1), which comports with empirical findings. We also provide experimental findings that further validate these conclusions.
This paper makes an interesting theoretical contribution; namely, that SGD with momentum (and with a slight modification to the step-size rule) is guaranteed to quickly converge to a second-order stationary point, implying it quickly escapes saddle points. SGD with momentum is widely used in the practice of deep learning, but a theoretical analysis has remained largely elusive. This paper sheds light on theoretical properties justifying its use for deep learning. Although the paper makes assumptions (e.g., twice differentiable, with smooth Hessian) that are not valid for the most widely-used deep learning models, the theoretical contributions of this paper should nonetheless be of interest to researchers in optimization for machine learning. I recommend it be accepted. The experiments reported in the paper, including those used to validate the required properties, are for small toy problems. This is reasonable given that the main contribution of the paper is theoretical. However, I would have given a higher rating if some further exploration of the validity of these properties was carried out for problems closer to those of interest to the broader ICLR community. Even if the assumptions aren't satisfied everywhere for typical deep networks, they may be satisfied at most points encountered during training, which would make the contribution even more compelling. This may also help to understand some of the limitations of this analysis. One other limitation seems to be that Theorem 1 requires using a step size which seems to be much smaller than what one may hope to use in practice. Can you comment on this?
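For reference, the stochastic heavy-ball update discussed above can be written as v ← βv + g, θ ← θ − ηv. The toy sketch below only fixes this notation; it does not reproduce the paper's modified step-size rule, and the learning rate and momentum values are illustrative defaults.

```python
def sgd_momentum_step(params, grad, velocity, lr=0.01, beta=0.9):
    """Generic SGD-with-momentum (heavy-ball) update, shown only to fix
    notation for the discussion above; the paper's analysis also modifies
    the step-size rule, which is not reproduced here."""
    velocity = [beta * v + g for v, g in zip(velocity, grad)]   # v <- beta*v + g
    params = [p - lr * v for p, v in zip(params, velocity)]     # theta <- theta - lr*v
    return params, velocity

# toy usage with a 2-parameter model
p, v = [0.5, -1.0], [0.0, 0.0]
p, v = sgd_momentum_step(p, [0.2, -0.3], v)
```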
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEscaping Saddle Points Faster with Stochastic Momentum\nStochastic gradient descent (SGD) with stochastic momentum is popular in nonconvex stochastic optimization and particularly for the training of deep neural networks. In standard SGD, parameters are updated by improving along the path of the gradient at the current iterate on a batch of examples, where the addition of a ``momentum** term biases the update in the direction of the previous change in parameters. In non-stochastic convex optimization one can show that a momentum adjustment provably reduces convergence time in many settings, yet such results have been elusive in the stochastic and non-convex settings. At the same time, a widely-observed empirical phenomenon is that in training deep networks stochastic momentum appears to significantly improve convergence time, variants of it have flourished in the development of other popular update methods, e.g. ADAM, AMSGrad, etc. Yet theoretical justification for the use of stochastic momentum has remained a significant open question. In this paper we propose an answer: stochastic momentum improves deep network training because it modifies SGD to escape saddle points faster and, consequently, to more quickly find a second order stationary point. Our theoretical results also shed light on the related question of how to choose the ideal momentum parameter--our analysis suggests that should be large (close to 1), which comports with empirical findings. We also provide experimental findings that further validate these conclusions.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper makes an interesting theoretical contribution; namely, that SGD with momentum (and with a slight modification to the step-size rule) is guaranteed to quickly converge to a second-order stationary point, implying it quickly escapes saddle points. SGD with momentum is widely used in the practice of deep learning, but a theoretical analysis has remained largely elusive. 
This paper sheds light theoretical properties justifying its use for deep learning. Although the paper makes assumptions (e.g., twice differentiable, with smooth Hessian) that are not valid for the most widely-used deep learning models, the theoretical contributions of this paper should nonetheless be of interest to researchers in optimization for machine learning. I recommend it be accepted. The experiments reported in the paper, including those used to validate the required properties, are for small toy problems. This is reasonable given that the main contribution of the paper is theoretical. However, I would have given a higher rating if some further exploration of the validity of these properties was carried out for problems closer to those of interest to the broader ICLR community. Even if the assumptions aren*t satisfied everywhere for typical deep networks, they may be satisfied at most points encountered during training, which would make the contribution even more compelling. This may also help to understand some of the limitations of this analysis. One other limitation seems to be that Theorem 1 requires using a step size which seems to be much smaller than what one may hope to use in practice. Can you comment on this?", "role": "assistant" } ]
[ "This paper makes an interesting theoretical contribution; namely, that SGD with momentum (and with a slight modification to the step-size rule) is guaranteed to quickly converge to a second-order stationary point, implying it quickly escapes saddle points.", "SGD with momentum is widely used in the practice of deep learning, but a theoretical analysis has remained largely elusive.", "This paper sheds light theoretical properties justifying its use for deep learning.", "Although the paper makes assumptions (e.g., twice differentiable, with smooth Hessian) that are not valid for the most widely-used deep learning models, the theoretical contributions of this paper should nonetheless be of interest to researchers in optimization for machine learning.", "I recommend it be accepted.", "The experiments reported in the paper, including those used to validate the required properties, are for small toy problems.", "This is reasonable given that the main contribution of the paper is theoretical.", "However, I would have given a higher rating if some further exploration of the validity of these properties was carried out for problems closer to those of interest to the broader ICLR community.", "Even if the assumptions aren*t satisfied everywhere for typical deep networks, they may be satisfied at most points encountered during training, which would make the contribution even more compelling.", "This may also help to understand some of the limitations of this analysis.", "One other limitation seems to be that Theorem 1 requires using a step size which seems to be much smaller than what one may hope to use in practice.", "Can you comment on this?" ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 5, "materials_and_methods": 7, "praise": 4, "presentation_and_reporting": 1, "results_and_discussion": 3, "suggestion_and_solution": 5, "total": 12 }
0.166667
0
0.416667
0.583333
0.333333
0.083333
0.25
0.416667
12
1
0
1
1
1
1
1
1
2.25
2.234218
0.015782
LEARNING EXECUTION THROUGH NEURAL CODE FUSION
As the performance of computer systems stagnates due to the end of Moore’s Law, there is a need for new models that can understand and optimize the execution of general purpose code. While there is a growing body of work on using Graph Neural Networks (GNNs) to learn static representations of source code, these representations do not understand how code executes at runtime. In this work, we propose a new approach using GNNs to learn fused representations of general source code and its execution. Our approach defines a multi-task GNN over low-level representations of source code and program state (i.e., assembly code and dynamic memory states), converting complex source code constructs and data structures into a simpler, more uniform format. We show that this leads to improved performance over similar methods that do not use execution and it opens the door to applying GNN models to new tasks that would not be feasible from static code alone. As an illustration of this, we apply the new model to challenging dynamic tasks (branch prediction and prefetching) from the SPEC CPU benchmark suite, outperforming the state-of-the-art by 26% and 45% respectively. Moreover, we use the learned fused graph embeddings to demonstrate transfer learning with high performance on an indirectly related algorithm classification task.
This paper presents a novel improvement in methodology for learning code execution (at the level of branch prediction and prefetching). They combine static program description with dynamic program state into one graph neural network, for the first time, to achieve significant performance gains on standard benchmarks. I would vote to accept this paper. They appear to have developed a new model structure and interface to the program information (i.e. inputs to the model), and the design decisions appear thoughtful, sensible, and well-justified (e.g. use of assembly code). The presentation is mostly clear, with a good balance of background material, method description, and experiment results. Taken at face value, the results are impressive, although I am not familiar enough with this field to assess the fairness of comparison against the baselines. For example, it's a little unclear what the difference is vs. previous baselines just from switching from source-code-as-input to assembly-code-as-input. The study on memory representations (categorical vs. scalar vs. binary) is a helpful component which adds its own value, and the context for popularity of the alternatives is described. Few implementation details are discussed, although the code is included in the submission and, after a quick glance, appears substantial.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLEARNING EXECUTION THROUGH NEURAL CODE FUSION\nAs the performance of computer systems stagnates due to the end of Moore’s Law, there is a need for new models that can understand and optimize the execution of general purpose code. While there is a growing body of work on using Graph Neural Networks (GNNs) to learn static representations of source code, these representations do not understand how code executes at runtime. In this work, we propose a new approach using GNNs to learn fused representations of general source code and its execution. Our approach defines a multi-task GNN over low-level representations of source code and program state (i.e., assembly code and dynamic memory states), converting complex source code constructs and data structures into a simpler, more uniform format. We show that this leads to improved performance over similar methods that do not use execution and it opens the door to applying GNN models to new tasks that would not be feasible from static code alone. As an illustration of this, we apply the new model to challenging dynamic tasks (branch prediction and prefetching) from the SPEC CPU benchmark suite, outperforming the state-of-the-art by 26% and 45% respectively. Moreover, we use the learned fused graph embeddings to demonstrate transfer learning with high performance on an indirectly related algorithm classification task.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents a novel improvement in methodology for learning code execution (at the level of branch-predictions and prefetching). They combine static program description with dynamic program state into one graph neural network, for the first time, to achieve significant performance gains on standard benchmarks. I would vote to accept this paper. They appear to have developed a new model structure and interface to the program information (i.e. inputs to the model), and the design decisions appear thoughtful, sensible, and well-justified (e.g. use of assembly code). 
The presentation is mostly clear, with a good balance of background material, method description, and experiment results. Taken at face value, the results are impressive, although I am not familiar enough with this field to assess the fairness of comparison against the baselines. For example, it*s a little unclear what the difference is vs previous baselines just from switching to source-code-as-input to assembly-code-as-input? The study on memory representations (categorical vs scalar vs binary) is a helpful component which adds its own value, and the context for popularity of the alternatives is described. Few details as to implementation are discussed, although the code is included in the submission, and after a quick glance appears substantial.", "role": "assistant" } ]
[ "This paper presents a novel improvement in methodology for learning code execution (at the level of branch-predictions and prefetching).", "They combine static program description with dynamic program state into one graph neural network, for the first time, to achieve significant performance gains on standard benchmarks.", "I would vote to accept this paper.", "They appear to have developed a new model structure and interface to the program information (i.e.", "inputs to the model), and the design decisions appear thoughtful, sensible, and well-justified (e.g.", "use of assembly code).", "The presentation is mostly clear, with a good balance of background material, method description, and experiment results.", "Taken at face value, the results are impressive, although I am not familiar enough with this field to assess the fairness of comparison against the baselines.", "For example, it*s a little unclear what the difference is vs previous baselines just from switching to source-code-as-input to assembly-code-as-input?", "The study on memory representations (categorical vs scalar vs binary) is a helpful component which adds its own value, and the context for popularity of the alternatives is described.", "Few details as to implementation are discussed, although the code is included in the submission, and after a quick glance appears substantial." ]
{ "criticism": 2, "example": 1, "importance_and_relevance": 1, "materials_and_methods": 10, "praise": 4, "presentation_and_reporting": 4, "results_and_discussion": 2, "suggestion_and_solution": 2, "total": 11 }
0.181818
0.090909
0.090909
0.909091
0.363636
0.363636
0.181818
0.181818
11
1
1
1
1
1
1
1
1
2.363636
2.300507
0.06313
I Am Going MAD: Maximum Discrepancy Competition for Comparing Classifiers Adaptively
The learning of hierarchical representations for image classification has experienced an impressive series of successes due in part to the availability of large-scale labeled data for training. On the other hand, the trained classifiers have traditionally been evaluated on a handful of test images, which are deemed to be extremely sparsely distributed in the space of all natural images. It is thus questionable whether recent performance improvements on the excessively re-used test sets generalize to real-world natural images with much richer content variations. In addition, studies on adversarial learning show that it is effortless to construct adversarial examples that fool nearly all image classifiers, adding more complications to relative performance comparison of existing models. This work presents an efficient framework for comparing image classifiers, which we name the MAximum Discrepancy (MAD) competition. Rather than comparing image classifiers on fixed test sets, we adaptively sample a test set from an arbitrarily large corpus of unlabeled images so as to maximize the discrepancies between the classifiers, measured by the distance over WordNet hierarchy. Human labeling on the resulting small and model-dependent image sets reveals the relative performance of the competing classifiers and provides useful insights on potential ways to improve them. We report the MAD competition results of eleven ImageNet classifiers while noting that the framework is readily extensible and cost-effective to add future classifiers into the competition.
The paper proposed a novel image classifier comparison approach that went beyond one fixed testing set for all. Instead, for a pair of classifiers to be compared, it advocated sampling their "most disagreed" test set from a large corpus of unlabeled images. The level of disagreement was measured by a semantic-aware distance derived from the WordNet ontology. Because of the efficacy of such "worst-case" comparison, the needed set size is very small and thus minimizes the human annotation workload. The proposed MAD competition distinguishes classifiers by finding their respective counterexamples. It is therefore an "error spotting" mechanism, rather than a drop-in replacement for standard test accuracy. I feel the approach implicitly assumes that the classifiers to be compared are already "reasonably accurate"; if not, both classifiers might be easily falsified by certain trivial examples, making the "disagreed examples" less meaningful. If that is true, I would suggest the authors make this hidden assumption clearer in the paper. The idea shows a clear connection to the "differential testing" concept in software engineering, besides the cited work on perceptual quality assessment. The idea has a cross-disciplinary nature and is fairly interesting to me. I can see the paper being of interest to quite a broad audience and motivating many subsequent works. One minor comment: for images in "Case III", the authors considered them to "contribute little to performance comparison between the two classifiers" and therefore did not source labels for them. However, since the authors adopted an affinity-aware distance, two incorrect predictions can still be compared based on their semantic tree distances to the true class.
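A minimal sketch of the selection step summarized above (rank unlabeled images by the semantic distance between the two classifiers' predictions and keep the most-disagreed ones for human labeling) is shown below; the distance function is a stand-in for the paper's WordNet-hierarchy distance, and all names are illustrative.

```python
def mad_select(unlabeled_images, model_a, model_b, semantic_distance, k=30):
    """Hypothetical sketch of MAD-style test-set selection: rank unlabeled
    images by how far apart the two classifiers' predicted labels are under a
    semantic (e.g. WordNet-based) distance, and keep the top-k most-disagreed
    images for human labeling. `semantic_distance` is a stand-in callable."""
    scored = []
    for image in unlabeled_images:
        label_a = model_a(image)
        label_b = model_b(image)
        scored.append((semantic_distance(label_a, label_b), image))
    scored.sort(key=lambda pair: pair[0], reverse=True)   # most disagreed first
    return [image for _, image in scored[:k]]

# toy usage with trivial stand-in models and a 0/1 disagreement "distance"
images = list(range(100))
picked = mad_select(images, lambda x: x % 2, lambda x: x % 3,
                    lambda a, b: float(a != b), k=5)
```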
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nI Am Going MAD: Maximum Discrepancy Competition for Comparing Classifiers Adaptively\nThe learning of hierarchical representations for image classification has experienced an impressive series of successes due in part to the availability of large-scale labeled data for training. On the other hand, the trained classifiers have traditionally been evaluated on a handful of test images, which are deemed to be extremely sparsely distributed in the space of all natural images. It is thus questionable whether recent performance improvements on the excessively re-used test sets generalize to real-world natural images with much richer content variations. In addition, studies on adversarial learning show that it is effortless to construct adversarial examples that fool nearly all image classifiers, adding more complications to relative performance comparison of existing models. This work presents an efficient framework for comparing image classifiers, which we name the MAximum Discrepancy (MAD) competition. Rather than comparing image classifiers on fixed test sets, we adaptively sample a test set from an arbitrarily large corpus of unlabeled images so as to maximize the discrepancies between the classifiers, measured by the distance over WordNet hierarchy. Human labeling on the resulting small and model-dependent image sets reveals the relative performance of the competing classifiers and provides useful insights on potential ways to improve them. We report the MAD competition results of eleven ImageNet classifiers while noting that the framework is readily extensible and cost-effective to add future classifiers into the competition.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposed a novel image classifier comparison approach that went beyond one fixed testing set for all. Instead, for a pair of classifiers to be compared, it advocated to sample their *most disagreed* test set from a large corpus of unlabeled images. 
The level of disagreement was measured by a semantic-aware distance derived from WordNet ontology. Because of the efficacy of such *worst-case* comparison, the needed set size is very small and thus minimizes the human annotation workload. The proposed MAD competition distinguishes classifiers by finding their respective counterexamples. It is therefore an *error spotting* mechanism, rather than a drop-in replacement of standard test accuracy. I feel the approach to implicitly assume that the classifiers to be compared are already *reasonably accurate*; since if not, both classifiers might be easily falsified by certain trivial examples, making the *disagreed examples* not as meaningful. If that is true, I would suggest the authors to make this hidden assumption clearer in the paper The idea shows clear liaison to the *differential testing* concept in software engineering besides the cited work of perceptual quality assessment. The idea has a cross-disciplinary nature and is fairly interesting to me. I can see the paper to be of interest to a quite broad audience and can motivate many subsequent works. One minor comment: for images in *Case III*, the authors considered them *contribute little to performance comparison between the two classifiers* and therefore did not source labels for them. However, since the authors adopted an affinity-aware distance, two incorrect predictions can still be compared based on their semantic tree distances to the true class.", "role": "assistant" } ]
[ "The paper proposed a novel image classifier comparison approach that went beyond one fixed testing set for all.", "Instead, for a pair of classifiers to be compared, it advocated to sample their *most disagreed* test set from a large corpus of unlabeled images.", "The level of disagreement was measured by a semantic-aware distance derived from WordNet ontology.", "Because of the efficacy of such *worst-case* comparison, the needed set size is very small and thus minimizes the human annotation workload.", "The proposed MAD competition distinguishes classifiers by finding their respective counterexamples.", "It is therefore an *error spotting* mechanism, rather than a drop-in replacement of standard test accuracy.", "I feel the approach to implicitly assume that the classifiers to be compared are already *reasonably accurate*; since if not, both classifiers might be easily falsified by certain trivial examples, making the *disagreed examples* not as meaningful.", "If that is true, I would suggest the authors to make this hidden assumption clearer in the paper The idea shows clear liaison to the *differential testing* concept in software engineering besides the cited work of perceptual quality assessment.", "The idea has a cross-disciplinary nature and is fairly interesting to me.", "I can see the paper to be of interest to a quite broad audience and can motivate many subsequent works.", "One minor comment: for images in *Case III*, the authors considered them *contribute little to performance comparison between the two classifiers* and therefore did not source labels for them.", "However, since the authors adopted an affinity-aware distance, two incorrect predictions can still be compared based on their semantic tree distances to the true class." ]
{ "criticism": 3, "example": 1, "importance_and_relevance": 2, "materials_and_methods": 9, "praise": 2, "presentation_and_reporting": 3, "results_and_discussion": 3, "suggestion_and_solution": 3, "total": 12 }
0.25
0.083333
0.166667
0.75
0.166667
0.25
0.25
0.25
12
1
1
1
1
1
1
1
1
2.166667
2.150884
0.015782
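A note for readers decoding the numeric fields: within each record, the ratio lines appear to be the per-category sentence counts divided by the sentence total, the 0/1 lines appear to flag whether each category occurs at all, and the final three values appear to be a summed reward, that reward minus a length penalty, and the length penalty itself. This is an inference from the records, not dataset documentation. The sketch below reproduces the first record's values under that assumption; the variable names are illustrative only and not part of the dataset schema.

```python
# Minimal sketch (assumption, not dataset documentation): how the numeric
# fields of the first record above appear to be derived from its counts.

counts = {  # per-category sentence counts, copied from the record
    "criticism": 3, "example": 1, "importance_and_relevance": 2,
    "materials_and_methods": 9, "praise": 2,
    "presentation_and_reporting": 3, "results_and_discussion": 3,
    "suggestion_and_solution": 3,
}
total = 12                 # number of annotated review sentences
length_penalty = 0.015782  # copied from the record; its own derivation is not shown here

ratios = {k: v / total for k, v in counts.items()}   # e.g. criticism -> 0.25
flags = {k: int(v > 0) for k, v in counts.items()}   # the trailing 0/1 indicator lines
reward = sum(ratios.values())                        # ~2.166667 for this record
reward_adjusted = reward - length_penalty            # ~2.150885, matching up to rounding

print(ratios, flags, round(reward, 6), round(reward_adjusted, 6))
```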
Deep Network classification by Scattering and Homotopy dictionary learning
We introduce a sparse scattering deep convolutional neural network, which provides a simple model to analyze properties of deep representation learning for classification. Learning a single dictionary matrix with a classifier yields a higher classification accuracy than AlexNet over the ImageNet ILSVRC2012 dataset. The network first applies a scattering transform which linearizes variabilities due to geometric transformations such as translations and small deformations. A sparse l1 dictionary coding reduces intra-class variability while preserving class separation through projections over unions of linear spaces. It is implemented in a deep convolutional network with a homotopy algorithm having an exponential convergence. A convergence proof is given in a general framework including ALISTA. Classification results are analyzed over ImageNet.
The paper proposes a network architecture composed of three interpretable components followed by a simple MLP classifier. It first applies a scattering transform followed by a learned linear projection (to reduce dimensionality). A sparse representation of these coefficients is then obtained using dictionary learning. The projection, dictionary and MLP classifier are jointly trained to minimize the classification loss. Results show that the model outperforms AlexNet on the ImageNet benchmark. The paper is well written. The main contribution of the work is to present an architecture composed of mathematically interpretable components that achieves very high empirical performance. I find these results very significant. The second contribution is to propose a dictionary learning algorithm that uses ISTC and can be trained with gradient descent. I think it would be interesting to add an ablation showing how performance changes with N (the number of iterations in the ISTC net). Also, it would make sense to run FISTA or ISTA unrolls to see if the benefits of the faster convergence also affect classification performance. It would be good to add to Table 1 the number of trainable parameters of each variant. I find it a bit confusing to refer to the setting in which W is learned as ALISTA, as to me ALISTA implies using an analytical W. This is clear later in the text (and makes sense from a computational standpoint), but it would be good to clarify it early on. Finally, the paper presents a proof of exponential convergence for ALISTA in the noisy case. While this is an interesting result, it is not very closely linked to the main focus of the work.
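For context on the ISTA/FISTA unrolls mentioned in this review, the snippet below is a minimal, generic ISTA loop for l1-regularized least squares. It is an illustrative sketch only and is not the paper's ISTC algorithm or its homotopy schedule; the problem sizes in the usage example are arbitrary.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (elementwise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=100):
    """Generic ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (illustrative only)."""
    step = 1.0 / (np.linalg.norm(A, 2) ** 2)   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):                    # an "unroll" fixes this iteration count
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, lam * step)
    return x

# Example usage on a random problem
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
b = rng.standard_normal(50)
x_hat = ista(A, b, lam=0.1)
```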
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDeep Network classification by Scattering and Homotopy dictionary learning\nWe introduce a sparse scattering deep convolutional neural network, which provides a simple model to analyze properties of deep representation learning for classification. Learning a single dictionary matrix with a classifier yields a higher classification accuracy than AlexNet over the ImageNet ILSVRC2012 dataset. The network first applies a scattering transform which linearizes variabilities due to geometric transformations such as translations and small deformations. A sparse l1 dictionary coding reduces intra-class variability while preserving class separation through projections over unions of linear spaces. It is implemented in a deep convolutional network with a homotopy algorithm having an exponential convergence. A convergence proof is given in a general framework including ALISTA. Classification results are analyzed over ImageNet.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes a network architecture composed of three interpretable components followed by a simple MLP classifier. It first applies a scattering transform followed by a learned linear projection (to reduce dimensionality). A sparse representation of these coefficients is then obtained using dictionary learning. The projection, dictionary and MLP classifier are jointly trained to minimize the classification loss. Results show that the model outperforms AlexNet on the Imagenet benchmark. The paper is well written. The main contribution of the work is to present an architecture with composed of mathematically interpretable components achieving very high empirical performance. I find that these results are very significant. The second contribution is to propose a dictionary learning algorithm that uses ISTC and can be trained with gradient descent. I think that it would be interesting to add an ablation showing the change in performance by changing N (the number of iterations in the ISTC net). 
Also, it would make sense to run FISTA or ISTA unrolls to see if the benefits of the faster convergence also affect classification performance. It would be good to add to Table 1 the number of trainable parameters of each variant. I find it a bit confusing to refer to the setting in which W is learned as ALISTA, as to me ALISTA implies using analytical W. This is clear later in the text (and makes sense from a computational standpoint). Would be good to clarify it early in the text. Finally the paper presents a proof of exponential convergence for ALISTA in the noisy case. While this is an interesting result, it is not very closely linked to the main focus of the work.", "role": "assistant" } ]
[ "The paper proposes a network architecture composed of three interpretable components followed by a simple MLP classifier.", "It first applies a scattering transform followed by a learned linear projection (to reduce dimensionality).", "A sparse representation of these coefficients is then obtained using dictionary learning.", "The projection, dictionary and MLP classifier are jointly trained to minimize the classification loss.", "Results show that the model outperforms AlexNet on the Imagenet benchmark.", "The paper is well written.", "The main contribution of the work is to present an architecture with composed of mathematically interpretable components achieving very high empirical performance.", "I find that these results are very significant.", "The second contribution is to propose a dictionary learning algorithm that uses ISTC and can be trained with gradient descent.", "I think that it would be interesting to add an ablation showing the change in performance by changing N (the number of iterations in the ISTC net).", "Also, it would make sense to run FISTA or ISTA unrolls to see if the benefits of the faster convergence also affect classification performance.", "It would be good to add to Table 1 the number of trainable parameters of each variant.", "I find it a bit confusing to refer to the setting in which W is learned as ALISTA, as to me ALISTA implies using analytical W. This is clear later in the text (and makes sense from a computational standpoint).", "Would be good to clarify it early in the text.", "Finally the paper presents a proof of exponential convergence for ALISTA in the noisy case.", "While this is an interesting result, it is not very closely linked to the main focus of the work." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 9, "praise": 4, "presentation_and_reporting": 8, "results_and_discussion": 6, "suggestion_and_solution": 4, "total": 16 }
0.125
0
0.1875
0.5625
0.25
0.5
0.375
0.25
16
1
0
1
1
1
1
1
1
2.25
2.107958
0.142042
Data-Independent Neural Pruning via Coresets
Previous work showed empirically that large neural networks can be significantly reduced in size while preserving their accuracy. Model compression became a central research topic, as it is crucial for deployment of neural networks on devices with limited computational and memory resources. The majority of the compression methods are based on heuristics and offer no worst-case guarantees on the trade-off between the compression rate and the approximation error for an arbitrarily new sample. We propose the first efficient, data-independent neural pruning algorithm with a provable trade-off between its compression rate and the approximation error for any future test sample. Our method is based on the coreset framework, which finds a small weighted subset of points that provably approximates the original inputs. Specifically, we approximate the output of a layer of neurons by a coreset of neurons in the previous layer and discard the rest. We apply this framework in a layer-by-layer fashion from the top to the bottom. Unlike previous works, our coreset is data independent, meaning that it provably guarantees the accuracy of the function for any input , including an adversarial one. We demonstrate the effectiveness of our method on popular network architectures. In particular, our coresets yield 90% compression of the LeNet-300-100 architecture on MNIST while improving the accuracy.
The authors proposed a model compression method based on the coreset framework. The goal of the paper is to reduce the number of neurons. The basic idea is to sample the neurons in each layer with probability equal to the neuron's maximum share among all the outputs to the next layer, and to update the weights associated with the remaining neurons. Another main contribution is that the authors provided a theoretical analysis to guarantee the accuracy vs. compression trade-off for all possible inputs. Pros: The proposed method is easy to understand and seems to make sense. The theoretical analysis seems strong. The experimental results on two datasets show that the proposed method achieved a high compression rate and an improvement in accuracy. Cons: Despite the theoretical guarantee, the value of the proposed method in the real world is not as clear. It would be better to test on more datasets and networks to verify the effectiveness of the proposed compression method, since it is claimed to be data-independent. Although the method achieved very good experimental results, its contribution to the high accuracy is unclear, since the networks were fine-tuned after compression. So how do we evaluate the accuracy vs. compression trade-off exactly when no such trade-off is shown in the experiments? Questions and suggestions: In Fig. 2, it seems that the performance of the proposed method and the percentile-based method should be close to each other, and the uniform sampling method should be worse than both. However, the results are the opposite. If the labeling in the figure is not incorrect, it would be good to add some analysis of this result. To address the second point in *Cons*, is it possible to show the accuracy of the compressed model without fine-tuning? Or still fine-tune the model but simply set u(q) = w(q) in line 8 of Algorithm 1?
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nData-Independent Neural Pruning via Coresets\nPrevious work showed empirically that large neural networks can be significantly reduced in size while preserving their accuracy. Model compression became a central research topic, as it is crucial for deployment of neural networks on devices with limited computational and memory resources. The majority of the compression methods are based on heuristics and offer no worst-case guarantees on the trade-off between the compression rate and the approximation error for an arbitrarily new sample. We propose the first efficient, data-independent neural pruning algorithm with a provable trade-off between its compression rate and the approximation error for any future test sample. Our method is based on the coreset framework, which finds a small weighted subset of points that provably approximates the original inputs. Specifically, we approximate the output of a layer of neurons by a coreset of neurons in the previous layer and discard the rest. We apply this framework in a layer-by-layer fashion from the top to the bottom. Unlike previous works, our coreset is data independent, meaning that it provably guarantees the accuracy of the function for any input , including an adversarial one. We demonstrate the effectiveness of our method on popular network architectures. In particular, our coresets yield 90% compression of the LeNet-300-100 architecture on MNIST while improving the accuracy.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors proposed a model compressing method based on coreset framework. The goal of the paper is to reduce the number of neurons. The basic idea is sampling the neurons on each layer with probability equal to the neuron*s max share among all the outputs to the next layer, and updating the weights associated with the remained neurons. Another main contribution is the authors provided theoretical analysis to guarantee the accuracy vs compression trade-off for all possible inputs. 
Pros: The proposed method is easy to understand and seems to make sense. The theoretical analysis seems strong. The experiment results on two datasets show the proposed method achieved high compression rate and improvement of accuracy. Cons: Despite the theoretical guarantee, it is not as clear on the value of the proposed method in real world. I would be better to test on more datasets and networks to verify the effectiveness of the proposed compressing method, as it claimed to be data-independent. Although the method achieved very good experiment results, its contribution to the high accuracy is unclear, since the networks were fine-tuned after the compression. So how do we exactly evaluate the accuracy vs compression trade-off when there is no such trade-off shown in the experiments? Questions and suggestions: In the Fig 2, it seems that the performances of the proposed method and the percentile-based method should be close to each other, and the uniform sampling method should be worse than them. However the results are opposite. If it was not incorrect labeling in the figure, it would be good to add some analyses about this result. To solve the second point in *Cons*, is it possible to show the accuracy of the compressed model without fine-tune? Or still fine-tune the model but simply set u(q) = w(q) in the line 8 algorithm 1?", "role": "assistant" } ]
[ "The authors proposed a model compressing method based on coreset framework.", "The goal of the paper is to reduce the number of neurons.", "The basic idea is sampling the neurons on each layer with probability equal to the neuron*s max share among all the outputs to the next layer, and updating the weights associated with the remained neurons.", "Another main contribution is the authors provided theoretical analysis to guarantee the accuracy vs compression trade-off for all possible inputs.", "Pros: The proposed method is easy to understand and seems to make sense.", "The theoretical analysis seems strong.", "The experiment results on two datasets show the proposed method achieved high compression rate and improvement of accuracy.", "Cons: Despite the theoretical guarantee, it is not as clear on the value of the proposed method in real world.", "I would be better to test on more datasets and networks to verify the effectiveness of the proposed compressing method, as it claimed to be data-independent.", "Although the method achieved very good experiment results, its contribution to the high accuracy is unclear, since the networks were fine-tuned after the compression.", "So how do we exactly evaluate the accuracy vs compression trade-off when there is no such trade-off shown in the experiments?", "Questions and suggestions: In the Fig 2, it seems that the performances of the proposed method and the percentile-based method should be close to each other, and the uniform sampling method should be worse than them.", "However the results are opposite.", "If it was not incorrect labeling in the figure, it would be good to add some analyses about this result.", "To solve the second point in *Cons*, is it possible to show the accuracy of the compressed model without fine-tune?", "Or still fine-tune the model but simply set u(q) = w(q) in the line 8 algorithm 1?" ]
{ "criticism": 3, "example": 1, "importance_and_relevance": 1, "materials_and_methods": 14, "praise": 3, "presentation_and_reporting": 4, "results_and_discussion": 8, "suggestion_and_solution": 4, "total": 16 }
0.1875
0.0625
0.0625
0.875
0.1875
0.25
0.5
0.25
16
1
1
1
1
1
1
1
1
2.375
2.232958
0.142042
Ensemble Distribution Distillation
Ensembles of models often yield improvements in system performance. These ensemble approaches have also been empirically shown to yield robust measures of uncertainty, and are capable of distinguishing between different forms of uncertainty. However, ensembles come at a computational and memory cost which may be prohibitive for many applications. There has been significant work done on the distillation of an ensemble into a single model. Such approaches decrease computational cost and allow a single model to achieve an accuracy comparable to that of an ensemble. However, information about the diversity of the ensemble, which can yield estimates of different forms of uncertainty, is lost. This work considers the novel task of Ensemble Distribution Distillation (EnD^2) - distilling the distribution of the predictions from an ensemble, rather than just the average prediction, into a single model. EnD^2 enables a single model to retain both the improved classification performance of ensemble distillation as well as information about the diversity of the ensemble, which is useful for uncertainty estimation. A solution for EnD^2 based on Prior Networks, a class of models which allow a single neural network to explicitly model a distribution over output distributions, is proposed in this work. The properties of EnD^2 are investigated on both an artificial dataset, and on the CIFAR-10, CIFAR-100 and TinyImageNet datasets, where it is shown that EnD^2 can approach the classification performance of an ensemble, and outperforms both standard DNNs and Ensemble Distillation on the tasks of misclassification and out-of-distribution input detection.
Summary: Ensembles of probabilistic models (e.g. for classification) provide measures of both data uncertainty (i.e. how uncertain each model is on average) and model uncertainty (i.e. how much the models disagree in their predictions). When naively distilling an ensemble into a single model, the ability to decompose total uncertainty into data uncertainty and model uncertainty is lost. This paper describes a method for ensemble distillation that retains both types of uncertainty in the case of classification. The idea is to train a prior network (i.e. a conditional Dirichlet distribution) to model the distribution of categorical probabilities within the ensemble. Both total uncertainty and data uncertainty can then be computed analytically from the prior network. Pros: The paper considers an interesting problem, and makes a clear contribution towards addressing it. The paper motivates the problem well, and explains the contribution and the method's limitations clearly. The proposed method is simple, but well motivated, sound, and well explained. The paper is very well written and easy to read. I particularly appreciated the toy experiment in section 4 and the visualization in figure 3, which showcase clearly what the method does. The experiments are thorough, and the results are discussed fairly and objectively. Cons: The scope of the paper and the method is limited to the problem of probabilistic classification. However, one could have a more general ensemble of conditional or unconditional distributions. The method could in principle be applied to this setting, by having a hypernetwork learn the distribution of the distributional parameters of the models in the ensemble (the method presented in the paper is a special case of this, where the hypernetwork is a prior network and the distributional parameters are categorical probabilities). However, it's not clear that the method would scale to models with arbitrary distributional parameters. I would suggest that the paper make it clear from the beginning that the scope is probabilistic classification, and at the end discuss the extent to which this is a limitation, and how the method could potentially be extended to other kinds of ensembles. The prior network used is essentially a conditional Dirichlet distribution, which (as the paper clearly acknowledges) may not always be flexible enough. A more flexible prior network could be a mixture of Dirichlet distributions, where the mixing coefficients and the parameters of each mixture component would be functions of the input, similarly to mixture density networks (http://publications.aston.ac.uk/id/eprint/373/) but with Dirichlets instead of Gaussians. I believe that equations (9) and (10) would still be tractable in that case, as it's tractable to compute expectations under mixtures if the expectations under the mixture components are tractable. One limitation of the approach is that the prior network may not give accurate predictions for inputs it hasn't been trained on (as the paper discusses and section 4 clearly demonstrates). It's not clear how this problem can be overcome in general, and further research may be needed in that direction. Some of the results in Table 4 are puzzling (as the paper also acknowledges). In particular, the EnD model should be able to retain the performance of the ensemble when using total uncertainty, but it doesn't.
Also, using knowledge uncertainty doesn't always seem to be better than using total uncertainty, which to some extent defeats the purpose of the method (at least in this particular example). It would be good to investigate these results further. In any case, I appreciate that the paper acknowledges these results but avoids unjustified speculation about what may be causing them. Decision: Overall, this is good work and I'm happy to recommend acceptance. There are some limitations to the method, but these can be seen as motivation for future work. Minor suggestions for improvement: This older work is relevant and could be cited (but there is no obligation to do so). On compressing ensembles: - Model compression, https://dl.acm.org/citation.cfm?id=1150464 - Compact approximations to Bayesian predictive distributions, https://dl.acm.org/citation.cfm?id=1102457 - Distilling model knowledge, https://arxiv.org/abs/1510.02437 On information-theoretic measures for decomposing total uncertainty as in eq. (4): - Decomposition of uncertainty in Bayesian deep learning for efficient and risk-sensitive learning, https://arxiv.org/abs/1710.07283 *[Knowledge uncertainty] arises when the test input comes from a different distribution than the one that generated the training data* This is only one way knowledge uncertainty can arise; it could also arise when the test input comes from the same distribution that generated the training data, but there aren't enough training data available. Some parentheses are dangling at the bottom of page 2, the top of page 3 and the middle of page 4. In figure 1, it'd be good to make clear in the caption that the figure is illustrating the simplex of categorical probabilities. The last paragraph of section 2 uses the term *posterior* repeatedly to refer to P(y|x, \theta), which is confusing. I would call P(y|x, \theta) the *model prediction* or something like that. Eq. (5) should be without the negative sign, I think. In section 3, I would use a different symbol (e.g. phi) to denote the parameters of the prior network, to clearly distinguish them from the parameters of the models in the ensemble. *Optimization of the KL-divergence between distributions with limited non-zero common support is particularly difficult* Some more evidence in support of this claim is needed, I think; either explain why or provide a reference that does. In section 4, the text says that EnD has *a minor degradation in performance*, but table 1 seems to show otherwise. Also, the results of EnD in table 1 and table 2 are different, which makes me think there may be a typo somewhere. Making the references have a consistent format and correct capitalization (e.g. DNNs, Bayesian) would make the paper look even more polished.
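For reference, the information-theoretic decomposition this review points to is usually written as below. Whether it matches the paper's eq. (4) exactly is an assumption; this is only the standard form, in which total uncertainty splits into knowledge (model) uncertainty, measured by mutual information, plus expected data uncertainty, with the expectation taken over the ensemble or posterior over parameters.

```latex
% Standard decomposition of predictive uncertainty (illustrative sketch only;
% the paper's own notation and equation numbering may differ).
\underbrace{\mathcal{H}\!\left[\mathbb{E}_{p(\theta \mid \mathcal{D})}\, p(y \mid x, \theta)\right]}_{\text{total uncertainty}}
= \underbrace{\mathcal{I}\!\left[y, \theta \mid x, \mathcal{D}\right]}_{\text{knowledge uncertainty}}
+ \underbrace{\mathbb{E}_{p(\theta \mid \mathcal{D})}\!\left[\mathcal{H}\!\left[p(y \mid x, \theta)\right]\right]}_{\text{expected data uncertainty}}
```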
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEnsemble Distribution Distillation\nEnsembles of models often yield improvements in system performance. These ensemble approaches have also been empirically shown to yield robust measures of uncertainty, and are capable of distinguishing between different forms of uncertainty. However, ensembles come at a computational and memory cost which may be prohibitive for many applications. There has been significant work done on the distillation of an ensemble into a single model. Such approaches decrease computational cost and allow a single model to achieve an accuracy comparable to that of an ensemble. However, information about the diversity of the ensemble, which can yield estimates of different forms of uncertainty, is lost. This work considers the novel task of Ensemble Distribution Distillation (EnD^2) - distilling the distribution of the predictions from an ensemble, rather than just the average prediction, into a single model. EnD^2 enables a single model to retain both the improved classification performance of ensemble distillation as well as information about the diversity of the ensemble, which is useful for uncertainty estimation. A solution for EnD^2 based on Prior Networks, a class of models which allow a single neural network to explicitly model a distribution over output distributions, is proposed in this work. The properties of EnD^2 are investigated on both an artificial dataset, and on the CIFAR-10, CIFAR-100 and TinyImageNet datasets, where it is shown that EnD^2 can approach the classification performance of an ensemble, and outperforms both standard DNNs and Ensemble Distillation on the tasks of misclassification and out-of-distribution input detection.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nSummary: Ensembles of probabilistic models (e.g. for classification) provide measures of both data uncertainty (i.e. how uncertain each model is on average) and model uncertainty (i.e. how much the models disagree in their predictions). 
When naively distilling an ensemble to a single model, the ability to decompose total uncertainty into data uncertainty and model uncertainty is lost. This paper describes a method for ensemble distillation that retains both types of uncertainty in the case of classification. The idea is to train a prior network (i.e. a conditional Dirichlet distribution) to model the distribution of categorical probabilities within the ensemble. Both total uncertainty and data uncertainty can then be computed analytically from the prior network. Pros: The paper considers an interesting problem, and makes a clear contribution towards addressing it. The paper motivates the problem well, and explains the contribution and the method*s limitations clearly. The proposed method is simple, but well motivated, sound, and well explained. The paper is very well written and easy to read. I particularly appreciated the toy experiment in section 4 and the visualization in figure 3, which showcase clearly what the method does. The experiments are thorough, and the results are discussed fairly and objectively. Cons: The scope of the paper and the method is limited to the problem of probabilistic classification. However, one could have a more general ensemble of conditional or unconditional distributions. The method could in principle be applied to this setting, by having a hypernetwork learn the distribution of the distributional parameters of the models in the ensemble (the method presented in the paper is a special case of this, where the hypernetwork is a prior network and the distributional parameters are categorical probabilities). However, it*s not clear that the method would scale to models with arbitrary distributional parameters. I would suggest that the paper make it clear from the beginning that the scope is probabilistic classification, and at the end discuss the extent to which this is a limitation, and how the method could potentially be extended to other kinds of ensembles. The prior network used is essentially a conditional Dirichlet distribution, which (as the paper clearly acknowledges) may not always be flexible enough. A more flexible prior network could be a mixture of Dirichlet distributions, where the mixing coefficients and the parameters of each mixture component would be functions of the input, similarly to mixture density networks (http://publications.aston.ac.uk/id/eprint/373/) but with Dirichets instead of Gaussians. I believe that equations (9) and (10) would still be tractable in that case, as it*s tractable to compute expectations under mixtures if the expectations under mixture components are tractable. One limitation of the approach is that the prior network may not give accurate predictions for inputs it hasn*t been trained on (as the paper discusses and section 4 clearly demonstrates). It*s not clear how this problem can be overcome in general, and further research may be needed in that direction. Some of the results in Table 4 are puzzling (as the paper also acknowledges). In particular, the EnD model should be able to retain the performance of the ensemble when using total uncertainty but it doesn*t. Also, using knowledge uncertainty doesn*t always seem to be better than using total uncertainty, which to some extent defeats the purpose of the method (at least in this particular example). It would be good to investigate further these results. In any case, I appreciate that the paper acknowledges these results, but avoids unjustified speculation about what may be causing them. 
Decision: Overall, this is good work and I*m happy to recommend acceptance. There are some limitations to the method, but these can be seen as motivation for future work. Minor suggestions for improvement: This older work is relevant and could be cited (but there is no obligation to do so). On compressing ensembles: - Model compression, https://dl.acm.org/citation.cfm?id=1150464 - Compact approximations to Bayesian predictive distributions, https://dl.acm.org/citation.cfm?id=1102457 - Distilling model knowledge, https://arxiv.org/abs/1510.02437 On information-theoretic measures for decomposing total uncertainty as in eq. (4): - Decomposition of uncertainty in Bayesian deep learning for efficient and risk-sensitive learning, https://arxiv.org/abs/1710.07283 *[Knowledge uncertainty] arises when the test input comes from a different distribution than the one that generated the training data* This is only one way knowledge uncertainty can arise, it could also arise when the test input comes from the same distribution that generated training data, but there aren*t enough training data available. Some parentheses are dangling at the bottom of page 2, top of page 3 and middle of page 4. In figure 1, it*d be good to make clear in the caption that the figure is illustrating the simplex of categorical probabilities. The last paragraph of section 2 uses the term *posterior* repeatedly to refer to P(y|x, \theta) which is confusing. I would call P(y|x, \theta) the *model prediction* or something like that. Eq. (5) should be without the negative sign I think. In section 3, I would use a different symbol (e.g. phi) to denote the parameters of the prior network, to clearly distinguish them from the parameters of the models in the ensemble. *Optimization of the KL-divergence between distributions with limited non-zero common support is particularly difficult* Some more evidence in support of this claim is needed I think; either explain why or provide a reference that does. In section 4, the text says that EnD has *a minor degradation in performance* but table 1 seems to show otherwise. Also, the results of EnD in table 1 and table 2 are different, which makes me think there may be a typo somewhere. Making the references have consistent format and correct capitalization (e.g. DNNs, Bayesian) would make the paper look even more polished.", "role": "assistant" } ]
[ "Summary: Ensembles of probabilistic models (e.g.", "for classification) provide measures of both data uncertainty (i.e.", "how uncertain each model is on average) and model uncertainty (i.e.", "how much the models disagree in their predictions).", "When naively distilling an ensemble to a single model, the ability to decompose total uncertainty into data uncertainty and model uncertainty is lost.", "This paper describes a method for ensemble distillation that retains both types of uncertainty in the case of classification.", "The idea is to train a prior network (i.e.", "a conditional Dirichlet distribution) to model the distribution of categorical probabilities within the ensemble.", "Both total uncertainty and data uncertainty can then be computed analytically from the prior network.", "Pros: The paper considers an interesting problem, and makes a clear contribution towards addressing it.", "The paper motivates the problem well, and explains the contribution and the method*s limitations clearly.", "The proposed method is simple, but well motivated, sound, and well explained.", "The paper is very well written and easy to read.", "I particularly appreciated the toy experiment in section 4 and the visualization in figure 3, which showcase clearly what the method does.", "The experiments are thorough, and the results are discussed fairly and objectively.", "Cons: The scope of the paper and the method is limited to the problem of probabilistic classification.", "However, one could have a more general ensemble of conditional or unconditional distributions.", "The method could in principle be applied to this setting, by having a hypernetwork learn the distribution of the distributional parameters of the models in the ensemble (the method presented in the paper is a special case of this, where the hypernetwork is a prior network and the distributional parameters are categorical probabilities).", "However, it*s not clear that the method would scale to models with arbitrary distributional parameters.", "I would suggest that the paper make it clear from the beginning that the scope is probabilistic classification, and at the end discuss the extent to which this is a limitation, and how the method could potentially be extended to other kinds of ensembles.", "The prior network used is essentially a conditional Dirichlet distribution, which (as the paper clearly acknowledges) may not always be flexible enough.", "A more flexible prior network could be a mixture of Dirichlet distributions, where the mixing coefficients and the parameters of each mixture component would be functions of the input, similarly to mixture density networks (http://publications.aston.ac.uk/id/eprint/373/) but with Dirichets instead of Gaussians.", "I believe that equations (9) and (10) would still be tractable in that case, as it*s tractable to compute expectations under mixtures if the expectations under mixture components are tractable.", "One limitation of the approach is that the prior network may not give accurate predictions for inputs it hasn*t been trained on (as the paper discusses and section 4 clearly demonstrates).", "It*s not clear how this problem can be overcome in general, and further research may be needed in that direction.", "Some of the results in Table 4 are puzzling (as the paper also acknowledges).", "In particular, the EnD model should be able to retain the performance of the ensemble when using total uncertainty but it doesn*t. 
Also, using knowledge uncertainty doesn*t always seem to be better than using total uncertainty, which to some extent defeats the purpose of the method (at least in this particular example).", "It would be good to investigate further these results.", "In any case, I appreciate that the paper acknowledges these results, but avoids unjustified speculation about what may be causing them.", "Decision: Overall, this is good work and I*m happy to recommend acceptance.", "There are some limitations to the method, but these can be seen as motivation for future work.", "Minor suggestions for improvement: This older work is relevant and could be cited (but there is no obligation to do so).", "On compressing ensembles: - Model compression, https://dl.acm.org/citation.cfm?id=1150464 - Compact approximations to Bayesian predictive distributions, https://dl.acm.org/citation.cfm?id=1102457 - Distilling model knowledge, https://arxiv.org/abs/1510.02437 On information-theoretic measures for decomposing total uncertainty as in eq.", "(4): - Decomposition of uncertainty in Bayesian deep learning for efficient and risk-sensitive learning, https://arxiv.org/abs/1710.07283 *[Knowledge uncertainty] arises when the test input comes from a different distribution than the one that generated the training data* This is only one way knowledge uncertainty can arise, it could also arise when the test input comes from the same distribution that generated training data, but there aren*t enough training data available.", "Some parentheses are dangling at the bottom of page 2, top of page 3 and middle of page 4.", "In figure 1, it*d be good to make clear in the caption that the figure is illustrating the simplex of categorical probabilities.", "The last paragraph of section 2 uses the term *posterior* repeatedly to refer to P(y|x, \theta) which is confusing.", "I would call P(y|x, \theta) the *model prediction* or something like that.", "Eq.", "(5) should be without the negative sign I think.", "In section 3, I would use a different symbol (e.g.", "phi) to denote the parameters of the prior network, to clearly distinguish them from the parameters of the models in the ensemble.", "*Optimization of the KL-divergence between distributions with limited non-zero common support is particularly difficult* Some more evidence in support of this claim is needed I think; either explain why or provide a reference that does.", "In section 4, the text says that EnD has *a minor degradation in performance* but table 1 seems to show otherwise.", "Also, the results of EnD in table 1 and table 2 are different, which makes me think there may be a typo somewhere.", "Making the references have consistent format and correct capitalization (e.g.", "DNNs, Bayesian) would make the paper look even more polished." ]
{ "criticism": 9, "example": 6, "importance_and_relevance": 6, "materials_and_methods": 29, "praise": 9, "presentation_and_reporting": 14, "results_and_discussion": 16, "suggestion_and_solution": 16, "total": 47 }
0.191489
0.12766
0.12766
0.617021
0.191489
0.297872
0.340426
0.340426
47
1
1
1
1
1
1
1
1
2.234043
-16.010446
18.244489
Counterfactuals uncover the modular structure of deep generative models
Deep generative models can emulate the perceptual properties of complex image datasets, providing a latent representation of the data. However, manipulating such representation to perform meaningful and controllable transformations in the data space remains challenging without some form of supervision. While previous work has focused on exploiting statistical independence to \textit{disentangle} latent factors, we argue that such requirement can be advantageously relaxed and propose instead a non-statistical framework that relies on identifying a modular organization of the network, based on counterfactual manipulations. Our experiments support that modularity between groups of channels is achieved to a certain degree on a variety of generative models. This allowed the design of targeted interventions on complex image datasets, opening the way to applications such as computationally efficient style transfer and the automated assessment of robustness to contextual changes in pattern recognition systems.
The ideas presented in the paper are interesting and original. While the theory presented has a lot of potential, the clarity of the paper could be greatly improved; in particular, I would have liked more of the formal theory to be included in the body of the paper instead of relying only on the appendix. This especially matters since the theoretical aspect is a key contribution of the paper and the experimental section remains on the light side (it presents mostly examples and lacks more extensive results). I find the introduction of the proposed definition of disentanglement in sections 2.2/2.3 confusing. The authors first define “extrinsic” disentanglement of a transformation in the data space as corresponding to a transformation of one dimension only in the latent space. In section 2.3, a transformation is called “intrinsically” disentangled if it corresponds to a transformation of a subset of variables in the space of endogenous variables. It should be made clearer from the start that disentanglement is here only a property of the transformations and that the authors are not trying to define a disentangled internal representation. Further, some important questions, such as how to choose the reference endogenous variables and how to choose the subset E, are left entirely to the experiments section. The proposed definition of disentanglement is however tied to these choices, and a quick discussion would be helpful. Whereas the theory from section 2 seems precise and formal (at least in the appendix, although I did not check all the proofs), the procedure to identify modules comes with no guarantees and relies on several choices: local averaging, thresholding, and the number of clusters (the choice of this last one is in my opinion well justified). Taking that into account, a more extensive experimental validation would be needed to demonstrate that modules can be reliably identified. The results presented on CelebA and ImageNet are interesting, and in particular using different models is a good idea; however, the evaluation relies mostly on a few cases or examples, and I would have liked to see more quantitative results, e.g. as in Figure 8 of Appendix F. Note on related work: It has been shown (Isolating Sources of Disentanglement in VAEs by Duvenaud et al., Disentangling by Factorising by Kim et al., Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations by Bachem et al.) that Beta-VAE is far from optimal for “extrinsic” disentanglement; the text in section 4.1 should take these results into account. It would also be interesting to contrast with the following paper: Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness by Bauer et al., which (whilst doing something quite different) also deals with causality and disentanglement.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nCounterfactuals uncover the modular structure of deep generative models\nDeep generative models can emulate the perceptual properties of complex image datasets, providing a latent representation of the data. However, manipulating such representation to perform meaningful and controllable transformations in the data space remains challenging without some form of supervision. While previous work has focused on exploiting statistical independence to \textit{disentangle} latent factors, we argue that such requirement can be advantageously relaxed and propose instead a non-statistical framework that relies on identifying a modular organization of the network, based on counterfactual manipulations. Our experiments support that modularity between groups of channels is achieved to a certain degree on a variety of generative models. This allowed the design of targeted interventions on complex image datasets, opening the way to applications such as computationally efficient style transfer and the automated assessment of robustness to contextual changes in pattern recognition systems.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe ideas presented in the paper are interesting and original. Whereas the theory presented has a lot of potential, it seems that the clarity of the paper could be greatly improved, in particular I would have liked more of the formal theory to be included in the body of the paper instead of relying only on the appendix. This especially matters since the theoretical aspect is a key contribution of the paper and the experimental section remains on the light side (it presents mostly examples and lacks more extensive results). I find the introduction of the proposed definition of disentanglement in sections 2.2/2.3 confusing. The authors first define “extrinsic” disentanglement of a transformation in the data space as corresponding to a transformation of one dimension only in the latent space. 
In section 2.3 a transformation is called “intrinsically” disentangled if it corresponds to a transformation of a subset of variables in the space of endogenous variables. It should be made clearer from the start that disentanglement is here only a property of the transformations and that the authors are not trying to define a disentangled internal representation. Further, some important questions like how to choose the reference endogenous variables and how to choose the subset E are left entirely to the experiments section. The definition of disentanglement proposed is however tied to these choices and a quick discussion would be helpful. Whereas the theory from section 2 seems precise and formal (at least in the appendix, although I did not check all the proofs), the procedure to identify modules comes with no guarantees and relies on several choices: local averaging, thresholding, nbr of clusters (the choice of this one is in my opinion well justified). Taking that into account a more extensive experimental validation would be needed to demonstrate that modules can be reliably identified. The results presented on CelebA and ImageNet are interesting, in particular using different models is a good idea, however the evaluation relies mostly on a few cases or examples and I would have liked to see more quantitative results, e.g. like in Figure 8 Appendix F. Note on related work: It has been shown (Isolating Sources of Disentanglement in VAEs by Duvenaud et al., Disentangling by Factorising by Kim et al., Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations by Bachem et al.) that Beta-VAE is far from optimal for “extrinsic” disentanglement, the text in section 4.1 should take these results into account. It would also be interesting to contrast with the following paper: Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness by Bauer et al. which (whilst doing something pretty different) also treats of causality and disentanglement.", "role": "assistant" } ]
[ "The ideas presented in the paper are interesting and original.", "Whereas the theory presented has a lot of potential, it seems that the clarity of the paper could be greatly improved, in particular I would have liked more of the formal theory to be included in the body of the paper instead of relying only on the appendix.", "This especially matters since the theoretical aspect is a key contribution of the paper and the experimental section remains on the light side (it presents mostly examples and lacks more extensive results).", "I find the introduction of the proposed definition of disentanglement in sections 2.2/2.3 confusing.", "The authors first define “extrinsic” disentanglement of a transformation in the data space as corresponding to a transformation of one dimension only in the latent space.", "In section 2.3 a transformation is called “intrinsically” disentangled if it corresponds to a transformation of a subset of variables in the space of endogenous variables.", "It should be made clearer from the start that disentanglement is here only a property of the transformations and that the authors are not trying to define a disentangled internal representation.", "Further, some important questions like how to choose the reference endogenous variables and how to choose the subset E are left entirely to the experiments section.", "The definition of disentanglement proposed is however tied to these choices and a quick discussion would be helpful.", "Whereas the theory from section 2 seems precise and formal (at least in the appendix, although I did not check all the proofs), the procedure to identify modules comes with no guarantees and relies on several choices: local averaging, thresholding, nbr of clusters (the choice of this one is in my opinion well justified).", "Taking that into account a more extensive experimental validation would be needed to demonstrate that modules can be reliably identified.", "The results presented on CelebA and ImageNet are interesting, in particular using different models is a good idea, however the evaluation relies mostly on a few cases or examples and I would have liked to see more quantitative results, e.g.", "like in Figure 8 Appendix F. Note on related work: It has been shown (Isolating Sources of Disentanglement in VAEs by Duvenaud et al., Disentangling by Factorising by Kim et al., Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations by Bachem et al.)", "that Beta-VAE is far from optimal for “extrinsic” disentanglement, the text in section 4.1 should take these results into account.", "It would also be interesting to contrast with the following paper: Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness by Bauer et al.", "which (whilst doing something pretty different) also treats of causality and disentanglement." ]
{ "criticism": 3, "example": 1, "importance_and_relevance": 4, "materials_and_methods": 7, "praise": 5, "presentation_and_reporting": 3, "results_and_discussion": 4, "suggestion_and_solution": 7, "total": 16 }
0.1875
0.0625
0.25
0.4375
0.3125
0.1875
0.25
0.4375
16
1
1
1
1
1
1
1
1
2.125
1.982958
0.142042
A Constructive Prediction of the Generalization Error Across Scales
The dependency of the generalization error of neural networks on model and dataset size is of critical importance both in practice and for understanding the theory of neural networks. Nevertheless, the functional form of this dependency remains elusive. In this work, we present a functional form which approximates well the generalization error in practice. Capitalizing on the successful concept of model scaling (e.g., width, depth), we are able to simultaneously construct such a form and specify the exact models which can attain it across model/data scales. Our construction follows insights obtained from observations conducted over a range of model/data scales, in various model types and datasets, in vision and language tasks. We show that the form both fits the observations well across scales, and provides accurate predictions from small- to large-scale models and data.
This work proposes a functional form for the relationship between <dataset size, model size> and generalization error, and performs an empirical study to validate it. First, it states 5 criteria that such a functional form must satisfy, and proposes one such functional form containing 6 free coefficients that satisfies all these criteria. It then performs a rigorous empirical study consisting of 6 image datasets and 3 text datasets, each with 2 distinct architectures defined at several model scales, and trained with different dataset sizes. This process produces 42-49 data points for each <dataset, architecture> pair, and the 6 coefficients of the proposed functional form are fit to those data points, with < 2% mean deviation in accuracy. It then studies how this functional form performs at extrapolation, and finds that it still performs pretty well, with ~4.5% mean deviation in accuracy, but with additional caveats. Decision: Accept. This paper states 5 necessary criteria for any functional form of a generalization error predictor that jointly considers dataset size and model size, then empirically verifies it with multiple datasets and architectures. These criteria are well justified, and can be used by others to narrow down the search for functions that approximate the generalization error of NNs without access to the true data distribution, which is a significant contribution. The empirical study is carefully done (e.g., taking care to subsample the dataset in a way that preserves the class distribution). I also liked that the paper is candid about its own limitations. A weakness that one might perceive is that the coefficients of the proposed functional form still need to be fit to 40-ish trained NNs for every dataset and set of training hyperparameters, but I do not think this should be held against this work, because a generalization error predictor (let alone its functional form) that works for multiple datasets and architectures without training is difficult, and the paper does include several proposals for how this can still be used in practice. (Caveat: the use of the envelope function described in equation 5 (page 6) is not something I am familiar with, but seems reasonable.) Issues to address: - Fitting 6 parameters to 42-49 data points raises concerns about overfitting. Consider doing cross validation over those 42-49 data points, and report the mean of deviations computed on the test folds. The extrapolation section did provide evidence that there probably isn't /that/ much overfitting, but cross validation would directly address this concern. - In addition, the paper provides the standard deviation for the mean deviations over 100 fits of the function as the measure of its uncertainty, but I suspect that the optimizer converging to different coefficients at different runs isn't the main source of uncertainty. A bigger source of uncertainty is likely due to there being a limited amount of data to fit the coefficients to. Taking the standard deviation over the deviations measured on different folds of the data would be a better measure of uncertainty. Minor issues: - Page 8: 'differntiable methods for NAS.' differentiable is misspelled.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nA Constructive Prediction of the Generalization Error Across Scales\nThe dependency of the generalization error of neural networks on model and dataset size is of critical importance both in practice and for understanding the theory of neural networks. Nevertheless, the functional form of this dependency remains elusive. In this work, we present a functional form which approximates well the generalization error in practice. Capitalizing on the successful concept of model scaling (e.g., width, depth), we are able to simultaneously construct such a form and specify the exact models which can attain it across model/data scales. Our construction follows insights obtained from observations conducted over a range of model/data scales, in various model types and datasets, in vision and language tasks. We show that the form both fits the observations well across scales, and provides accurate predictions from small- to large-scale models and data.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis work proposes a functional form for the relationship between <dataset size, model size> and generalization error, and performs an empirical study to validate it. First, it states 5 criteria that such a functional form must take, and proposes one such functional form containing 6 free coefficients that satisfy all these criteria. It then performs a rigorous empirical study consisting of 6 image datasets and 3 text datasets, each with 2 distinct architectures defined at several model scales, and trained with different dataset sizes. This process produces 42-49 data points for each <dataset, architecture> pair, and the 6 coefficients of the proposed functional form are fit to those data points, with < 2% mean deviation in accuracy. It then studies how this functional form performs at extrapolation, and finds that it still performs pretty well, with ~4.5% mean deviation in accuracy, but with additional caveats. Decision: Accept. 
This paper states 5 necessary criteria for any functional form for generalization error predictor that jointly considers dataset size and model size, then empirically verifies it with multiple datasets and architectures. These criteria are well justified, and can be used by others to narrow down the search for functions that approximate the generalization error of NNs without access to the true data distribution, which is a significant contribution. The empirical study is carefully done (e.g., taking care to subsample the dataset in a way that preserves the class distribution). I also liked that the paper is candid about its own limitations. A weakness that one might perceive is that the coefficients of the proposed functional form still needs to be fit to 40-ish trained NNs for every dataset and training hyperparameters, but I do not think this should be held against this work, because a generalization error predictor (let alone its functional form) that works for multiple datasets and architecture without training is difficult, and the paper does include several proposals for how this can still be used in practice. (Caveat: the use of the envelope function described in equation 5 (page 6) is not something I am familiar with, but seems reasonable.) Issues to address: - Fitting 6 parameters to 42-49 data points raises concerns about overfitting. Consider doing cross validation over those 42-49 data points, and report the mean of deviations computed on the test folds. The extrapolation section did provide evidence that there probably isn*t /that/ much overfitting, but cross validation would directly address this concern. - In addition, the paper provides the standard deviation for the mean deviations over 100 fits of the function as the measure of its uncertainty, but I suspect that the optimizer converging to different coefficients at different runs isn*t the main source of uncertainty. A bigger source of uncertainty is likely due to there being a limited amount of data to fit the coefficients to. Taking the standard deviation over the deviations measured on different folds of the data would be better measure of uncertainty. Minor issues: - Page 8: *differntiable methods for NAS.* differentiable is misspelled.", "role": "assistant" } ]
[ "This work proposes a functional form for the relationship between <dataset size, model size> and generalization error, and performs an empirical study to validate it.", "First, it states 5 criteria that such a functional form must take, and proposes one such functional form containing 6 free coefficients that satisfy all these criteria.", "It then performs a rigorous empirical study consisting of 6 image datasets and 3 text datasets, each with 2 distinct architectures defined at several model scales, and trained with different dataset sizes.", "This process produces 42-49 data points for each <dataset, architecture> pair, and the 6 coefficients of the proposed functional form are fit to those data points, with < 2% mean deviation in accuracy.", "It then studies how this functional form performs at extrapolation, and finds that it still performs pretty well, with ~4.5% mean deviation in accuracy, but with additional caveats.", "Decision: Accept.", "This paper states 5 necessary criteria for any functional form for generalization error predictor that jointly considers dataset size and model size, then empirically verifies it with multiple datasets and architectures.", "These criteria are well justified, and can be used by others to narrow down the search for functions that approximate the generalization error of NNs without access to the true data distribution, which is a significant contribution.", "The empirical study is carefully done (e.g., taking care to subsample the dataset in a way that preserves the class distribution).", "I also liked that the paper is candid about its own limitations.", "A weakness that one might perceive is that the coefficients of the proposed functional form still needs to be fit to 40-ish trained NNs for every dataset and training hyperparameters, but I do not think this should be held against this work, because a generalization error predictor (let alone its functional form) that works for multiple datasets and architecture without training is difficult, and the paper does include several proposals for how this can still be used in practice.", "(Caveat: the use of the envelope function described in equation 5 (page 6) is not something I am familiar with, but seems reasonable.)", "Issues to address: - Fitting 6 parameters to 42-49 data points raises concerns about overfitting.", "Consider doing cross validation over those 42-49 data points, and report the mean of deviations computed on the test folds.", "The extrapolation section did provide evidence that there probably isn*t /that/ much overfitting, but cross validation would directly address this concern.", "- In addition, the paper provides the standard deviation for the mean deviations over 100 fits of the function as the measure of its uncertainty, but I suspect that the optimizer converging to different coefficients at different runs isn*t the main source of uncertainty.", "A bigger source of uncertainty is likely due to there being a limited amount of data to fit the coefficients to.", "Taking the standard deviation over the deviations measured on different folds of the data would be better measure of uncertainty.", "Minor issues: - Page 8: *differntiable methods for NAS.", "* differentiable is misspelled." ]
{ "criticism": 6, "example": 2, "importance_and_relevance": 1, "materials_and_methods": 17, "praise": 3, "presentation_and_reporting": 3, "results_and_discussion": 7, "suggestion_and_solution": 5, "total": 20 }
0.3
0.1
0.05
0.85
0.15
0.15
0.35
0.25
20
1
1
1
1
1
1
1
1
2.2
1.426661
0.773339
NAS evaluation is frustratingly hard
Neural Architecture Search (NAS) is an exciting new field which promises to be as much as a game-changer as Convolutional Neural Networks were in 2012. Despite many great works leading to substantial improvements on a variety of tasks, comparison between different methods is still very much an open issue. While most algorithms are tested on the same datasets, there is no shared experimental protocol followed by all. As such, and due to the under-use of ablation studies, there is a lack of clarity regarding why certain methods are more effective than others. Our first contribution is a benchmark of 8 NAS methods on 5 datasets. To overcome the hurdle of comparing methods with different search spaces, we propose using a method’s relative improvement over the randomly sampled average architecture, which effectively removes advantages arising from expertly engineered search spaces or training protocols. Surprisingly, we find that many NAS techniques struggle to significantly beat the average architecture baseline. We perform further experiments with the commonly used DARTS search space in order to understand the contribution of each component in the NAS pipeline. These experiments highlight that: (i) the use of tricks in the evaluation protocol has a predominant impact on the reported performance of architectures; (ii) the cell-based search space has a very narrow accuracy range, such that the seed has a considerable impact on architecture rankings; (iii) the hand-designed macrostructure (cells) is more important than the searched micro-structure (operations); and (iv) the depth-gap is a real phenomenon, evidenced by the change in rankings between 8 and 20 cell architectures. To conclude, we suggest best practices, that we hope will prove useful for the community and help mitigate current NAS pitfalls, e.g. difficulties in reproducibility and comparison of search methods. We provide the code used for our experiments at link-to-come.
The paper scrutinizes commonly used evaluation strategies for neural architecture search. The first contribution is to compare architectures found by 5 different search strategies from the literature against randomly sampled architectures from the same underlying search space. The paper shows that across different image classification datasets, most neural architecture search methods are not consistently finding solutions that achieve better performance than random architectures. The second major contribution of the paper is to show that the state-of-the-art performance achieved by many neural architecture search methods can be largely attributed to advanced regularization tricks. In general, the paper is well written and easy to follow. The paper sheds a rather grim light on the current state of neural architecture search, but I think it could raise awareness of common pitfalls and help to make future work more rigorous. While poorly designed search spaces are perhaps a problem that many people in the community are aware of, this is, to the best of my knowledge, the first paper that systematically shows that for several commonly used search spaces there is not much to be optimized. Besides that, the paper shows that, maybe not surprisingly, the training protocol seems to be more important than the actual architecture search, which I also found rather worrisome. It would be nice if Figure 3 could include another bar that shows the performance of a manually designed architecture, for example a residual network, trained with the same regularization techniques. A caveat of the paper is that mostly local methods with weight-sharing are considered, instead of more global methods, such as evolutionary algorithms or Bayesian optimization, which also showed strong performance on neural architecture search problems in the past. Furthermore, the paper doesn't mention some recent work in neural architecture search that presents more thoroughly designed search spaces, e.g. Ying et al. It would also be helpful if the paper could elaborate on how better search spaces could be designed. Nas-bench-101: Towards reproducible neural architecture search C Ying, A Klein, E Real, E Christiansen, K Murphy, F Hutter ICML 2019 Minor comments: - Section 4.1: How are the hyperparameters of drop path probability, cutout length and auxiliary tower weight chosen? post rebuttal ------------------ I thank the authors for taking the time to answer my questions and performing the additional experiments. The paper highlights two important issues in the current NAS literature: evaluating methods with different search spaces and non-standardised training protocols. I do think that the paper makes an important contribution which hopefully helps future work to overcome these flaws.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNAS evaluation is frustratingly hard\nNeural Architecture Search (NAS) is an exciting new field which promises to be as much as a game-changer as Convolutional Neural Networks were in 2012. Despite many great works leading to substantial improvements on a variety of tasks, comparison between different methods is still very much an open issue. While most algorithms are tested on the same datasets, there is no shared experimental protocol followed by all. As such, and due to the under-use of ablation studies, there is a lack of clarity regarding why certain methods are more effective than others. Our first contribution is a benchmark of 8 NAS methods on 5 datasets. To overcome the hurdle of comparing methods with different search spaces, we propose using a method’s relative improvement over the randomly sampled average architecture, which effectively removes advantages arising from expertly engineered search spaces or training protocols. Surprisingly, we find that many NAS techniques struggle to significantly beat the average architecture baseline. We perform further experiments with the commonly used DARTS search space in order to understand the contribution of each component in the NAS pipeline. These experiments highlight that: (i) the use of tricks in the evaluation protocol has a predominant impact on the reported performance of architectures; (ii) the cell-based search space has a very narrow accuracy range, such that the seed has a considerable impact on architecture rankings; (iii) the hand-designed macrostructure (cells) is more important than the searched micro-structure (operations); and (iv) the depth-gap is a real phenomenon, evidenced by the change in rankings between 8 and 20 cell architectures. To conclude, we suggest best practices, that we hope will prove useful for the community and help mitigate current NAS pitfalls, e.g. difficulties in reproducibility and comparison of search methods. We provide the code used for our experiments at link-to-come.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper scrutinizes commonly used evaluation strategies for neural architecture search. The first contribution is to compare architectures found by 5 different search strategies from the literature against randomly sampled architectures from the same underlying search space. The paper shows that across different image classification datasets, the most neural architecture search methods are not consistently finding solutions that achieve a better performance than random architectures. The second major contribution of the paper is to show that the state-of-the-art performance achieved by the many neural architecture search methods can be largely attributed to advanced regularization tricks. In general the paper is well written and easy to follow. The paper sheds a rather grim light on the current state of neural architecture search, but I think it could raise awareness of common pitfalls and help to make future work more rigorous. While poorly designed search spaces is maybe a problem that many people in the community are aware of, this is, to the best of my knowledge, the first paper that systematically shows that for several commonly used search spaces there is not much to be optimized. Besides that, the paper shows that, maybe not surprisingly, the training protocol seems to be more important than the actual architecture search, which I also found rather worrisome. It would be nice, if Figure 3 could include another bar that shows the performance of a manually designed architecture, for example a residual network, trained with the same regularization techniques. A caveat of the paper is that mostly local methods with weight-sharing are considered, instead of more global methods, such as evolutionary algorithms or Bayesian optimization, which also showed strong performance on neural architecture search problems in the past. Furthermore, the paper doesn*t mention some recent work in neural architecture search that present more thoroughly designed search spaces, e.g Ying et al. It would also be helpful if the paper could elaborate, on how better search spaces could be designed. Nas-bench-101: Towards reproducible neural architecture search C Ying, A Klein, E Real, E Christiansen, K Murphy, F Hutter ICML 2019 Minor comments: - Section 4.1 How are the hyperparameters of drop path probability, cutout length and auxiliary tower weight chosen? post rebuttal ------------------ I thank the authors for taking the time to answer my questions and performing the additional experiments. The paper highlights two important issues in the current NAS literature: evaluating methods with different search spaces and non-standardised training protocols. I do think that the paper makes an important contribution which hopefully helps future work to over come these flaws.", "role": "assistant" } ]
[ "The paper scrutinizes commonly used evaluation strategies for neural architecture search.", "The first contribution is to compare architectures found by 5 different search strategies from the literature against randomly sampled architectures from the same underlying search space.", "The paper shows that across different image classification datasets, the most neural architecture search methods are not consistently finding solutions that achieve a better performance than random architectures.", "The second major contribution of the paper is to show that the state-of-the-art performance achieved by the many neural architecture search methods can be largely attributed to advanced regularization tricks.", "In general the paper is well written and easy to follow.", "The paper sheds a rather grim light on the current state of neural architecture search, but I think it could raise awareness of common pitfalls and help to make future work more rigorous.", "While poorly designed search spaces is maybe a problem that many people in the community are aware of, this is, to the best of my knowledge, the first paper that systematically shows that for several commonly used search spaces there is not much to be optimized.", "Besides that, the paper shows that, maybe not surprisingly, the training protocol seems to be more important than the actual architecture search, which I also found rather worrisome.", "It would be nice, if Figure 3 could include another bar that shows the performance of a manually designed architecture, for example a residual network, trained with the same regularization techniques.", "A caveat of the paper is that mostly local methods with weight-sharing are considered, instead of more global methods, such as evolutionary algorithms or Bayesian optimization, which also showed strong performance on neural architecture search problems in the past.", "Furthermore, the paper doesn*t mention some recent work in neural architecture search that present more thoroughly designed search spaces, e.g Ying et al.", "It would also be helpful if the paper could elaborate, on how better search spaces could be designed.", "Nas-bench-101: Towards reproducible neural architecture search C Ying, A Klein, E Real, E Christiansen, K Murphy, F Hutter ICML 2019 Minor comments: - Section 4.1 How are the hyperparameters of drop path probability, cutout length and auxiliary tower weight chosen?", "post rebuttal ------------------ I thank the authors for taking the time to answer my questions and performing the additional experiments.", "The paper highlights two important issues in the current NAS literature: evaluating methods with different search spaces and non-standardised training protocols.", "I do think that the paper makes an important contribution which hopefully helps future work to over come these flaws." ]
{ "criticism": 4, "example": 1, "importance_and_relevance": 8, "materials_and_methods": 12, "praise": 3, "presentation_and_reporting": 3, "results_and_discussion": 0, "suggestion_and_solution": 3, "total": 16 }
0.25
0.0625
0.5
0.75
0.1875
0.1875
0
0.1875
16
1
1
1
1
1
1
0
1
2.125
1.982958
0.142042
NAS evaluation is frustratingly hard
Neural Architecture Search (NAS) is an exciting new field which promises to be as much as a game-changer as Convolutional Neural Networks were in 2012. Despite many great works leading to substantial improvements on a variety of tasks, comparison between different methods is still very much an open issue. While most algorithms are tested on the same datasets, there is no shared experimental protocol followed by all. As such, and due to the under-use of ablation studies, there is a lack of clarity regarding why certain methods are more effective than others. Our first contribution is a benchmark of 8 NAS methods on 5 datasets. To overcome the hurdle of comparing methods with different search spaces, we propose using a method’s relative improvement over the randomly sampled average architecture, which effectively removes advantages arising from expertly engineered search spaces or training protocols. Surprisingly, we find that many NAS techniques struggle to significantly beat the average architecture baseline. We perform further experiments with the commonly used DARTS search space in order to understand the contribution of each component in the NAS pipeline. These experiments highlight that: (i) the use of tricks in the evaluation protocol has a predominant impact on the reported performance of architectures; (ii) the cell-based search space has a very narrow accuracy range, such that the seed has a considerable impact on architecture rankings; (iii) the hand-designed macrostructure (cells) is more important than the searched micro-structure (operations); and (iv) the depth-gap is a real phenomenon, evidenced by the change in rankings between 8 and 20 cell architectures. To conclude, we suggest best practices, that we hope will prove useful for the community and help mitigate current NAS pitfalls, e.g. difficulties in reproducibility and comparison of search methods. We provide the code used for our experiments at link-to-come.
In this submission, the authors conduct a series of experiments on five image classification datasets to compare several existing NAS methods. Based on these experimental results, they point out: 1) how a network is trained (i.e., training protocols/tricks such as DropPath, Cutout) plays an important role in the final accuracy; 2) within the search space, the existing NAS methods perform close to or slightly better than a random sampling baseline; 3) hyperparameters of NAS methods also have a significant effect on the performance. With these interesting findings, I suggest rejecting this submission. The reasons are as follows: 1) For the first finding on training protocols, several existing papers and books have already discussed it, such as Li & Talwalkar (2019) and the book chapter {Neural Architecture Search} by Thomas, Jan Hendrik and Frank. 2) For the second finding of the search space and the performance of a randomly sampled architecture, existing work from the Facebook AI Research group has studied this. And the existing work gives more experiments and discussion than this submission (from my own perspective). https://arxiv.org/pdf/1904.01569.pdf 3) The experiments conducted in this submission also carry certain risks in supporting its claims/conclusions. For example, only image classification datasets are adopted. Another factor is the hyper-parameter tuning (actually, the authors also mention this in the last paragraph on Page 4). All the compared methods, either NAS methods or the random sampling baseline, should receive the same training procedure to get a fair experimental comparison. The above-mentioned existing work lessens the contributions of this submission, and the experimental results may not be convincing enough. These lead to a reject. However, a great point is made by the authors in the last paragraph of Section 6: hyperparameters of NAS methods should be either stable enough or counted toward the cost.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNAS evaluation is frustratingly hard\nNeural Architecture Search (NAS) is an exciting new field which promises to be as much as a game-changer as Convolutional Neural Networks were in 2012. Despite many great works leading to substantial improvements on a variety of tasks, comparison between different methods is still very much an open issue. While most algorithms are tested on the same datasets, there is no shared experimental protocol followed by all. As such, and due to the under-use of ablation studies, there is a lack of clarity regarding why certain methods are more effective than others. Our first contribution is a benchmark of 8 NAS methods on 5 datasets. To overcome the hurdle of comparing methods with different search spaces, we propose using a method’s relative improvement over the randomly sampled average architecture, which effectively removes advantages arising from expertly engineered search spaces or training protocols. Surprisingly, we find that many NAS techniques struggle to significantly beat the average architecture baseline. We perform further experiments with the commonly used DARTS search space in order to understand the contribution of each component in the NAS pipeline. These experiments highlight that: (i) the use of tricks in the evaluation protocol has a predominant impact on the reported performance of architectures; (ii) the cell-based search space has a very narrow accuracy range, such that the seed has a considerable impact on architecture rankings; (iii) the hand-designed macrostructure (cells) is more important than the searched micro-structure (operations); and (iv) the depth-gap is a real phenomenon, evidenced by the change in rankings between 8 and 20 cell architectures. To conclude, we suggest best practices, that we hope will prove useful for the community and help mitigate current NAS pitfalls, e.g. difficulties in reproducibility and comparison of search methods. We provide the code used for our experiments at link-to-come.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>\n\n\n\nIn this submission, the authors conduct a series of experiments on five image classification datasets to compare several existing NAS methods. Based on these experimental results, they point out: 1) how a network is trained (i.e., training protocols/tricks such as DropPath, Cutout) plays an important role for the final accuracy; 2) within the search space, the existing NAS methods perform close to or slightly better than a random sampling baseline; 3) hyperparameters of NAS methods also have significant effect on the performance. With these interesting findings, I suggest rejecting this submission. The reasons are as follows: 1) For the first finding of training protocol, several existing papers and books already discussed it, such as Li & Talwalkar (2019) and the book chapter {Neural Architecture Search} by Thomas, Jan Hendrik and Frank. 2) For the second finding of the search space and the performance of a randomly sampled architecture, existing work from Facebook AI Research group has studied this. And the existing work gives more experiments and discussion than this submission (from my own perspective). https://arxiv.org/pdf/1904.01569.pdf 3) The conducted experiments in this submission also have certain risks to support the claims/conclusions of it. For example, only datasets of image classification are adopted. Another factor is the hyper-parameter tuning (actually, the authors also mention this in the last paragraph on Page 4). All the compared methods, either NAS methods or random sample baseline, should receive the same training procedure to get a fair experimental comparison. The above mentioned existing work makes the contributions of this submission less, and the experimental results may not be convincing enough. These lead to a reject. However, a great point is made by the authors in the last paragraph of Section 6: hyperparameter of NAS methods should be either stable enough or counted toward the cost.", "role": "assistant" } ]
[ "In this submission, the authors conduct a series of experiments on five image classification datasets to compare several existing NAS methods.", "Based on these experimental results, they point out: 1) how a network is trained (i.e., training protocols/tricks such as DropPath, Cutout) plays an important role for the final accuracy; 2) within the search space, the existing NAS methods perform close to or slightly better than a random sampling baseline; 3) hyperparameters of NAS methods also have significant effect on the performance.", "With these interesting findings, I suggest rejecting this submission.", "The reasons are as follows: 1) For the first finding of training protocol, several existing papers and books already discussed it, such as Li & Talwalkar (2019) and the book chapter {Neural Architecture Search} by Thomas, Jan Hendrik and Frank.", "2) For the second finding of the search space and the performance of a randomly sampled architecture, existing work from Facebook AI Research group has studied this.", "And the existing work gives more experiments and discussion than this submission (from my own perspective).", "https://arxiv.org/pdf/1904.01569.pdf 3) The conducted experiments in this submission also have certain risks to support the claims/conclusions of it.", "For example, only datasets of image classification are adopted.", "Another factor is the hyper-parameter tuning (actually, the authors also mention this in the last paragraph on Page 4).", "All the compared methods, either NAS methods or random sample baseline, should receive the same training procedure to get a fair experimental comparison.", "The above mentioned existing work makes the contributions of this submission less, and the experimental results may not be convincing enough.", "These lead to a reject.", "However, a great point is made by the authors in the last paragraph of Section 6: hyperparameter of NAS methods should be either stable enough or counted toward the cost." ]
{ "criticism": 1, "example": 2, "importance_and_relevance": 2, "materials_and_methods": 11, "praise": 1, "presentation_and_reporting": 2, "results_and_discussion": 6, "suggestion_and_solution": 3, "total": 13 }
0.076923
0.153846
0.153846
0.846154
0.076923
0.153846
0.461538
0.230769
13
1
1
1
1
1
1
1
1
2.153846
2.153846
0
Order Learning and Its Application to Age Estimation
We propose order learning to determine the order graph of classes, representing ranks or priorities, and classify an object instance into one of the classes. To this end, we design a pairwise comparator to categorize the relationship between two instances into one of three cases: one instance is `greater than,' `similar to,' or `smaller than' the other. Then, by comparing an input instance with reference instances and maximizing the consistency among the comparison results, the class of the input can be estimated reliably. We apply order learning to develop a facial age estimator, which provides the state-of-the-art performance. Moreover, the performance is further improved when the order graph is divided into disjoint chains using gender and ethnic group information or even in an unsupervised manner.
The paper proposes a method for learning partial orders based on learning fuzzy pairwise comparisons (smaller/greater/approximately equal to), and the retention of a set of representatives in each chain in order to allow consistent results by consistency maximization. The method is applied to the problem of estimating the age based on faces. Extensions proposed are the learning of multiple disjoint chains on this dataset, manually partitioned or learned through an iterative assignment algorithm that has similarities with a soft expectation-maximization principle. Extensive experiments are done on the age estimation problem with comparison to existing approaches, and an application to aesthetic assessment is proposed in the appendix. The paper is very well written and contains very extensive and relevant experiments. I only have minor concerns: 1) I couldn't find the explanation for the underlined numbers in Table 2. 2) It would be good to include the details of Appendix C in the text; as is, it looks like an artificial space-saving trick. 3) typo in Section 3.2: 'we assume that the chain, to which the instance belong, is known' -> 'we assume that the chain to which the instance belongs is known'
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOrder Learning and Its Application to Age Estimation\nWe propose order learning to determine the order graph of classes, representing ranks or priorities, and classify an object instance into one of the classes. To this end, we design a pairwise comparator to categorize the relationship between two instances into one of three cases: one instance is `greater than,* `similar to,* or `smaller than* the other. Then, by comparing an input instance with reference instances and maximizing the consistency among the comparison results, the class of the input can be estimated reliably. We apply order learning to develop a facial age estimator, which provides the state-of-the-art performance. Moreover, the performance is further improved when the order graph is divided into disjoint chains using gender and ethnic group information or even in an unsupervised manner.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes a method for learning partial orders based on learning fuzzy pairwise comparisons (smaller/greater/approximatively equal to), and the retaining of a set of representants in each chain in order to allow consistent result by consistency maximization. The method is applied to the problem of estimating the age based on faces. Extensions proposed are the learning of multiple disjoint chains on this dataset, manually partitioned or learned through an iterative assignment algorithm that has similarities with a soft expectation-maximization principle. Extensive experiments are done on the age estimation problem with comparison to exisiting approaches, and an application to aesthetic assessment is proposed in appendix. The paper is very well written and contains very extensive and relevant experiments. I only have minor concerns: 1) I couldn*t find the explanation for the underlined numbers in Table 2. 2) It would be good to include the details of Appendix C in the text, as is, it looks like an artificial space saving trick. 
3) typo in Section 3.2: *we assume that the chain, to which the instance belong, is known* -> *we assume that the chain to which the instance belongs is known*", "role": "assistant" } ]
[ "The paper proposes a method for learning partial orders based on learning fuzzy pairwise comparisons (smaller/greater/approximatively equal to), and the retaining of a set of representants in each chain in order to allow consistent result by consistency maximization.", "The method is applied to the problem of estimating the age based on faces.", "Extensions proposed are the learning of multiple disjoint chains on this dataset, manually partitioned or learned through an iterative assignment algorithm that has similarities with a soft expectation-maximization principle.", "Extensive experiments are done on the age estimation problem with comparison to exisiting approaches, and an application to aesthetic assessment is proposed in appendix.", "The paper is very well written and contains very extensive and relevant experiments.", "I only have minor concerns: 1) I couldn*t find the explanation for the underlined numbers in Table 2.", "2) It would be good to include the details of Appendix C in the text, as is, it looks like an artificial space saving trick.", "3) typo in Section 3.2: *we assume that the chain, to which the instance belong, is known* -> *we assume that the chain to which the instance belongs is known*" ]
{ "criticism": 1, "example": 1, "importance_and_relevance": 2, "materials_and_methods": 5, "praise": 2, "presentation_and_reporting": 4, "results_and_discussion": 2, "suggestion_and_solution": 1, "total": 8 }
0.125
0.125
0.25
0.625
0.25
0.5
0.25
0.125
8
1
1
1
1
1
1
1
1
2.25
1.855439
0.394561
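The trailing numeric block of each record appears to be derived from its criteria_count field: eight per-category fractions (count divided by total), the sentence total, eight 0/1 presence flags, a reward equal to the sum of the fractions, a length-adjusted reward, and the length penalty separating the two. The sketch below re-derives those values for the record above; names such as reward_value and length_penalty are assumed labels for the unnamed numeric columns, not a documented schema.

```python
# Minimal sketch (assumed schema): re-derive the trailing numeric fields of one record
# from its criteria_count. reward_value / length_penalty are hypothetical labels for
# the unnamed numeric columns in this dump.

criteria_count = {
    "criticism": 1,
    "example": 1,
    "importance_and_relevance": 2,
    "materials_and_methods": 5,
    "praise": 2,
    "presentation_and_reporting": 4,
    "results_and_discussion": 2,
    "suggestion_and_solution": 1,
    "total": 8,
}

total = criteria_count["total"]
categories = [k for k in criteria_count if k != "total"]

# Per-category fractions: the eight values 0.125, 0.125, 0.25, 0.625, ... above.
fractions = {k: criteria_count[k] / total for k in categories}

# Presence flags: the eight 0/1 lines (all 1 for this record).
has_flags = {f"has_{k}": int(criteria_count[k] > 0) for k in categories}

# The first reward-like value matches the sum of the fractions; the adjusted value
# subtracts the penalty listed on the record's final numeric line.
reward_value = round(sum(fractions.values()), 6)                        # -> 2.25
length_penalty = 0.394561                                               # copied from the record
reward_value_length_adjusted = round(reward_value - length_penalty, 6)  # -> 1.855439

print(fractions, has_flags, reward_value, reward_value_length_adjusted)
```

The same relationship holds, up to rounding, for the other complete records below (e.g. 2.5 - 0.142042 = 2.357958).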
Low-Resource Knowledge-Grounded Dialogue Generation
Responding with knowledge has been recognized as an important capability for an intelligent conversational agent. Yet knowledge-grounded dialogues, as training data for learning such a response generation model, are difficult to obtain. Motivated by the challenge in practice, we consider knowledge-grounded dialogue generation under a natural assumption that only limited training examples are available. In such a low-resource setting, we devise a disentangled response decoder in order to isolate parameters that depend on knowledge-grounded dialogues from the entire generation model. By this means, the major part of the model can be learned from a large number of ungrounded dialogues and unstructured documents, while the remaining small parameters can be well fitted using the limited training examples. Evaluation results on two benchmarks indicate that with only training data, our model can achieve the state-of-the-art performance and generalize well on out-of-domain knowledge.
The authors propose a novel framework for training a knowledge-grounded dialogue model. They decouple the three main elements of such a model - a language model, a context processor, and a document reader - and as such can pretrain each component separately. They achieve state-of-the-art results on two benchmark datasets, and can additionally obtain near-state-of-the-art results while training on a fraction of the task data. Overall I think it’s a strong paper with a good set of experiments, baselines, and considers multiple datasets. Disentangling the model elements is a clever way to allow for more robust pre-training, and indeed yields favorable results. The authors show that just the pre-training aspect is not the root cause of their boost in performance, as a pre-trained baseline model fails to replicate their best results. The contribution is broadly applicable to other areas in which data collation is more difficult; the authors additionally do a good job of pointing out that their knowledge encoder is not limited to text but can also use other knowledge grounding including images, videos, or a knowledge-base. Finally, the authors detail thorough ablation studies for their models. One major thing missing from their ablation in Figure 2 is a setting when *no* pretraining is used. That would be much more comparable to the setting used for TMN, since that had *no* pretraining available to it. Alternative, adding pretraining to the baselines would be another good way to do this, which would help disentangle how much the architecture is helpful over the pretraining. Although the authors point out that a major advantage of their architecture is that we can separate the pretraining for each of the components, I would also be interested to see how they find the model doing if a single source of pretraining is used. I.e., only reddit pretrained weights for all 3 components, or only wikipedia etc. I definitely don’t think that holds back this paper, just think it would provide some evidence of the value gained utilizing the disparate sources.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLow-Resource Knowledge-Grounded Dialogue Generation\nResponding with knowledge has been recognized as an important capability for an intelligent conversational agent. Yet knowledge-grounded dialogues, as training data for learning such a response generation model, are difficult to obtain. Motivated by the challenge in practice, we consider knowledge-grounded dialogue generation under a natural assumption that only limited training examples are available. In such a low-resource setting, we devise a disentangled response decoder in order to isolate parameters that depend on knowledge-grounded dialogues from the entire generation model. By this means, the major part of the model can be learned from a large number of ungrounded dialogues and unstructured documents, while the remaining small parameters can be well fitted using the limited training examples. Evaluation results on two benchmarks indicate that with only training data, our model can achieve the state-of-the-art performance and generalize well on out-of-domain knowledge.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors propose a novel framework for training a knowledge-grounded dialogue model. They decouple the three main elements of such a model - a language model, a context processor, and a document reader - and as such can pretrain each component separately. They achieve state-of-the-art results on two benchmark datasets, and can additionally obtain near-state-of-the-art results while training on a fraction of the task data. Overall I think it’s a strong paper with a good set of experiments, baselines, and considers multiple datasets. Disentangling the model elements is a clever way to allow for more robust pre-training, and indeed yields favorable results. The authors show that just the pre-training aspect is not the root cause of their boost in performance, as a pre-trained baseline model fails to replicate their best results. 
The contribution is broadly applicable to other areas in which data collation is more difficult; the authors additionally do a good job of pointing out that their knowledge encoder is not limited to text but can also use other knowledge grounding including images, videos, or a knowledge-base. Finally, the authors detail thorough ablation studies for their models. One major thing missing from their ablation in Figure 2 is a setting when *no* pretraining is used. That would be much more comparable to the setting used for TMN, since that had *no* pretraining available to it. Alternative, adding pretraining to the baselines would be another good way to do this, which would help disentangle how much the architecture is helpful over the pretraining. Although the authors point out that a major advantage of their architecture is that we can separate the pretraining for each of the components, I would also be interested to see how they find the model doing if a single source of pretraining is used. I.e., only reddit pretrained weights for all 3 components, or only wikipedia etc. I definitely don’t think that holds back this paper, just think it would provide some evidence of the value gained utilizing the disparate sources.", "role": "assistant" } ]
[ "The authors propose a novel framework for training a knowledge-grounded dialogue model.", "They decouple the three main elements of such a model - a language model, a context processor, and a document reader - and as such can pretrain each component separately.", "They achieve state-of-the-art results on two benchmark datasets, and can additionally obtain near-state-of-the-art results while training on a fraction of the task data.", "Overall I think it’s a strong paper with a good set of experiments, baselines, and considers multiple datasets.", "Disentangling the model elements is a clever way to allow for more robust pre-training, and indeed yields favorable results.", "The authors show that just the pre-training aspect is not the root cause of their boost in performance, as a pre-trained baseline model fails to replicate their best results.", "The contribution is broadly applicable to other areas in which data collation is more difficult; the authors additionally do a good job of pointing out that their knowledge encoder is not limited to text but can also use other knowledge grounding including images, videos, or a knowledge-base.", "Finally, the authors detail thorough ablation studies for their models.", "One major thing missing from their ablation in Figure 2 is a setting when *no* pretraining is used.", "That would be much more comparable to the setting used for TMN, since that had *no* pretraining available to it.", "Alternative, adding pretraining to the baselines would be another good way to do this, which would help disentangle how much the architecture is helpful over the pretraining.", "Although the authors point out that a major advantage of their architecture is that we can separate the pretraining for each of the components, I would also be interested to see how they find the model doing if a single source of pretraining is used.", "I.e., only reddit pretrained weights for all 3 components, or only wikipedia etc.", "I definitely don’t think that holds back this paper, just think it would provide some evidence of the value gained utilizing the disparate sources." ]
{ "criticism": 2, "example": 1, "importance_and_relevance": 3, "materials_and_methods": 12, "praise": 4, "presentation_and_reporting": 2, "results_and_discussion": 3, "suggestion_and_solution": 4, "total": 14 }
0.142857
0.071429
0.214286
0.857143
0.285714
0.142857
0.214286
0.285714
14
1
1
1
1
1
1
1
1
2.214286
2.198503
0.015782
Learning to Represent Programs with Property Signatures
We introduce the notion of property signatures, a representation for programs and program specifications meant for consumption by machine learning algorithms. Given a function with input type τ_in and output type τ_out, a property is a function of type: (τ_in, τ_out) → Bool that (informally) describes some simple property of the function under consideration. For instance, if τ_in and τ_out are both lists of the same type, one property might ask ‘is the input list the same length as the output list?’. If we have a list of such properties, we can evaluate them all for our function to get a list of outputs that we will call the property signature. Crucially, we can ‘guess’ the property signature for a function given only a set of input/output pairs meant to specify that function. We discuss several potential applications of property signatures and show experimentally that they can be used to improve over a baseline synthesizer so that it emits twice as many programs in less than one-tenth of the time.
** Summary The paper studies the problem of program synthesis from examples and it proposes the notion of property signature as a set of *features* describing the program that can be used to simplify the synthesis by combining together sub-programs with similar features. ** Evaluation The paper is outside my scope of research, so my review is an educated guess. The use of properties to summarize some features about the program and the possibility to evaluate them directly from samples seems a very interesting idea. As the authors mentioned, this may help in creating useful features and useful architectures to simplify the learning task. The concept of property signatures is very well introduced and explained. The authors also provide an extensive comparison to related work. The empirical results seem promising in showing how property signatures make the synthesis much faster and better. The downsides of the paper are: - While it is clear how to build property signatures, it is quite unclear to me how they simplify the generation of programs that combine smaller/simpler programs. - Sect 3.2 on how to learn useful properties is rather vague and it would need a much more detail explanation. - Although the authors release code for the paper, the description of the experiments seem rather shallow and it requires a much higher level of detail on how the learning process is set up and executed.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning to Represent Programs with Property Signatures\nWe introduce the notion of property signatures, a representation for programs and program specifications meant for consumption by machine learning algorithms. Given a function with input type ?_in and output type ?_out, a property is a function of type: (?_in, ?_out) ? Bool that (informally) describes some simple property of the function under consideration. For instance, if ?_in and ?_out are both lists of the same type, one property might ask ‘is the input list the same length as the output list?’. If we have a list of such properties, we can evaluate them all for our function to get a list of outputs that we will call the property signature. Crucially, we can ‘guess’ the property signature for a function given only a set of input/output pairs meant to specify that function. We discuss several potential applications of property signatures and show experimentally that they can be used to improve over a baseline synthesizer so that it emits twice as many programs in less than one-tenth of the time.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n** Summary The paper studies the problem of program synthesis from examples and it proposes the notion of property signature as a set of *features* describing the program that can be used to simplify the synthesis by combining together sub-programs with similar features. ** Evaluation The paper is outside my scope of research, so my review is an educated guess. The use of properties to summarize some features about the program and the possibility to evaluate them directly from samples seems a very interesting idea. As the authors mentioned, this may help in creating useful features and useful architectures to simplify the learning task. The concept of property signatures is very well introduced and explained. The authors also provide an extensive comparison to related work. The empirical results seem promising in showing how property signatures make the synthesis much faster and better. 
The downsides of the paper are: - While it is clear how to build property signatures, it is quite unclear to me how they simplify the generation of programs that combine smaller/simpler programs. - Sect 3.2 on how to learn useful properties is rather vague and it would need a much more detail explanation. - Although the authors release code for the paper, the description of the experiments seem rather shallow and it requires a much higher level of detail on how the learning process is set up and executed.", "role": "assistant" } ]
[ "** Summary The paper studies the problem of program synthesis from examples and it proposes the notion of property signature as a set of *features* describing the program that can be used to simplify the synthesis by combining together sub-programs with similar features.", "** Evaluation The paper is outside my scope of research, so my review is an educated guess.", "The use of properties to summarize some features about the program and the possibility to evaluate them directly from samples seems a very interesting idea.", "As the authors mentioned, this may help in creating useful features and useful architectures to simplify the learning task.", "The concept of property signatures is very well introduced and explained.", "The authors also provide an extensive comparison to related work.", "The empirical results seem promising in showing how property signatures make the synthesis much faster and better.", "The downsides of the paper are: - While it is clear how to build property signatures, it is quite unclear to me how they simplify the generation of programs that combine smaller/simpler programs.", "- Sect 3.2 on how to learn useful properties is rather vague and it would need a much more detail explanation.", "- Although the authors release code for the paper, the description of the experiments seem rather shallow and it requires a much higher level of detail on how the learning process is set up and executed." ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 4, "praise": 5, "presentation_and_reporting": 4, "results_and_discussion": 1, "suggestion_and_solution": 5, "total": 10 }
0.3
0
0.3
0.4
0.5
0.4
0.1
0.5
10
1
0
1
1
1
1
1
1
2.5
2.357958
0.142042
Learning to Represent Programs with Property Signatures
We introduce the notion of property signatures, a representation for programs and program specifications meant for consumption by machine learning algorithms. Given a function with input type τ_in and output type τ_out, a property is a function of type: (τ_in, τ_out) → Bool that (informally) describes some simple property of the function under consideration. For instance, if τ_in and τ_out are both lists of the same type, one property might ask ‘is the input list the same length as the output list?’. If we have a list of such properties, we can evaluate them all for our function to get a list of outputs that we will call the property signature. Crucially, we can ‘guess’ the property signature for a function given only a set of input/output pairs meant to specify that function. We discuss several potential applications of property signatures and show experimentally that they can be used to improve over a baseline synthesizer so that it emits twice as many programs in less than one-tenth of the time.
I would like to be able to recommend accepting this paper, but I can*t. It describes two contributions to the community that could be valuable: 1: searcho, a programming language designed for studying program synthesis via search over programs 2: A method of automatically choosing interesting features for guided program search. The paper does not give enough evidence to ascertain the value of either contribution. There are now a large number of program synthesis works using ML, and a huge literature on program synthesis without ML. While many of ML works use a DSL as the testbed, surely the authors* feature selection method can be applied in some of these DSLs, allowing comparisons with current art? On the other hand, many of the ML methods don*t require the DSL used in the works that introduce them. Can searcho + your set of training and test programs distinguish between these methods, and establish a benchmark? Are there some more realistic tasks/applications that searcho makes reasonable, not readily approached with previous DSLs used for testing these algorithms? I think this paper could be valuable if it could demonstrate either in what ways the new feature selection algorithm is an improvement over prior ML methods, or that searcho is valuable as a benchmark or for approaching interesting applications. ###################################################################################### edit after author response: raising to 6, I think the deepcoder experiments are useful in contextualizing the contribution of the authors* feature selection method.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning to Represent Programs with Property Signatures\nWe introduce the notion of property signatures, a representation for programs and program specifications meant for consumption by machine learning algorithms. Given a function with input type ?_in and output type ?_out, a property is a function of type: (?_in, ?_out) ? Bool that (informally) describes some simple property of the function under consideration. For instance, if ?_in and ?_out are both lists of the same type, one property might ask ‘is the input list the same length as the output list?’. If we have a list of such properties, we can evaluate them all for our function to get a list of outputs that we will call the property signature. Crucially, we can ‘guess’ the property signature for a function given only a set of input/output pairs meant to specify that function. We discuss several potential applications of property signatures and show experimentally that they can be used to improve over a baseline synthesizer so that it emits twice as many programs in less than one-tenth of the time.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nI would like to be able to recommend accepting this paper, but I can*t. It describes two contributions to the community that could be valuable: 1: searcho, a programming language designed for studying program synthesis via search over programs 2: A method of automatically choosing interesting features for guided program search. The paper does not give enough evidence to ascertain the value of either contribution. There are now a large number of program synthesis works using ML, and a huge literature on program synthesis without ML. While many of ML works use a DSL as the testbed, surely the authors* feature selection method can be applied in some of these DSLs, allowing comparisons with current art? On the other hand, many of the ML methods don*t require the DSL used in the works that introduce them. 
Can searcho + your set of training and test programs distinguish between these methods, and establish a benchmark? Are there some more realistic tasks/applications that searcho makes reasonable, not readily approached with previous DSLs used for testing these algorithms? I think this paper could be valuable if it could demonstrate either in what ways the new feature selection algorithm is an improvement over prior ML methods, or that searcho is valuable as a benchmark or for approaching interesting applications. ###################################################################################### edit after author response: raising to 6, I think the deepcoder experiments are useful in contextualizing the contribution of the authors* feature selection method.", "role": "assistant" } ]
[ "I would like to be able to recommend accepting this paper, but I can*t. It describes two contributions to the community that could be valuable: 1: searcho, a programming language designed for studying program synthesis via search over programs 2: A method of automatically choosing interesting features for guided program search.", "The paper does not give enough evidence to ascertain the value of either contribution.", "There are now a large number of program synthesis works using ML, and a huge literature on program synthesis without ML.", "While many of ML works use a DSL as the testbed, surely the authors* feature selection method can be applied in some of these DSLs, allowing comparisons with current art?", "On the other hand, many of the ML methods don*t require the DSL used in the works that introduce them.", "Can searcho + your set of training and test programs distinguish between these methods, and establish a benchmark?", "Are there some more realistic tasks/applications that searcho makes reasonable, not readily approached with previous DSLs used for testing these algorithms?", "I think this paper could be valuable if it could demonstrate either in what ways the new feature selection algorithm is an improvement over prior ML methods, or that searcho is valuable as a benchmark or for approaching interesting applications.", "###################################################################################### edit after author response: raising to 6, I think the deepcoder experiments are useful in contextualizing the contribution of the authors* feature selection method." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 7, "praise": 3, "presentation_and_reporting": 1, "results_and_discussion": 1, "suggestion_and_solution": 4, "total": 9 }
0.111111
0
0.333333
0.777778
0.333333
0.111111
0.111111
0.444444
9
1
0
1
1
1
1
1
1
2.222222
1.969703
0.252519
Quantifying the Cost of Reliable Photo Authentication via High-Performance Learned Lossy Representations
Detection of photo manipulation relies on subtle statistical traces, notoriously removed by aggressive lossy compression employed online. We demonstrate that end-to-end modeling of complex photo dissemination channels allows for codec optimization with explicit provenance objectives. We design a lightweight trainable lossy image codec, that delivers competitive rate-distortion performance, on par with best hand-engineered alternatives, but has lower computational footprint on modern GPU-enabled platforms. Our results show that significant improvements in manipulation detection accuracy are possible at fractional costs in bandwidth/storage. Our codec improved the accuracy from 37% to 86% even at very low bit-rates, well below the practicality of JPEG (QF 20).
This paper presents a learned image compression method that is able to be robust under a variety of tasks. The results aren*t state of the art in terms of rate-distortion performance, but this paper has a very good analysis of the results, and has produced a very fast codec. In that sense, this is a very interesting paper that may lead to other fast methods (the other fast method they compared the runtime against - WaveOne never published a complete description). This paper should be likely accepted, but the authors should town down the claims a bit. The results presented do NOT show that this method is better than the best hand engineered approach, despite what they claim. Even compared to BPG, which is NOT state of the art, the results are a mixed bag. We would like to point out to the authors that the VVC codec has shown much stronger performance than BPG, and similarly the AV1 codec has surpassed the performance of BPG. Moreover, even Pik has also surpassed the performance of BPG, so just showing stronger performance than BPG is not grounds to make the claim that this method is superior to hand engineered approaches. Moreover, as I stated earlier, this method is not even better all the time, therefore weakening the claim. On the positives: - the paper fully describes the architecture, unlike WaveOne - the runtime numbers are impressive (as far as I know, there is no faster published method) - the authors consider applications other than compression performance (such as classification performance in forensic analysis) On the negatives, which I highly suggest that the authors fix if this paper is to be taken seriously by the community: - please be sure to explain that SSIM is computed in <RGB | grayscale> - please be more explicit about which loss is used during training for distortion (i.e., *we use MSE for the training loss, but stop training when SSIM converges*) - please provide PSNR numbers for the method; and ideally MS-SSIM (in decibels) instead of PSNR - please add other neural compression methods to the graphs - please clarify that you create a file and decode a file for each image used to create the graphs (very important topic), as opposed to using the estimated file size - tone down the claims w.r.t. beating classical codecs
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nQuantifying the Cost of Reliable Photo Authentication via High-Performance Learned Lossy Representations\nDetection of photo manipulation relies on subtle statistical traces, notoriously removed by aggressive lossy compression employed online. We demonstrate that end-to-end modeling of complex photo dissemination channels allows for codec optimization with explicit provenance objectives. We design a lightweight trainable lossy image codec, that delivers competitive rate-distortion performance, on par with best hand-engineered alternatives, but has lower computational footprint on modern GPU-enabled platforms. Our results show that significant improvements in manipulation detection accuracy are possible at fractional costs in bandwidth/storage. Our codec improved the accuracy from 37% to 86% even at very low bit-rates, well below the practicality of JPEG (QF 20).", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents a learned image compression method that is able to be robust under a variety of tasks. The results aren*t state of the art in terms of rate-distortion performance, but this paper has a very good analysis of the results, and has produced a very fast codec. In that sense, this is a very interesting paper that may lead to other fast methods (the other fast method they compared the runtime against - WaveOne never published a complete description). This paper should be likely accepted, but the authors should town down the claims a bit. The results presented do NOT show that this method is better than the best hand engineered approach, despite what they claim. Even compared to BPG, which is NOT state of the art, the results are a mixed bag. We would like to point out to the authors that the VVC codec has shown much stronger performance than BPG, and similarly the AV1 codec has surpassed the performance of BPG. 
Moreover, even Pik has also surpassed the performance of BPG, so just showing stronger performance than BPG is not grounds to make the claim that this method is superior to hand engineered approaches. Moreover, as I stated earlier, this method is not even better all the time, therefore weakening the claim. On the positives: - the paper fully describes the architecture, unlike WaveOne - the runtime numbers are impressive (as far as I know, there is no faster published method) - the authors consider applications other than compression performance (such as classification performance in forensic analysis) On the negatives, which I highly suggest that the authors fix if this paper is to be taken seriously by the community: - please be sure to explain that SSIM is computed in <RGB | grayscale> - please be more explicit about which loss is used during training for distortion (i.e., *we use MSE for the training loss, but stop training when SSIM converges*) - please provide PSNR numbers for the method; and ideally MS-SSIM (in decibels) instead of PSNR - please add other neural compression methods to the graphs - please clarify that you create a file and decode a file for each image used to create the graphs (very important topic), as opposed to using the estimated file size - tone down the claims w.r.t. beating classical codecs", "role": "assistant" } ]
[ "This paper presents a learned image compression method that is able to be robust under a variety of tasks.", "The results aren*t state of the art in terms of rate-distortion performance, but this paper has a very good analysis of the results, and has produced a very fast codec.", "In that sense, this is a very interesting paper that may lead to other fast methods (the other fast method they compared the runtime against - WaveOne never published a complete description).", "This paper should be likely accepted, but the authors should town down the claims a bit.", "The results presented do NOT show that this method is better than the best hand engineered approach, despite what they claim.", "Even compared to BPG, which is NOT state of the art, the results are a mixed bag.", "We would like to point out to the authors that the VVC codec has shown much stronger performance than BPG, and similarly the AV1 codec has surpassed the performance of BPG.", "Moreover, even Pik has also surpassed the performance of BPG, so just showing stronger performance than BPG is not grounds to make the claim that this method is superior to hand engineered approaches.", "Moreover, as I stated earlier, this method is not even better all the time, therefore weakening the claim.", "On the positives: - the paper fully describes the architecture, unlike WaveOne - the runtime numbers are impressive (as far as I know, there is no faster published method) - the authors consider applications other than compression performance (such as classification performance in forensic analysis) On the negatives, which I highly suggest that the authors fix if this paper is to be taken seriously by the community: - please be sure to explain that SSIM is computed in <RGB | grayscale> - please be more explicit about which loss is used during training for distortion (i.e., *we use MSE for the training loss, but stop training when SSIM converges*) - please provide PSNR numbers for the method; and ideally MS-SSIM (in decibels) instead of PSNR - please add other neural compression methods to the graphs - please clarify that you create a file and decode a file for each image used to create the graphs (very important topic), as opposed to using the estimated file size - tone down the claims w.r.t.", "beating classical codecs" ]
{ "criticism": 3, "example": 1, "importance_and_relevance": 1, "materials_and_methods": 8, "praise": 3, "presentation_and_reporting": 1, "results_and_discussion": 4, "suggestion_and_solution": 3, "total": 11 }
0.272727
0.090909
0.090909
0.727273
0.272727
0.090909
0.363636
0.272727
11
1
1
1
1
1
1
1
1
2.181818
2.118688
0.06313
On the Variance of the Adaptive Learning Rate and Beyond
The learning rate warmup heuristic achieves remarkable success in stabilizing training, accelerating convergence and improving generalization for adaptive stochastic optimization algorithms like RMSprop and Adam. Pursuing the theory behind warmup, we identify a problem of the adaptive learning rate -- its variance is problematically large in the early stage, and presume warmup works as a variance reduction technique. We provide both empirical and theoretical evidence to verify our hypothesis. We further propose Rectified Adam (RAdam), a novel variant of Adam, by introducing a term to rectify the variance of the adaptive learning rate. Experimental results on image classification, language modeling, and neural machine translation verify our intuition and demonstrate the efficacy and robustness of RAdam.
Authors propose a way to rectify the variance of the adaptive learning rate (RAdam) and apply the optimizer to applications in image classification, language modeling and neural machine translation. The experiments demonstrate not only a strong results over baseline Adam with warmup learning rate but the robustness of the optimizer. The authors additionally demonstrate the theoretical justification behind their optimizer, however I am not very qualified to make the judgement on the theory. Overall judging from the authors description of approach and experimental results, I recommend acceptance.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOn the Variance of the Adaptive Learning Rate and Beyond\nThe learning rate warmup heuristic achieves remarkable success in stabilizing training, accelerating convergence and improving generalization for adaptive stochastic optimization algorithms like RMSprop and Adam. Pursuing the theory behind warmup, we identify a problem of the adaptive learning rate -- its variance is problematically large in the early stage, and presume warmup works as a variance reduction technique. We provide both empirical and theoretical evidence to verify our hypothesis. We further propose Rectified Adam (RAdam), a novel variant of Adam, by introducing a term to rectify the variance of the adaptive learning rate. Experimental results on image classification, language modeling, and neural machine translation verify our intuition and demonstrate the efficacy and robustness of RAdam.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nAuthors propose a way to rectify the variance of the adaptive learning rate (RAdam) and apply the optimizer to applications in image classification, language modeling and neural machine translation. The experiments demonstrate not only a strong results over baseline Adam with warmup learning rate but the robustness of the optimizer. The authors additionally demonstrate the theoretical justification behind their optimizer, however I am not very qualified to make the judgement on the theory. Overall judging from the authors description of approach and experimental results, I recommend acceptance.", "role": "assistant" } ]
[ "Authors propose a way to rectify the variance of the adaptive learning rate (RAdam) and apply the optimizer to applications in image classification, language modeling and neural machine translation.", "The experiments demonstrate not only a strong results over baseline Adam with warmup learning rate but the robustness of the optimizer.", "The authors additionally demonstrate the theoretical justification behind their optimizer, however I am not very qualified to make the judgement on the theory.", "Overall judging from the authors description of approach and experimental results, I recommend acceptance." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 4, "praise": 1, "presentation_and_reporting": 0, "results_and_discussion": 3, "suggestion_and_solution": 1, "total": 4 }
0.25
0
0
1
0.25
0
0.75
0.25
4
1
0
0
1
1
0
1
1
2.5
1.221623
1.278377
Extracting and Leveraging Feature Interaction Interpretations
Recommendation is a prevalent application of machine learning that affects many users; therefore, it is crucial for recommender models to be accurate and interpretable. In this work, we propose a method to both interpret and augment the predictions of black-box recommender systems. In particular, we propose to extract feature interaction interpretations from a source recommender model and explicitly encode these interactions in a target recommender model, where both source and target models are black-boxes. By not assuming the structure of the recommender system, our approach can be used in general settings. In our experiments, we focus on a prominent use of machine learning recommendation: ad-click prediction. We found that our interaction interpretations are both informative and predictive, i.e., significantly outperforming existing recommender models. What*s more, the same approach to interpreting interactions can provide new insights into domains even beyond recommendation.
This paper is focused on identifying/discovering feature interactions in blackbox models (with a focus on recommendations). Specifically the technique works by first corrupting datapoints and then using this *local dataset* to find interacting features via lasso-regularized multi-level perceptron approach (NID from Tsang et al). Using an expanded feature space and repeated calls to the above steps this is then used to find the most commonly occurring patterns. The empirical section is quite interesting, with some nice mix of qualitative and quantitative results. The quantitative results in particular are quite intriguing -- across recommendation and non-recommendation tasks. Overall the paper makes for an interesting read. On the whole I lean slightly positive -- largely due to the empirical section and the quantitative gains observed by adding the feature interactions as features to the model. That said I had quite a few concerns about the work: - In general the exposition was quite lacking in the methodological section. I had to re-read the paper a few times to be able to make out to fully understand the underlying methodology. I would make the overall pipeline very clear and explain out the MADEX pipeline very clearly. The feature expansion section (4.3) in particular was not very clear -- as to how it fit in with the rest of the pipeline and something that could do with more work. Nomenclature like source and target model are used somewhat arbitrarily and again can be clarified. - Another concern I had with the approach is how this would work if the underlying features itself were not known. Say for example you had a text understanding model -- the tokenization and vocabulary/OOV etc .. may all be components of the blackbox. Likewise for vision models, what features are being used may vary. By assuming (effectively perfect) information about the features used, this is no longer really a *blackbox* model. I would have wanted to see some deeper discussion on this topic. I also had some concerns about the scalability of dense features and their bucketization in this approach. I*m not entirely convinced the method would work as well and efficiently in such feature spaces. I would have wanted to see some more empirical evidence and understanding on this topic. If possible it would have also been great to understand higher-order interactions and the ability of the model to find them empirically. - One thing I wasn*t entirely certain of was the amount of novelty in the work since it leverages existing works like NID (Tsang et al) and LIME (Ribeiro et al) for some of the key aspects of the method. It would be nice to see some discussion on this front as well. - I also would have liked to see significance testing being performed in general across the experiments. - Another smaller concern stemmed from a lack of mention of the stopping criterion of the model. In particular I*m wondering how models were stopped further training / best checkpoint was picked. This would be good to elaborate on to make sure that the models were all compared fairly from a computational perspective. - I also had some concerns with the scalability of the approach in large features spaces but am willing to give the authors the benefit of doubt on this one based on the 3 hour number they quoted from their experiments (which indicates something that is not prohibitively slow)
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nExtracting and Leveraging Feature Interaction Interpretations\nRecommendation is a prevalent application of machine learning that affects many users; therefore, it is crucial for recommender models to be accurate and interpretable. In this work, we propose a method to both interpret and augment the predictions of black-box recommender systems. In particular, we propose to extract feature interaction interpretations from a source recommender model and explicitly encode these interactions in a target recommender model, where both source and target models are black-boxes. By not assuming the structure of the recommender system, our approach can be used in general settings. In our experiments, we focus on a prominent use of machine learning recommendation: ad-click prediction. We found that our interaction interpretations are both informative and predictive, i.e., significantly outperforming existing recommender models. What*s more, the same approach to interpreting interactions can provide new insights into domains even beyond recommendation.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper is focused on identifying/discovering feature interactions in blackbox models (with a focus on recommendations). Specifically the technique works by first corrupting datapoints and then using this *local dataset* to find interacting features via lasso-regularized multi-level perceptron approach (NID from Tsang et al). Using an expanded feature space and repeated calls to the above steps this is then used to find the most commonly occurring patterns. The empirical section is quite interesting, with some nice mix of qualitative and quantitative results. The quantitative results in particular are quite intriguing -- across recommendation and non-recommendation tasks. Overall the paper makes for an interesting read. On the whole I lean slightly positive -- largely due to the empirical section and the quantitative gains observed by adding the feature interactions as features to the model. 
That said I had quite a few concerns about the work: - In general the exposition was quite lacking in the methodological section. I had to re-read the paper a few times to be able to make out to fully understand the underlying methodology. I would make the overall pipeline very clear and explain out the MADEX pipeline very clearly. The feature expansion section (4.3) in particular was not very clear -- as to how it fit in with the rest of the pipeline and something that could do with more work. Nomenclature like source and target model are used somewhat arbitrarily and again can be clarified. - Another concern I had with the approach is how this would work if the underlying features itself were not known. Say for example you had a text understanding model -- the tokenization and vocabulary/OOV etc .. may all be components of the blackbox. Likewise for vision models, what features are being used may vary. By assuming (effectively perfect) information about the features used, this is no longer really a *blackbox* model. I would have wanted to see some deeper discussion on this topic. I also had some concerns about the scalability of dense features and their bucketization in this approach. I*m not entirely convinced the method would work as well and efficiently in such feature spaces. I would have wanted to see some more empirical evidence and understanding on this topic. If possible it would have also been great to understand higher-order interactions and the ability of the model to find them empirically. - One thing I wasn*t entirely certain of was the amount of novelty in the work since it leverages existing works like NID (Tsang et al) and LIME (Ribeiro et al) for some of the key aspects of the method. It would be nice to see some discussion on this front as well. - I also would have liked to see significance testing being performed in general across the experiments. - Another smaller concern stemmed from a lack of mention of the stopping criterion of the model. In particular I*m wondering how models were stopped further training / best checkpoint was picked. This would be good to elaborate on to make sure that the models were all compared fairly from a computational perspective. - I also had some concerns with the scalability of the approach in large features spaces but am willing to give the authors the benefit of doubt on this one based on the 3 hour number they quoted from their experiments (which indicates something that is not prohibitively slow)", "role": "assistant" } ]
[ "This paper is focused on identifying/discovering feature interactions in blackbox models (with a focus on recommendations).", "Specifically the technique works by first corrupting datapoints and then using this *local dataset* to find interacting features via lasso-regularized multi-level perceptron approach (NID from Tsang et al).", "Using an expanded feature space and repeated calls to the above steps this is then used to find the most commonly occurring patterns.", "The empirical section is quite interesting, with some nice mix of qualitative and quantitative results.", "The quantitative results in particular are quite intriguing -- across recommendation and non-recommendation tasks.", "Overall the paper makes for an interesting read.", "On the whole I lean slightly positive -- largely due to the empirical section and the quantitative gains observed by adding the feature interactions as features to the model.", "That said I had quite a few concerns about the work: - In general the exposition was quite lacking in the methodological section.", "I had to re-read the paper a few times to be able to make out to fully understand the underlying methodology.", "I would make the overall pipeline very clear and explain out the MADEX pipeline very clearly.", "The feature expansion section (4.3) in particular was not very clear -- as to how it fit in with the rest of the pipeline and something that could do with more work.", "Nomenclature like source and target model are used somewhat arbitrarily and again can be clarified.", "- Another concern I had with the approach is how this would work if the underlying features itself were not known.", "Say for example you had a text understanding model -- the tokenization and vocabulary/OOV etc .. may all be components of the blackbox.", "Likewise for vision models, what features are being used may vary.", "By assuming (effectively perfect) information about the features used, this is no longer really a *blackbox* model.", "I would have wanted to see some deeper discussion on this topic.", "I also had some concerns about the scalability of dense features and their bucketization in this approach.", "I*m not entirely convinced the method would work as well and efficiently in such feature spaces.", "I would have wanted to see some more empirical evidence and understanding on this topic.", "If possible it would have also been great to understand higher-order interactions and the ability of the model to find them empirically.", "- One thing I wasn*t entirely certain of was the amount of novelty in the work since it leverages existing works like NID (Tsang et al) and LIME (Ribeiro et al) for some of the key aspects of the method.", "It would be nice to see some discussion on this front as well.", "- I also would have liked to see significance testing being performed in general across the experiments.", "- Another smaller concern stemmed from a lack of mention of the stopping criterion of the model.", "In particular I*m wondering how models were stopped further training / best checkpoint was picked.", "This would be good to elaborate on to make sure that the models were all compared fairly from a computational perspective.", "- I also had some concerns with the scalability of the approach in large features spaces but am willing to give the authors the benefit of doubt on this one based on the 3 hour number they quoted from their experiments (which indicates something that is not prohibitively slow)" ]
{ "criticism": 7, "example": 0, "importance_and_relevance": 6, "materials_and_methods": 22, "praise": 3, "presentation_and_reporting": 7, "results_and_discussion": 7, "suggestion_and_solution": 10, "total": 28 }
0.25
0
0.214286
0.785714
0.107143
0.25
0.25
0.357143
28
1
0
1
1
1
1
1
1
2.214286
-1.336761
3.551047
Understanding the Limitations of Variational Mutual Information Estimators
Variational approaches based on neural networks are showing promise for estimating mutual information (MI) between high dimensional variables. However, they can be difficult to use in practice due to poorly understood bias/variance tradeoffs. We theoretically show that, under some conditions, estimators such as MINE exhibit variance that could grow exponentially with the true amount of underlying MI. We also empirically demonstrate that existing estimators fail to satisfy basic self-consistency properties of MI, such as data processing and additivity under independence. Based on a unified perspective of variational approaches, we develop a new estimator that focuses on variance reduction. Empirical results on standard benchmark tasks demonstrate that our proposed estimator exhibits improved bias-variance trade-offs on standard benchmark tasks.
This work summarizes the existing methods of mutual information estimation in a variational inference framework and describes their limitations in terms of bias-variance tradeoffs. Further, the authors care about self-consistency, namely independence, data processing, and additivity, which are properties of both entropy and differential entropy. In addition, density ratio clipping is proposed to lessen the high-variance problem in estimating a partition function. In regard to the clipped density ratios, it is not clear why both variance and bias become small when S is close to 1 and using small in Theorems 3 and 4. In Fig. 2 on the benchmark experiment, showed lower variance than . A comparison with MINE and BA is also expected on the benchmark experiment. As for the self-consistency tests on images, especially for the ‘data-processing’ property, it would be better to apply affine transformations such as rotation, translation, and scaling, rather than row deletion. In Section 3.1, is not defined.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nUnderstanding the Limitations of Variational Mutual Information Estimators\nVariational approaches based on neural networks are showing promise for estimating mutual information (MI) between high dimensional variables. However, they can be difficult to use in practice due to poorly understood bias/variance tradeoffs. We theoretically show that, under some conditions, estimators such as MINE exhibit variance that could grow exponentially with the true amount of underlying MI. We also empirically demonstrate that existing estimators fail to satisfy basic self-consistency properties of MI, such as data processing and additivity under independence. Based on a unified perspective of variational approaches, we develop a new estimator that focuses on variance reduction. Empirical results on standard benchmark tasks demonstrate that our proposed estimator exhibits improved bias-variance trade-offs on standard benchmark tasks.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis work summarizes the existing methods of mutual information estimation in a variational inference framework and describes the limitations in terms of bias-variance tradeoffs. Further, the authors care about the self-consistency, namely, independence, data processing, and additivity, which are properties of both entropy and differential entropy. Further, density ratio clipping is proposed to lessen a high-variance problem in estimating a partition function. In regard to the clipped density rations, it is not clear why both variance and bias become small when S is close to 1 and using small in Theorem 3 and 4. In Fig. 2 on the benchmark experiment, showed lower variance than . The comparison with MINE and BA is also expected on the benchmark experiment. As for the self-consistency tests on images, especially, for the ‘data-processing’ property, it would better to apply affine transformation such as rotation, translation, and scaling, rather than rows deletion. In Section 3.1, is not defined.", "role": "assistant" } ]
[ "This work summarizes the existing methods of mutual information estimation in a variational inference framework and describes the limitations in terms of bias-variance tradeoffs.", "Further, the authors care about the self-consistency, namely, independence, data processing, and additivity, which are properties of both entropy and differential entropy.", "Further, density ratio clipping is proposed to lessen a high-variance problem in estimating a partition function.", "In regard to the clipped density rations, it is not clear why both variance and bias become small when S is close to 1 and using small in Theorem 3 and 4.", "In Fig.", "2 on the benchmark experiment, showed lower variance than .", "The comparison with MINE and BA is also expected on the benchmark experiment.", "As for the self-consistency tests on images, especially, for the ‘data-processing’ property, it would better to apply affine transformation such as rotation, translation, and scaling, rather than rows deletion.", "In Section 3.1, is not defined." ]
{ "criticism": 2, "example": 1, "importance_and_relevance": 0, "materials_and_methods": 7, "praise": 0, "presentation_and_reporting": 3, "results_and_discussion": 6, "suggestion_and_solution": 1, "total": 9 }
0.222222
0.111111
0
0.777778
0
0.333333
0.666667
0.111111
9
1
1
0
1
0
1
1
1
2.222222
1.969703
0.252519
Understanding the Limitations of Variational Mutual Information Estimators
Variational approaches based on neural networks are showing promise for estimating mutual information (MI) between high dimensional variables. However, they can be difficult to use in practice due to poorly understood bias/variance tradeoffs. We theoretically show that, under some conditions, estimators such as MINE exhibit variance that could grow exponentially with the true amount of underlying MI. We also empirically demonstrate that existing estimators fail to satisfy basic self-consistency properties of MI, such as data processing and additivity under independence. Based on a unified perspective of variational approaches, we develop a new estimator that focuses on variance reduction. Empirical results on standard benchmark tasks demonstrate that our proposed estimator exhibits improved bias-variance trade-offs on standard benchmark tasks.
This paper relates most existing variational mutual information estimators to density ratio estimation and uses this to show high variance of certain categories of estimators; this motivates a new SMILE estimator that uses clipping to explicitly control the variance of the estimator (while potentially increasing bias). The paper also conducts some experiments on how well mutual information estimators satisfy some of the core properties of mutual information in practice. The core result of Theorem 2, although its proof is very simple once the problem has been phrased appropriately, is enlightening. The self-consistency tests are also sensible and important, although it would be nice to understand better which of these properties are truly important for various uses of mutual information in the literature. The proposed SMILE estimator is natural. But: - As , the variance bound of Theorem 4 naturally becomes infinite (as we'd expect). But we'd hope that the bias would become 0, when in fact Theorem 3 only shows in this case that the bias is at most S. Of course it should be true that ; can we fix Theorem 3 so that it recovers this fact? Then the biggest question not really addressed by this paper is: how do you set ? - If we know the (approximate) value of , then we can presumably trade off between the bias and variance terms of Theorems 3 and 4 to find the most effective value for . (Indeed, it might be nice to write a corollary showing that if you set to something depending on and some upper bound on , you achieve some rate in L2 risk.) - Is it possible to relate the value of to, say, the suboptimality in cross-entropy loss of the classifier used to obtain (when getting that way)? - Is it possible to get a probabilistic upper bound on , either with some other technique or perhaps a simple Hoeffding/Bernstein-type bound on the SMILE estimator with some initial value of , and then use that to choose the rate for ? - Is there some other practical scheme (even if it doesn't admit bounds) to set to control the bias, given that it depends on ? Smaller points and minor suggestions: - In the paragraph following Theorem 2, it is first of all strange to say that "alternative procedures to obtaining the density ratio estimates will not alleviate the high variance issue" when indeed your proposal does just that. The last sentence of this paragraph clarifies the issue, but still makes sound "bad," when in fact it's a trade-off; this could be rephrased more clearly. - In Theorem 3: I think writing would be clearer than as you have it. - In the multivariate Gaussian experiments, Section 6.1: why do you effectively warm-start the estimator for higher mutual informations? Is it just so you can put things in one plot per method? I think it would be more natural to train each different problem from scratch; you can still use the same plot format if you like (with additional vertical lines between problems or something), but it's not clear whether the behavior of the estimator is affected by this choice. - Why are the flow-based results not shown in Figure 2 / Table 2? - In Section 6.2 when setting up the three evaluation settings, it would be good to have a forward reference to Figure 3 (since these settings are perhaps easier to understand visually).
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nUnderstanding the Limitations of Variational Mutual Information Estimators\nVariational approaches based on neural networks are showing promise for estimating mutual information (MI) between high dimensional variables. However, they can be difficult to use in practice due to poorly understood bias/variance tradeoffs. We theoretically show that, under some conditions, estimators such as MINE exhibit variance that could grow exponentially with the true amount of underlying MI. We also empirically demonstrate that existing estimators fail to satisfy basic self-consistency properties of MI, such as data processing and additivity under independence. Based on a unified perspective of variational approaches, we develop a new estimator that focuses on variance reduction. Empirical results on standard benchmark tasks demonstrate that our proposed estimator exhibits improved bias-variance trade-offs on standard benchmark tasks.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper relates most existing variational mutual information estimators to density ratio estimation and uses this to show high variance of certain categories of estimators; this motivates a new SMILE estimator that uses clipping to explicitly control the variance of the estimator (while potentially increasing bias). The paper also conducts some experiments on how well mutual information estimators satisfy some of the core properties of mutual information in practice. The core result of Theorem 2, although is proof is very simple once the problem has been phrased appropriately, is enlightening. The self-consistency tests are also sensible and important, although it would be nice to understand better which of these properties are truly important for various uses of mutual information in the literature. The proposed SMILE estimator is natural. But: - As , the variance bound of Theorem 4 naturally becomes infinite (as we*d expect). 
But we*d hope that the bias would become 0, when in fact Theorem 3 only shows in this case that the bias is at most S. Of course it should be true that ; can we fix Theorem 3 so that it recovers this fact? Then the biggest question not really addressed by this paper is: how do you set ? - If we know the (approximate) value of , then we can presumably trade off between the bias and variance terms of Theorem 3 and 4 to find the most effective value for . (Indeed, it might be nice to write a corollary showing that if you set to something depending on and some upper bound on , you achieve some rate in L2 risk.) - Is it possible to relate the value of to, say, the suboptimality in cross-entropy loss of the classifier used to obtain (when getting that way)? - Is it possible to get a probabilistic upper bound on , either with some other technique or perhaps a simple Hoeffding/Bernstein-type bound on the SMILE estimator with some initial value of , and then use that to choose the rate for ? - Is there some other practical scheme (even if it doesn*t admit bounds) to set to control the bias, given that it depends on ? Smaller points and minor suggestions: - In the paragraph following Theorem 2, it is first of all strange to say that *alternative procedures to obtaining the density ratio estimates will not alleviate the high variance issue* when indeed your proposal does just that. The last sentence of this paragraph clarifies the issue, but still makes sound *bad,* when in fact it*s a trade-off; this could be rephrased more clearly. - In Theorem 3: I think writing would be clearer than as you have it. - In the multivariate Gaussian experiments, Section 6.1: why do you effectively warm-start the estimator for higher mutual informations? Is it just so you can put things in one plot per method? I think it would be more natural to train each different problem from scratch; you can still use the same plot format if you like (with additional vertical lines between problems or something), but it*s not clear whether the behavior of the estimator is affected by this choice. - Why are the flow-based results not shown in Figure 2 / Table 2? - In Section 6.2 when setting up the three evaluation settings, it would be good to have a forward reference to Figure 3 (since these settings are perhaps easier to understand visually).", "role": "assistant" } ]
[ "This paper relates most existing variational mutual information estimators to density ratio estimation and uses this to show high variance of certain categories of estimators; this motivates a new SMILE estimator that uses clipping to explicitly control the variance of the estimator (while potentially increasing bias).", "The paper also conducts some experiments on how well mutual information estimators satisfy some of the core properties of mutual information in practice.", "The core result of Theorem 2, although is proof is very simple once the problem has been phrased appropriately, is enlightening.", "The self-consistency tests are also sensible and important, although it would be nice to understand better which of these properties are truly important for various uses of mutual information in the literature.", "The proposed SMILE estimator is natural.", "But: - As , the variance bound of Theorem 4 naturally becomes infinite (as we*d expect).", "But we*d hope that the bias would become 0, when in fact Theorem 3 only shows in this case that the bias is at most S. Of course it should be true that ; can we fix Theorem 3 so that it recovers this fact?", "Then the biggest question not really addressed by this paper is: how do you set ?", "- If we know the (approximate) value of , then we can presumably trade off between the bias and variance terms of Theorem 3 and 4 to find the most effective value for .", "(Indeed, it might be nice to write a corollary showing that if you set to something depending on and some upper bound on , you achieve some rate in L2 risk.)", "- Is it possible to relate the value of to, say, the suboptimality in cross-entropy loss of the classifier used to obtain (when getting that way)?", "- Is it possible to get a probabilistic upper bound on , either with some other technique or perhaps a simple Hoeffding/Bernstein-type bound on the SMILE estimator with some initial value of , and then use that to choose the rate for ?", "- Is there some other practical scheme (even if it doesn*t admit bounds) to set to control the bias, given that it depends on ?", "Smaller points and minor suggestions: - In the paragraph following Theorem 2, it is first of all strange to say that *alternative procedures to obtaining the density ratio estimates will not alleviate the high variance issue* when indeed your proposal does just that.", "The last sentence of this paragraph clarifies the issue, but still makes sound *bad,* when in fact it*s a trade-off; this could be rephrased more clearly.", "- In Theorem 3: I think writing would be clearer than as you have it.", "- In the multivariate Gaussian experiments, Section 6.1: why do you effectively warm-start the estimator for higher mutual informations?", "Is it just so you can put things in one plot per method?", "I think it would be more natural to train each different problem from scratch; you can still use the same plot format if you like (with additional vertical lines between problems or something), but it*s not clear whether the behavior of the estimator is affected by this choice.", "- Why are the flow-based results not shown in Figure 2 / Table 2?", "- In Section 6.2 when setting up the three evaluation settings, it would be good to have a forward reference to Figure 3 (since these settings are perhaps easier to understand visually)." ]
{ "criticism": 2, "example": 3, "importance_and_relevance": 2, "materials_and_methods": 11, "praise": 1, "presentation_and_reporting": 8, "results_and_discussion": 12, "suggestion_and_solution": 8, "total": 21 }
0.095238
0.142857
0.095238
0.52381
0.047619
0.380952
0.571429
0.380952
21
1
1
1
1
1
1
1
1
2.238095
1.22802
1.010076
Language GANs Falling Short
Traditional natural language generation (NLG) models are trained using maximum likelihood estimation (MLE) which differs from the sample generation inference procedure. During training the ground truth tokens are passed to the model, however, during inference, the model instead reads its previously generated samples - a phenomenon coined exposure bias. Exposure bias was hypothesized to be a root cause of poor sample quality and thus many generative adversarial networks (GANs) were proposed as a remedy since they have identical training and inference. However, many of the ensuing GAN variants validated sample quality improvements but ignored loss of sample diversity. This work reiterates the fallacy of quality-only metrics and clearly demonstrate that the well-established technique of reducing softmax temperature can outperform GANs on a quality-only metric. Further, we establish a definitive quality-diversity evaluation procedure using temperature tuning over local and global sample metrics. Under this, we find that MLE models consistently outperform the proposed GAN variants over the whole quality-diversity space. Specifically, we find that 1) exposure bias appears to be less of an issue than the complications arising from non-differentiable, sequential GAN training; 2) MLE trained models provide a better quality/diversity trade-off compared to their GAN counterparts, all while being easier to train, easier to cross-validate, and less computationally expensive.
Recently, many language GAN papers have been published to overcome the so-called exposure bias, and they demonstrated improvements in natural language generation in terms of sample quality. Some works propose to assess generation in terms of diversity; however, quality and diversity are two conflicting measures that are hard to meet simultaneously. This paper is a groundbreaking work that proposes a receiver operating curve, or Pareto optimality, for quality and diversity measures, and shows through comprehensive experiments that simple temperature sweeping in MLE generates better quality-diversity curves than all language GAN models. It points out a good target that language GANs should aim at. For the experiments on long-text generation using EMNLP News 2017, it is not clear how the data is partitioned into training, validation, and test sets to get the results in Figure 4(a); moreover, the results for LeakGAN, RankGAN, SeqGAN, and MaliGAN seem to be copied from other papers, but how the dataset is partitioned is again unclear (for example in SeqGAN's paper), and most likely the data is partitioned in a different way, so the results are not comparable. The authors should run the code and get the results on their own. An important RL-free language GAN paper is missing: Zhongliang Li, Tian Xia, Xingyu Lou, Kaihe Xu, Shaojun Wang, Jing Xiao: Adversarial Discrete Sequence Generation without Explicit Neural Networks as Discriminators. AISTATS 2019: 3089-3098. That paper directly and adversarially trains a language model without MLE pre-training and obtains good results; it would be better to compare against its results. Typo: page 1, second line from the bottom, should read "as a computationally", not "an".
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLanguage GANs Falling Short\nTraditional natural language generation (NLG) models are trained using maximum likelihood estimation (MLE) which differs from the sample generation inference procedure. During training the ground truth tokens are passed to the model, however, during inference, the model instead reads its previously generated samples - a phenomenon coined exposure bias. Exposure bias was hypothesized to be a root cause of poor sample quality and thus many generative adversarial networks (GANs) were proposed as a remedy since they have identical training and inference. However, many of the ensuing GAN variants validated sample quality improvements but ignored loss of sample diversity. This work reiterates the fallacy of quality-only metrics and clearly demonstrate that the well-established technique of reducing softmax temperature can outperform GANs on a quality-only metric. Further, we establish a definitive quality-diversity evaluation procedure using temperature tuning over local and global sample metrics. Under this, we find that MLE models consistently outperform the proposed GAN variants over the whole quality-diversity space. Specifically, we find that 1) exposure bias appears to be less of an issue than the complications arising from non-differentiable, sequential GAN training; 2) MLE trained models provide a better quality/diversity trade-off compared to their GAN counterparts, all while being easier to train, easier to cross-validate, and less computationally expensive.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nRecently many language GAN papers have been published to overcome the so called exposure bias, and demonstrated improvements in natural language generation in terms of sample quality, some works propose to assess the generation in terms of diversity, however, quality and diversity are two conflicting measures that are hard to meet. 
This paper is a groundbreaking work that proposes receiver operating curve or Pareto optimality for quality and diversity measures, and shows that simple temperature sweeping in MLE generates the best quality-diversity curves than all language GAN models through comprehensive experiments. It points out a good target that language GANs should aims at. For the experiments on long-text generation using EMNLP news 2017, it is not clear how the data is partitioned as training data, validation data and test data to get the results in Figures 4(a), moreover for the results for LeakGAN, RankGAN SeqGAN, MaliGAN, it seems that they are copied from other papers, but again how the data set is partitioned is not clear for example in SeqGAN*s paper, and most likely, the data is partitioned in a different way, so the results are not comparable. The authors should run the code and get the results on it own. An important RL-free language GAN paper is missing, Zhongliang Li, Tian Xia, Xingyu Lou, Kaihe Xu, Shaojun Wang, Jing Xiao: Adversarial Discrete Sequence Generation without Explicit NeuralNetworks as Discriminators. AISTATS 2019: 3089-3098. This paper directly and adversarially train a language model without MLE pre-training and obtains good results, it is better to compare the results. Typo: page 1, the second line from bottom, as a computationally, not an", "role": "assistant" } ]
[ "Recently many language GAN papers have been published to overcome the so called exposure bias, and demonstrated improvements in natural language generation in terms of sample quality, some works propose to assess the generation in terms of diversity, however, quality and diversity are two conflicting measures that are hard to meet.", "This paper is a groundbreaking work that proposes receiver operating curve or Pareto optimality for quality and diversity measures, and shows that simple temperature sweeping in MLE generates the best quality-diversity curves than all language GAN models through comprehensive experiments.", "It points out a good target that language GANs should aims at.", "For the experiments on long-text generation using EMNLP news 2017, it is not clear how the data is partitioned as training data, validation data and test data to get the results in Figures 4(a), moreover for the results for LeakGAN, RankGAN SeqGAN, MaliGAN, it seems that they are copied from other papers, but again how the data set is partitioned is not clear for example in SeqGAN*s paper, and most likely, the data is partitioned in a different way, so the results are not comparable.", "The authors should run the code and get the results on it own.", "An important RL-free language GAN paper is missing, Zhongliang Li, Tian Xia, Xingyu Lou, Kaihe Xu, Shaojun Wang, Jing Xiao: Adversarial Discrete Sequence Generation without Explicit NeuralNetworks as Discriminators.", "AISTATS 2019: 3089-3098.", "This paper directly and adversarially train a language model without MLE pre-training and obtains good results, it is better to compare the results.", "Typo: page 1, the second line from bottom, as a computationally, not an" ]
{ "criticism": 3, "example": 2, "importance_and_relevance": 2, "materials_and_methods": 4, "praise": 3, "presentation_and_reporting": 4, "results_and_discussion": 3, "suggestion_and_solution": 2, "total": 9 }
0.333333
0.222222
0.222222
0.444444
0.333333
0.444444
0.333333
0.222222
9
1
1
1
1
1
1
1
1
2.555556
2.303037
0.252519
Stochastic Conditional Generative Networks with Basis Decomposition
While generative adversarial networks (GANs) have revolutionized machine learning, a number of open questions remain to fully understand them and exploit their power. One of these questions is how to efficiently achieve proper diversity and sampling of the multi-mode data space. To address this, we introduce BasisGAN, a stochastic conditional multi-mode image generator. By exploiting the observation that a convolutional filter can be well approximated as a linear combination of a small set of basis elements, we learn a plug-and-played basis generator to stochastically generate basis elements, with just a few hundred of parameters, to fully embed stochasticity into convolutional filters. By sampling basis elements instead of filters, we dramatically reduce the cost of modeling the parameter space with no sacrifice on either image diversity or fidelity. To illustrate this proposed plug-and-play framework, we construct variants of BasisGAN based on state-of-the-art conditional image generation networks, and train the networks by simply plugging in a basis generator, without additional auxiliary components, hyperparameters, or training objectives. The experimental success is complemented with theoretical results indicating how the perturbations introduced by the proposed sampling of basis elements can propagate to the appearance of generated images.
The paper proposes a model for stochasticity in conditional image generation, building upon the previously available (DCFNet) results on composing convolutional filters out of the elements of a filter basis. The idea of introducing stochasticity through convolutional filters into conditional generative models seems to be novel, and the reviewer thinks it could be of interest for the community. The following remarks could be given to improve the presentation: 1) Theorem 1 is an existence theorem, so it does not give the procedure for construction of the basis. Does the construction procedure for the basis, described under the theorem formulation, meet the conditions of Theorem 1? 2) The Theorem 1 formulation states that “If there exists a set of deterministic linear transforms”. Should the linear independence be stated as well as one of the theorem conditions (so that the space dimensionality would indeed be K)? 3) The reviewer finds the structure of Section 4 confusing: it starts from the problem statement (first paragraph ‘Using the method above, filters of each stochastic layer…’), then provides the description of the approach, and only then outlines Theorem 1. Stating Theorem 1 first and then defining the method for generation of the basis (how exactly could we get to the basis?) could improve the readability of the paper. Essentially, the question is: is there any way to emphasise the procedure for filter generation and inform the reader in which circumstances these filters would be a basis (e.g., why it wouldn’t be prone to an analogue of mode collapse when the filters do not effectively have enough diversity for linear independence)? *** In addition to this list, it might be useful to provide some evidence on whether there is any inherent mechanism to regulate the diversity of filters, and therefore of samples (so as to change the variability of the conditional samples from the model, with an impact analogous to that of temperature in Glow (Kingma et al., 2018)). If there is one, further experimental evidence showing the impact on the diversity of filters would contribute to the improvement of the paper.
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nStochastic Conditional Generative Networks with Basis Decomposition\nWhile generative adversarial networks (GANs) have revolutionized machine learning, a number of open questions remain to fully understand them and exploit their power. One of these questions is how to efficiently achieve proper diversity and sampling of the multi-mode data space. To address this, we introduce BasisGAN, a stochastic conditional multi-mode image generator. By exploiting the observation that a convolutional filter can be well approximated as a linear combination of a small set of basis elements, we learn a plug-and-played basis generator to stochastically generate basis elements, with just a few hundred of parameters, to fully embed stochasticity into convolutional filters. By sampling basis elements instead of filters, we dramatically reduce the cost of modeling the parameter space with no sacrifice on either image diversity or fidelity. To illustrate this proposed plug-and-play framework, we construct variants of BasisGAN based on state-of-the-art conditional image generation networks, and train the networks by simply plugging in a basis generator, without additional auxiliary components, hyperparameters, or training objectives. The experimental success is complemented with theoretical results indicating how the perturbations introduced by the proposed sampling of basis elements can propagate to the appearance of generated images.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes a model for stochasticity for conditional image generation, building upon the previously available (DCFNet) results on composition of convolutional filters out of the elements of the filter basis. The idea of introducing stochasticity by convolutional filters into the conditional generative models seems to be novel and the reviewer thinks it could be of interest for the community. 
The following remarks could be given to improve the presentation: 1) Theorem 1 is an existence theorem, so it does not give the procedure for construction of the basis. Does the construction procedure for the basis, described under the theorem formulation, meet the conditions of Theorem 1? 2) The Theorem 1 formulation states that “ If there exists a set of deterministic linear transforms”. Should the linear independence be stated as well as one of the theorem conditions ( so that the space dimensionality would indeed be K)? 3) The reviewer finds the structure of Section 4 confusing: it starts from the problem statement (first paragraph *Using the method above, filters of each stochastic layer…’), then provides the description of the approach and only then outlines Theorem 1. It might be that stating Theorem 1 and then defining the method for generation of the basis (how exactly could we get to the basis? ) could improve readability of the paper. Essentially, the question is: is there any way to emphasise the procedure for filter generation and inform the reader in which circumstances these filters would be the basis (e.g. why it wouldn*t be prone to the analogue of mode collapse when the filters do not effectively have enough diversity for linear independence)? *** In addition to this list, it might be useful to provide some evidence on whether there is any inherent mechanism to regulate the diversity of filters and therefore of samples (so that to change the variability of the conditional samples from the model with the impact analogous to the one of temperature in Glow (Kingma et al, 2018)). If there is one, further experimental evidence, which shows the impact on diversity of filters, would contribute to improvement of the paper.", "role": "assistant" } ]
[ "The paper proposes a model for stochasticity for conditional image generation, building upon the previously available (DCFNet) results on composition of convolutional filters out of the elements of the filter basis.", "The idea of introducing stochasticity by convolutional filters into the conditional generative models seems to be novel and the reviewer thinks it could be of interest for the community.", "The following remarks could be given to improve the presentation: 1) Theorem 1 is an existence theorem, so it does not give the procedure for construction of the basis.", "Does the construction procedure for the basis, described under the theorem formulation, meet the conditions of Theorem 1?", "2) The Theorem 1 formulation states that “ If there exists a set of deterministic linear transforms”.", "Should the linear independence be stated as well as one of the theorem conditions ( so that the space dimensionality would indeed be K)?", "3) The reviewer finds the structure of Section 4 confusing: it starts from the problem statement (first paragraph *Using the method above, filters of each stochastic layer…’), then provides the description of the approach and only then outlines Theorem 1.", "It might be that stating Theorem 1 and then defining the method for generation of the basis (how exactly could we get to the basis? )", "could improve readability of the paper.", "Essentially, the question is: is there any way to emphasise the procedure for filter generation and inform the reader in which circumstances these filters would be the basis (e.g.", "why it wouldn*t be prone to the analogue of mode collapse when the filters do not effectively have enough diversity for linear independence)?", "*** In addition to this list, it might be useful to provide some evidence on whether there is any inherent mechanism to regulate the diversity of filters and therefore of samples (so that to change the variability of the conditional samples from the model with the impact analogous to the one of temperature in Glow (Kingma et al, 2018)).", "If there is one, further experimental evidence, which shows the impact on diversity of filters, would contribute to improvement of the paper." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 9, "praise": 1, "presentation_and_reporting": 6, "results_and_discussion": 5, "suggestion_and_solution": 6, "total": 13 }
0.153846
0
0.153846
0.692308
0.076923
0.461538
0.384615
0.461538
13
1
0
1
1
1
1
1
1
2.384615
2.384615
0
Stochastic Conditional Generative Networks with Basis Decomposition
While generative adversarial networks (GANs) have revolutionized machine learning, a number of open questions remain to fully understand them and exploit their power. One of these questions is how to efficiently achieve proper diversity and sampling of the multi-mode data space. To address this, we introduce BasisGAN, a stochastic conditional multi-mode image generator. By exploiting the observation that a convolutional filter can be well approximated as a linear combination of a small set of basis elements, we learn a plug-and-played basis generator to stochastically generate basis elements, with just a few hundred of parameters, to fully embed stochasticity into convolutional filters. By sampling basis elements instead of filters, we dramatically reduce the cost of modeling the parameter space with no sacrifice on either image diversity or fidelity. To illustrate this proposed plug-and-play framework, we construct variants of BasisGAN based on state-of-the-art conditional image generation networks, and train the networks by simply plugging in a basis generator, without additional auxiliary components, hyperparameters, or training objectives. The experimental success is complemented with theoretical results indicating how the perturbations introduced by the proposed sampling of basis elements can propagate to the appearance of generated images.
In this paper, the authors introduce BasisGAN, a novel method for introducing stochasticity in conditional GANs, i.e., a way of conducting one-to-many mappings. This is a good addition to the literature as: (a) most of the widely used conditional GANs such as pix2pix (Isola et al., 2016) or pix2pixHD (Wang et al., 2018) are deterministic (i.e., for a specific input a single output is always generated), (b) it improves upon the current SOTA in one-to-many mappings, and (c) it is very useful application-wise. As also stated in the paper, there are a number of applications where this method is handy (e.g., converting a sketch to images varying in colors, etc.). I am leaning towards accepting this paper as this work is well-motivated, and I found the idea of using the basis generator to learn the bases for the generation of the parameters quite interesting. This is the main contribution and difference of this paper in comparison to DCFNet (Qiu et al., ICML 2018), where the bases are not learned. Nevertheless, I have the following questions/requests: - How can we tell that the generated bases are indeed bases (e.g., are they orthogonal)? - Please report the number of parameters used in your implementation in comparison to the rest of the methods. - Please provide qualitative results against the compared methods, especially against DSGAN (Qin et al., 2018).
iclr2020
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nStochastic Conditional Generative Networks with Basis Decomposition\nWhile generative adversarial networks (GANs) have revolutionized machine learning, a number of open questions remain to fully understand them and exploit their power. One of these questions is how to efficiently achieve proper diversity and sampling of the multi-mode data space. To address this, we introduce BasisGAN, a stochastic conditional multi-mode image generator. By exploiting the observation that a convolutional filter can be well approximated as a linear combination of a small set of basis elements, we learn a plug-and-played basis generator to stochastically generate basis elements, with just a few hundred of parameters, to fully embed stochasticity into convolutional filters. By sampling basis elements instead of filters, we dramatically reduce the cost of modeling the parameter space with no sacrifice on either image diversity or fidelity. To illustrate this proposed plug-and-play framework, we construct variants of BasisGAN based on state-of-the-art conditional image generation networks, and train the networks by simply plugging in a basis generator, without additional auxiliary components, hyperparameters, or training objectives. The experimental success is complemented with theoretical results indicating how the perturbations introduced by the proposed sampling of basis elements can propagate to the appearance of generated images.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nIn this paper, the authors introduce BasisGAN, a novel method for introducing stochasticity in conditional GANs, i.e., a way of conducting one-to-many mappings. This is a good addition in the literature as: (a) most of the widely-used conditional GANs such as pix2pix (Isola et al., 2016) or pix2pixHD (Wang et al., 2018) are deterministic (i.e., for a specific input a single output is always generated), (b) it improves upon the current SOTA in one-to-many mappings, (c) it is very useful application-wise. 
As also stated in the paper, there is a number of applications where this method is handy (e.g., converting a sketch to images varying in colors, etc.). I am leaning towards accepting this paper as this work is well-motivated and found the idea of using the basis generator to learn the bases for the generation of the parameters quite interesting. This is the main contribution and difference of this paper in comparison to DCFNet (Qiu et al., ICML 2018), where the bases are not learned. Nevertheless, I have the following questions/requests: - How can we tell that the generated bases are indeed bases (e.g., are they orthogonal?) - Please report the number of parameters used in your implementation in comparison to the rest of the methods. - Please provide qualitative results against the compared methods and especially against DSGAN (Qin et al., 2018).", "role": "assistant" } ]
[ "In this paper, the authors introduce BasisGAN, a novel method for introducing stochasticity in conditional GANs, i.e., a way of conducting one-to-many mappings.", "This is a good addition in the literature as: (a) most of the widely-used conditional GANs such as pix2pix (Isola et al., 2016) or pix2pixHD (Wang et al., 2018) are deterministic (i.e., for a specific input a single output is always generated), (b) it improves upon the current SOTA in one-to-many mappings, (c) it is very useful application-wise.", "As also stated in the paper, there is a number of applications where this method is handy (e.g., converting a sketch to images varying in colors, etc.).", "I am leaning towards accepting this paper as this work is well-motivated and found the idea of using the basis generator to learn the bases for the generation of the parameters quite interesting.", "This is the main contribution and difference of this paper in comparison to DCFNet (Qiu et al., ICML 2018), where the bases are not learned.", "Nevertheless, I have the following questions/requests: - How can we tell that the generated bases are indeed bases (e.g., are they orthogonal?)", "- Please report the number of parameters used in your implementation in comparison to the rest of the methods.", "- Please provide qualitative results against the compared methods and especially against DSGAN (Qin et al., 2018)." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 8, "praise": 2, "presentation_and_reporting": 2, "results_and_discussion": 1, "suggestion_and_solution": 2, "total": 8 }
0.125
0
0.375
1
0.25
0.25
0.125
0.25
8
1
0
1
1
1
1
1
1
2.375
1.980439
0.394561
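The block of numbers above appears to be derived from the criteria_count field: each per-criterion value is that criterion's sentence count divided by the total, the has_* flags mark non-zero counts, reward_value is the sum of the per-criterion fractions, and reward_value_length_adjusted is that sum minus length_penalty (how the penalty itself is computed is not shown in this dump). A small sketch, assuming exactly this arithmetic and using the values listed for this record:

```python
criteria = {
    "criticism": 1, "example": 0, "importance_and_relevance": 3,
    "materials_and_methods": 8, "praise": 2, "presentation_and_reporting": 2,
    "results_and_discussion": 1, "suggestion_and_solution": 2,
}
total = 8  # number of review sentences for this record

fractions = {k: v / total for k, v in criteria.items()}        # e.g. criticism -> 0.125
has_flags = {f"has_{k}": int(v > 0) for k, v in criteria.items()}

reward_value = sum(fractions.values())                         # 2.375 for this record
length_penalty = 0.394561                                      # taken from the record; derivation unknown
reward_value_length_adjusted = reward_value - length_penalty   # ~1.980439

print(fractions["criticism"], fractions["materials_and_methods"])  # 0.125 1.0
```

These computed values match the numbers listed above, which is why the arithmetic is presented here as a plausible reading of the schema rather than a documented specification.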