title (string, length 2-153) | abstract (string, length 0-2.15k) | review (string, length 97-8.43k) | source (string, 4 classes) | review_src (string, 1 class) | thinking_trace (string, 1 class) | conversations (list, length 2) | review_sentences (sequence, length 1-61) | criteria_count (dict) | criticism (float64, 0-1) | example (float64, 0-1) | importance_and_relevance (float64, 0-1) | materials_and_methods (float64, 0-1) | praise (float64, 0-1) | presentation_and_reporting (float64, 0-1) | results_and_discussion (float64, 0-1) | suggestion_and_solution (float64, 0-1) | total (int64, 1-61) | has_criticism (int64, 0-1) | has_example (int64, 0-1) | has_importance_and_relevance (int64, 0-1) | has_materials_and_methods (int64, 0-1) | has_praise (int64, 0-1) | has_presentation_and_reporting (int64, 0-1) | has_results_and_discussion (int64, 0-1) | has_suggestion_and_solution (int64, 0-1) | reward_value (float64, 2.13-6) | reward_value_length_adjusted (float64, -34.13 to 3.73) | length_penalty (float64, 0-36.4) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
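The snippet below is a minimal sketch of how rows with this schema could be loaded and sanity-checked with the Hugging Face `datasets` library; the repository id is a placeholder rather than the real Hub id. The consistency checks simply restate relationships that hold in the rows shown here: each per-category float equals that category's count in `criteria_count` divided by `total`, `reward_value` is the sum of the eight per-category fractions, and `reward_value_length_adjusted` equals `reward_value` minus `length_penalty`.

```python
from datasets import load_dataset  # pip install datasets

# Placeholder repo id -- substitute the actual Hub id of this dataset.
ds = load_dataset("user/review-criteria-rewards", split="train")

row = ds[0]
counts = row["criteria_count"]  # e.g. {"criticism": 3, ..., "total": 6}
categories = [
    "criticism", "example", "importance_and_relevance", "materials_and_methods",
    "praise", "presentation_and_reporting", "results_and_discussion",
    "suggestion_and_solution",
]

# Per-category floats are sentence-level fractions: count / total.
for cat in categories:
    assert abs(row[cat] - counts[cat] / counts["total"]) < 1e-4

# reward_value is the sum of the eight fractions; the length-adjusted value
# subtracts the length penalty.
assert abs(row["reward_value"] - sum(row[c] for c in categories)) < 1e-4
assert abs(row["reward_value_length_adjusted"]
           - (row["reward_value"] - row["length_penalty"])) < 1e-4
```

In the rows below, the `has_*` integer columns appear to be binary indicators of whether the corresponding category count is nonzero.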
Entropy-SGD optimizes the prior of a PAC-Bayes bound: Data-dependent PAC-Bayes priors via differential privacy | We show that Entropy-SGD (Chaudhari et al., 2017), when viewed as a learning algorithm, optimizes a PAC-Bayes bound on the risk of a Gibbs (posterior) classifier, i.e., a randomized classifier obtained by a risk-sensitive perturbation of the weights of a learned classifier. Entropy-SGD works by optimizing the bound’s prior, violating the hypothesis of the PAC-Bayes theorem that the prior is chosen independently of the data. Indeed, available implementations of Entropy-SGD rapidly obtain zero training error on random labels and the same holds of the Gibbs posterior. In order to obtain a valid generalization bound, we show that an ?-differentially private prior yields a valid PAC-Bayes bound, a straightforward consequence of results connecting generalization with differential privacy. Using stochastic gradient Langevin dynamics (SGLD) to approximate the well-known exponential release mechanism, we observe that generalization error on MNIST (measured on held out data) falls within the (empirically nonvacuous) bounds computed under the assumption that SGLD produces perfect samples. In particular, Entropy-SGLD can be configured to yield relatively tight generalization bounds and still fit real labels, although these same settings do not obtain state-of-the-art performance. | 1) I would like to ask for the clarification regarding the generalization guarantees. The original Entropy-SGD paper shows improved generalization over SGD using uniform stability, however the analysis of the authors rely on an unrealistic assumption regarding the eigenvalues of the Hessian (they are assumed to be away from zero, which is not true at least at local minima of interest). What is the enabling technique in this submission that avoids taking this assumption? (to clarify: the analysis is all-together different in both papers, however this aspect of the analysis is not fully clear to me). 2) It is unclear to me what are the unrealistic assumptions made in the paper. Please, list them all in one place in the paper and discuss in details. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEntropy-SGD optimizes the prior of a PAC-Bayes bound: Data-dependent PAC-Bayes priors via differential privacy\nWe show that Entropy-SGD (Chaudhari et al., 2017), when viewed as a learning algorithm, optimizes a PAC-Bayes bound on the risk of a Gibbs (posterior) classifier, i.e., a randomized classifier obtained by a risk-sensitive perturbation of the weights of a learned classifier. Entropy-SGD works by optimizing the bound’s prior, violating the hypothesis of the PAC-Bayes theorem that the prior is chosen independently of the data. Indeed, available implementations of Entropy-SGD rapidly obtain zero training error on random labels and the same holds of the Gibbs posterior. In order to obtain a valid generalization bound, we show that an ?-differentially private prior yields a valid PAC-Bayes bound, a straightforward consequence of results connecting generalization with differential privacy. Using stochastic gradient Langevin dynamics (SGLD) to approximate the well-known exponential release mechanism, we observe that generalization error on MNIST (measured on held out data) falls within the (empirically nonvacuous) bounds computed under the assumption that SGLD produces perfect samples. In particular, Entropy-SGLD can be configured to yield relatively tight generalization bounds and still fit real labels, although these same settings do not obtain state-of-the-art performance.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n 1) I would like to ask for the clarification regarding the generalization guarantees. The original Entropy-SGD paper shows improved generalization over SGD using uniform stability, however the analysis of the authors rely on an unrealistic assumption regarding the eigenvalues of the Hessian (they are assumed to be away from zero, which is not true at least at local minima of interest). What is the enabling technique in this submission that avoids taking this assumption? (to clarify: the analysis is all-together different in both papers, however this aspect of the analysis is not fully clear to me). 2) It is unclear to me what are the unrealistic assumptions made in the paper. Please, list them all in one place in the paper and discuss in details.",
"role": "assistant"
}
]
| [
" 1) I would like to ask for the clarification regarding the generalization guarantees.",
"The original Entropy-SGD paper shows improved generalization over SGD using uniform stability, however the analysis of the authors rely on an unrealistic assumption regarding the eigenvalues of the Hessian (they are assumed to be away from zero, which is not true at least at local minima of interest).",
"What is the enabling technique in this submission that avoids taking this assumption?",
"(to clarify: the analysis is all-together different in both papers, however this aspect of the analysis is not fully clear to me).",
"2) It is unclear to me what are the unrealistic assumptions made in the paper.",
"Please, list them all in one place in the paper and discuss in details."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 0,
"materials_and_methods": 4,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 1,
"suggestion_and_solution": 2,
"total": 6
} | 0.5 | 0 | 0 | 0.666667 | 0.166667 | 0.333333 | 0.166667 | 0.333333 | 6 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 2.166667 | 1.393328 | 0.773339 |
Non-Autoregressive Neural Machine Translation | Existing approaches to neural machine translation condition each output word on previously generated outputs. We introduce a model that avoids this autoregressive property and produces its outputs in parallel, allowing an order of magnitude lower latency during inference. Through knowledge distillation, the use of input token fertilities as a latent variable, and policy gradient fine-tuning, we achieve this at a cost of as little as 2.0 BLEU points relative to the autoregressive Transformer network used as a teacher. We demonstrate substantial cumulative improvements associated with each of the three aspects of our training strategy, and validate our approach on IWSLT 2016 English–German and two WMT language pairs. By sampling fertilities in parallel at inference time, our non-autoregressive model achieves near-state-of-the-art performance of 29.8 BLEU on WMT 2016 English–Romanian. | This paper describes an approach to decode non-autoregressively for neural machine translation (and other tasks that can be solved via seq2seq models). The advantage is the possibility of more parallel decoding which can result in a significant speed-up (up to a factor of 16 in the experiments described). The disadvantage is that it is more complicated than a standard beam search as auto-regressive teacher models are needed for training and the results do not reach (yet) the same BLEU scores as standard beam search. Overall, this is an interesting paper. It would have been good to see a speed-accuracy curve which plots decoding speed for different sized models versus the achieved BLUE score on one of the standard benchmarks (like WMT14 en-fr or en-de) to understand better the pros and cons of the proposed approach and to be able to compare models at the same speed or the same BLEU scores. Table 1 gives a hint of that but it is not clear whether much smaller models with standard beam search are possibly as good and fast as NAT -- losing 2-5 BLEU points on WMT14 is significant. While the Ro->En results are good, this particular language pair has not been used much by others; it would have been more interesting to stay with a single well-used language pair and benchmark and analyze why WMT14 en->de and de->en are not improving more. Finally it would have been good to address total computation in the comparison as well -- it seems while total decoding time is smaller total computation for NAT + NPD is actually higher depending on the choice of s. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. 
Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNon-Autoregressive Neural Machine Translation\nExisting approaches to neural machine translation condition each output word on previously generated outputs. We introduce a model that avoids this autoregressive property and produces its outputs in parallel, allowing an order of magnitude lower latency during inference. Through knowledge distillation, the use of input token fertilities as a latent variable, and policy gradient fine-tuning, we achieve this at a cost of as little as 2.0 BLEU points relative to the autoregressive Transformer network used as a teacher. We demonstrate substantial cumulative improvements associated with each of the three aspects of our training strategy, and validate our approach on IWSLT 2016 English–German and two WMT language pairs. By sampling fertilities in parallel at inference time, our non-autoregressive model achieves near-state-of-the-art performance of 29.8 BLEU on WMT 2016 English–Romanian.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper describes an approach to decode non-autoregressively for neural machine translation (and other tasks that can be solved via seq2seq models). The advantage is the possibility of more parallel decoding which can result in a significant speed-up (up to a factor of 16 in the experiments described). The disadvantage is that it is more complicated than a standard beam search as auto-regressive teacher models are needed for training and the results do not reach (yet) the same BLEU scores as standard beam search. Overall, this is an interesting paper. It would have been good to see a speed-accuracy curve which plots decoding speed for different sized models versus the achieved BLUE score on one of the standard benchmarks (like WMT14 en-fr or en-de) to understand better the pros and cons of the proposed approach and to be able to compare models at the same speed or the same BLEU scores. Table 1 gives a hint of that but it is not clear whether much smaller models with standard beam search are possibly as good and fast as NAT -- losing 2-5 BLEU points on WMT14 is significant. While the Ro->En results are good, this particular language pair has not been used much by others; it would have been more interesting to stay with a single well-used language pair and benchmark and analyze why WMT14 en->de and de->en are not improving more. Finally it would have been good to address total computation in the comparison as well -- it seems while total decoding time is smaller total computation for NAT + NPD is actually higher depending on the choice of s.",
"role": "assistant"
}
]
| [
" This paper describes an approach to decode non-autoregressively for neural machine translation (and other tasks that can be solved via seq2seq models).",
"The advantage is the possibility of more parallel decoding which can result in a significant speed-up (up to a factor of 16 in the experiments described).",
"The disadvantage is that it is more complicated than a standard beam search as auto-regressive teacher models are needed for training and the results do not reach (yet) the same BLEU scores as standard beam search.",
"Overall, this is an interesting paper.",
"It would have been good to see a speed-accuracy curve which plots decoding speed for different sized models versus the achieved BLUE score on one of the standard benchmarks (like WMT14 en-fr or en-de) to understand better the pros and cons of the proposed approach and to be able to compare models at the same speed or the same BLEU scores.",
"Table 1 gives a hint of that but it is not clear whether much smaller models with standard beam search are possibly as good and fast as NAT -- losing 2-5 BLEU points on WMT14 is significant.",
"While the Ro->En results are good, this particular language pair has not been used much by others; it would have been more interesting to stay with a single well-used language pair and benchmark and analyze why WMT14 en->de and de->en are not improving more.",
"Finally it would have been good to address total computation in the comparison as well -- it seems while total decoding time is smaller total computation for NAT + NPD is actually higher depending on the choice of s."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 7,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 3,
"suggestion_and_solution": 3,
"total": 8
} | 0.375 | 0 | 0.125 | 0.875 | 0.25 | 0.125 | 0.375 | 0.375 | 8 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.5 | 2.105439 | 0.394561 |
Memory Architectures in Recurrent Neural Network Language Models | We compare and analyze sequential, random access, and stack memory architectures for recurrent neural network language models. Our experiments on the Penn Treebank and Wikitext-2 datasets show that stack-based memory architectures consistently achieve the best performance in terms of held out perplexity. We also propose a generalization to existing continuous stack models (Joulin & Mikolov,2015; Grefenstette et al., 2015) to allow a variable number of pop operations more naturally that further improves performance. We further evaluate these language models in terms of their ability to capture non-local syntactic dependencies on a subject-verb agreement dataset (Linzen et al., 2016) and establish new state of the art results using memory augmented language models. Our results demonstrate the value of stack-structured memory for explaining the distribution of words in natural language, in line with linguistic theories claiming a context-free backbone for natural language. | The main contribution of this paper are: (a) a proposed extension to continuous stack model to allow multiple pop operation, (b) on a language model task, they demonstrate that their model gives better perplexity than comparable LSTM and attention model, and (c) on a syntactic task (non-local subject-verb agreement), again, they demonstrate better performance than comparable LSTM and attention model. Additionally, the paper provides a nice introduction to the topic and casts the current models into three categories -- the sequential memory access, the random memory access and the stack memory access models. Their analysis in section (3.4) using the Venn diagram and illustrative figures in (3), (4) and (5) provide useful insight into the performance of the model. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMemory Architectures in Recurrent Neural Network Language Models\nWe compare and analyze sequential, random access, and stack memory architectures for recurrent neural network language models. Our experiments on the Penn Treebank and Wikitext-2 datasets show that stack-based memory architectures consistently achieve the best performance in terms of held out perplexity. We also propose a generalization to existing continuous stack models (Joulin & Mikolov,2015; Grefenstette et al., 2015) to allow a variable number of pop operations more naturally that further improves performance. We further evaluate these language models in terms of their ability to capture non-local syntactic dependencies on a subject-verb agreement dataset (Linzen et al., 2016) and establish new state of the art results using memory augmented language models. Our results demonstrate the value of stack-structured memory for explaining the distribution of words in natural language, in line with linguistic theories claiming a context-free backbone for natural language.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The main contribution of this paper are: (a) a proposed extension to continuous stack model to allow multiple pop operation, (b) on a language model task, they demonstrate that their model gives better perplexity than comparable LSTM and attention model, and (c) on a syntactic task (non-local subject-verb agreement), again, they demonstrate better performance than comparable LSTM and attention model. Additionally, the paper provides a nice introduction to the topic and casts the current models into three categories -- the sequential memory access, the random memory access and the stack memory access models. Their analysis in section (3.4) using the Venn diagram and illustrative figures in (3), (4) and (5) provide useful insight into the performance of the model.",
"role": "assistant"
}
]
| [
" The main contribution of this paper are: (a) a proposed extension to continuous stack model to allow multiple pop operation, (b) on a language model task, they demonstrate that their model gives better perplexity than comparable LSTM and attention model, and (c) on a syntactic task (non-local subject-verb agreement), again, they demonstrate better performance than comparable LSTM and attention model.",
"Additionally, the paper provides a nice introduction to the topic and casts the current models into three categories -- the sequential memory access, the random memory access and the stack memory access models.",
"Their analysis in section (3.4) using the Venn diagram and illustrative figures in (3), (4) and (5) provide useful insight into the performance of the model."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 3,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 3
} | 0 | 0 | 0.666667 | 1 | 0.666667 | 0.333333 | 0.333333 | 0.333333 | 3 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 3.333333 | 1.75509 | 1.578243 |
TD Learning with Constrained Gradients | Temporal Difference Learning with function approximation is known to be unstable. Previous work like citet{sutton2009fast} and citet{sutton2009convergent} has presented alternative objectives that are stable to minimize. However, in practice, TD-learning with neural networks requires various tricks like using a target network that updates slowly citep{mnih2015human}. In this work we propose a constraint on the TD update that minimizes change to the target values. This constraint can be applied to the gradients of any TD objective, and can be easily applied to nonlinear function approximation. We validate this update by applying our technique to deep Q-learning, and training without a target network. We also show that adding this constraint on Baird*s counterexample keeps Q-learning from diverging. | This is an interesting idea, and written clearly. The experiments with Baird*s and CartPole were both convincing as preliminary evidence that this could be effective. However, it is very hard to generalize from these toy problems. First, we really need a more thorough analysis of what this does to the learning dynamics itself. Baring theoretical results, you could analyze the changes to the value function at the current and next state with and without the constraint to illustrate the effects more directly. I think ideally, I would want to see this on Atari or some of the continuous control domains often used. If this allows the removing of the target network for instance, in those more difficult tasks, then this would be a huge deal. Additionally, I do not think the current gridworld task adds anything to the experiments, I would rather actually see this on a more interesting linear function approximation on some other simple task like Mountain Car than a neural network on gridworld. The reason this might be interesting is that when the parameter space is lower dimensional (not an issue for neural nets, but could be problematic for linear FA) the constraint might be too much leading to significantly poorer performance. I suspect this is the actual cause for it not converging to zero for Baird*s, although please correct me if I*m wrong on that. As is, I cannot recommend acceptance given the current experiments and lack of theoretical results. But I do think this is a very interesting direction and hope to see more thorough experiments or analysis to support it. Pros: Simple, interesting idea Works well on toy problems, and able to prevent divergence in Baird*s counter-example Cons: Lacking in theoretical analysis or significant experimental results | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTD Learning with Constrained Gradients\nTemporal Difference Learning with function approximation is known to be unstable. Previous work like citet{sutton2009fast} and citet{sutton2009convergent} has presented alternative objectives that are stable to minimize. However, in practice, TD-learning with neural networks requires various tricks like using a target network that updates slowly citep{mnih2015human}. In this work we propose a constraint on the TD update that minimizes change to the target values. This constraint can be applied to the gradients of any TD objective, and can be easily applied to nonlinear function approximation. We validate this update by applying our technique to deep Q-learning, and training without a target network. We also show that adding this constraint on Baird*s counterexample keeps Q-learning from diverging.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This is an interesting idea, and written clearly. The experiments with Baird*s and CartPole were both convincing as preliminary evidence that this could be effective. However, it is very hard to generalize from these toy problems. First, we really need a more thorough analysis of what this does to the learning dynamics itself. Baring theoretical results, you could analyze the changes to the value function at the current and next state with and without the constraint to illustrate the effects more directly. I think ideally, I would want to see this on Atari or some of the continuous control domains often used. If this allows the removing of the target network for instance, in those more difficult tasks, then this would be a huge deal. Additionally, I do not think the current gridworld task adds anything to the experiments, I would rather actually see this on a more interesting linear function approximation on some other simple task like Mountain Car than a neural network on gridworld. The reason this might be interesting is that when the parameter space is lower dimensional (not an issue for neural nets, but could be problematic for linear FA) the constraint might be too much leading to significantly poorer performance. I suspect this is the actual cause for it not converging to zero for Baird*s, although please correct me if I*m wrong on that. As is, I cannot recommend acceptance given the current experiments and lack of theoretical results. But I do think this is a very interesting direction and hope to see more thorough experiments or analysis to support it. Pros: Simple, interesting idea Works well on toy problems, and able to prevent divergence in Baird*s counter-example Cons: Lacking in theoretical analysis or significant experimental results",
"role": "assistant"
}
]
| [
" This is an interesting idea, and written clearly.",
"The experiments with Baird*s and CartPole were both convincing as preliminary evidence that this could be effective.",
"However, it is very hard to generalize from these toy problems.",
"First, we really need a more thorough analysis of what this does to the learning dynamics itself.",
"Baring theoretical results, you could analyze the changes to the value function at the current and next state with and without the constraint to illustrate the effects more directly.",
"I think ideally, I would want to see this on Atari or some of the continuous control domains often used.",
"If this allows the removing of the target network for instance, in those more difficult tasks, then this would be a huge deal.",
"Additionally, I do not think the current gridworld task adds anything to the experiments, I would rather actually see this on a more interesting linear function approximation on some other simple task like Mountain Car than a neural network on gridworld.",
"The reason this might be interesting is that when the parameter space is lower dimensional (not an issue for neural nets, but could be problematic for linear FA) the constraint might be too much leading to significantly poorer performance.",
"I suspect this is the actual cause for it not converging to zero for Baird*s, although please correct me if I*m wrong on that.",
"As is, I cannot recommend acceptance given the current experiments and lack of theoretical results.",
"But I do think this is a very interesting direction and hope to see more thorough experiments or analysis to support it.",
"Pros: Simple, interesting idea Works well on toy problems, and able to prevent divergence in Baird*s counter-example Cons: Lacking in theoretical analysis or significant experimental results"
] | {
"criticism": 4,
"example": 0,
"importance_and_relevance": 4,
"materials_and_methods": 10,
"praise": 4,
"presentation_and_reporting": 2,
"results_and_discussion": 4,
"suggestion_and_solution": 5,
"total": 13
} | 0.307692 | 0 | 0.307692 | 0.769231 | 0.307692 | 0.153846 | 0.307692 | 0.384615 | 13 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.538462 | 2.538462 | 0 |
TD Learning with Constrained Gradients | Temporal Difference Learning with function approximation is known to be unstable. Previous work like citet{sutton2009fast} and citet{sutton2009convergent} has presented alternative objectives that are stable to minimize. However, in practice, TD-learning with neural networks requires various tricks like using a target network that updates slowly citep{mnih2015human}. In this work we propose a constraint on the TD update that minimizes change to the target values. This constraint can be applied to the gradients of any TD objective, and can be easily applied to nonlinear function approximation. We validate this update by applying our technique to deep Q-learning, and training without a target network. We also show that adding this constraint on Baird*s counterexample keeps Q-learning from diverging. | Summary: This paper tackles the issue of combining TD learning methods with function approximation. The proposed algorithm constrains the gradient update to deal with the fact that canonical TD with function approximation ignores the impact of changing the weights on the target of the TD learning rule. Results with linear and non-linear function approximation highlight the attributes of the method. Quality: The quality of the writing, notation, motivation, and results analysis is low. I will give a few examples to highlight the point. The paper motivates that TD is divergent with function approximation, and then goes on to discuss MSPBE methods that have strong convergence results, without addressing why a new approach is needed. There are many missing references: ETD, HTD, mirror-prox methods, retrace, ABQ. Q-sigma. This is a very active area of research and the paper needs to justify their approach. The paper has straightforward technical errors and naive statements: e.g. the equation for the loss of TD takes the norm of a scalar. The paper claims that it is not well-known that TD with function approximation ignores part of the gradient of the MSVE. There are many others. The experiments have serious issues. Exp1 seems to indicate that the new method does not converge to the correct solution. The grid world experiment is not conclusive as important details like the number of episodes and how parameters were chosen was not discussed. Again exp3 provides little information about the experimental setup. Clarity: The clarity of the text is fine, though errors make things difficult sometimes. For example The Bhatnagar 2009 reference should be Maei. Originality: As mentioned above this is a very active research area, and the paper makes little effort to explain why the multitude of existing algorithms are not suitable. Significance: Because of all the things outlined above, the significance is below the bar for this round. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? 
Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTD Learning with Constrained Gradients\nTemporal Difference Learning with function approximation is known to be unstable. Previous work like citet{sutton2009fast} and citet{sutton2009convergent} has presented alternative objectives that are stable to minimize. However, in practice, TD-learning with neural networks requires various tricks like using a target network that updates slowly citep{mnih2015human}. In this work we propose a constraint on the TD update that minimizes change to the target values. This constraint can be applied to the gradients of any TD objective, and can be easily applied to nonlinear function approximation. We validate this update by applying our technique to deep Q-learning, and training without a target network. We also show that adding this constraint on Baird*s counterexample keeps Q-learning from diverging.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Summary: This paper tackles the issue of combining TD learning methods with function approximation. The proposed algorithm constrains the gradient update to deal with the fact that canonical TD with function approximation ignores the impact of changing the weights on the target of the TD learning rule. Results with linear and non-linear function approximation highlight the attributes of the method. Quality: The quality of the writing, notation, motivation, and results analysis is low. I will give a few examples to highlight the point. The paper motivates that TD is divergent with function approximation, and then goes on to discuss MSPBE methods that have strong convergence results, without addressing why a new approach is needed. There are many missing references: ETD, HTD, mirror-prox methods, retrace, ABQ. Q-sigma. This is a very active area of research and the paper needs to justify their approach. The paper has straightforward technical errors and naive statements: e.g. the equation for the loss of TD takes the norm of a scalar. The paper claims that it is not well-known that TD with function approximation ignores part of the gradient of the MSVE. There are many others. The experiments have serious issues. Exp1 seems to indicate that the new method does not converge to the correct solution. The grid world experiment is not conclusive as important details like the number of episodes and how parameters were chosen was not discussed. Again exp3 provides little information about the experimental setup. Clarity: The clarity of the text is fine, though errors make things difficult sometimes. For example The Bhatnagar 2009 reference should be Maei. Originality: As mentioned above this is a very active research area, and the paper makes little effort to explain why the multitude of existing algorithms are not suitable. Significance: Because of all the things outlined above, the significance is below the bar for this round.",
"role": "assistant"
}
]
| [
" Summary: This paper tackles the issue of combining TD learning methods with function approximation.",
"The proposed algorithm constrains the gradient update to deal with the fact that canonical TD with function approximation ignores the impact of changing the weights on the target of the TD learning rule.",
"Results with linear and non-linear function approximation highlight the attributes of the method.",
"Quality: The quality of the writing, notation, motivation, and results analysis is low.",
"I will give a few examples to highlight the point.",
"The paper motivates that TD is divergent with function approximation, and then goes on to discuss MSPBE methods that have strong convergence results, without addressing why a new approach is needed.",
"There are many missing references: ETD, HTD, mirror-prox methods, retrace, ABQ.",
"Q-sigma.",
"This is a very active area of research and the paper needs to justify their approach.",
"The paper has straightforward technical errors and naive statements: e.g.",
"the equation for the loss of TD takes the norm of a scalar.",
"The paper claims that it is not well-known that TD with function approximation ignores part of the gradient of the MSVE.",
"There are many others.",
"The experiments have serious issues.",
"Exp1 seems to indicate that the new method does not converge to the correct solution.",
"The grid world experiment is not conclusive as important details like the number of episodes and how parameters were chosen was not discussed.",
"Again exp3 provides little information about the experimental setup.",
"Clarity: The clarity of the text is fine, though errors make things difficult sometimes.",
"For example The Bhatnagar 2009 reference should be Maei.",
"Originality: As mentioned above this is a very active research area, and the paper makes little effort to explain why the multitude of existing algorithms are not suitable.",
"Significance: Because of all the things outlined above, the significance is below the bar for this round."
] | {
"criticism": 11,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 12,
"praise": 3,
"presentation_and_reporting": 7,
"results_and_discussion": 6,
"suggestion_and_solution": 3,
"total": 21
} | 0.52381 | 0 | 0.142857 | 0.571429 | 0.142857 | 0.333333 | 0.285714 | 0.142857 | 21 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.142857 | 1.132782 | 1.010076 |
Ensemble Robustness and Generalization of Stochastic Deep Learning Algorithms | The question why deep learning algorithms generalize so well has attracted increasing research interest. However, most of the well-established approaches, such as hypothesis capacity, stability or sparseness, have not provided complete explanations (Zhang et al., 2016; Kawaguchi et al., 2017). In this work, we focus on the robustness approach (Xu & Mannor, 2012), i.e., if the error of a hypothesis will not change much due to perturbations of its training examples, then it will also generalize well. As most deep learning algorithms are stochastic (e.g., Stochastic Gradient Descent, Dropout, and Bayes-by-backprop), we revisit the robustness arguments of Xu & Mannor, and introduce a new approach – ensemble robustness – that concerns the robustness of a population of hypotheses. Through the lens of ensemble robustness, we reveal that a stochastic learning algorithm can generalize well as long as its sensitiveness to adversarial perturbations is bounded in average over training examples. Moreover, an algorithm may be sensitive to some adversarial examples (Goodfellow et al., 2015) but still generalize well. To support our claims, we provide extensive simulations for different deep learning algorithms and different network architectures exhibiting a strong correlation between ensemble robustness and the ability to generalize. | The paper studied the generalization ability of learning algorithms from the robustness viewpoint in a deep learning context. To achieve this goal, the authors extended the notion of the (K, .epsilon)- robustness proposed in Xu and Mannor, 2012 and introduced the ensemble robustness. Pros: 1, The problem studied in this paper is interesting. Both robustness and generalization are important properties of learning algorithms. It is good to see that the authors made some efforts towards this direction. 2, The paper is well shaped and is easy to follow. The analysis conducted in this paper is sound. Numerical experiments are also convincing. 3, The extended notion "ensemble robustness" is shown to be very useful in studying the generalization properties of several deep learning algorithms. Cons: 1, The terminology "ensemble" seems odd to me, and seems not to be informative enough. 2, Given that the stability is considered as a weak notion of robustness, and the fact that the stability of a learning algorithm and its relations to the generalization property have been well studied, in my view, it is quite necessary to mention the relation of the present study with stability arguments. 3, After Definition 3, the author stated that ensemble robustness is a weak notion of robustness proposed in Xu and Manner, 2012. It is better to present an example here immediately to illustrate. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? 
Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEnsemble Robustness and Generalization of Stochastic Deep Learning Algorithms\nThe question why deep learning algorithms generalize so well has attracted increasing research interest. However, most of the well-established approaches, such as hypothesis capacity, stability or sparseness, have not provided complete explanations (Zhang et al., 2016; Kawaguchi et al., 2017). In this work, we focus on the robustness approach (Xu & Mannor, 2012), i.e., if the error of a hypothesis will not change much due to perturbations of its training examples, then it will also generalize well. As most deep learning algorithms are stochastic (e.g., Stochastic Gradient Descent, Dropout, and Bayes-by-backprop), we revisit the robustness arguments of Xu & Mannor, and introduce a new approach – ensemble robustness – that concerns the robustness of a population of hypotheses. Through the lens of ensemble robustness, we reveal that a stochastic learning algorithm can generalize well as long as its sensitiveness to adversarial perturbations is bounded in average over training examples. Moreover, an algorithm may be sensitive to some adversarial examples (Goodfellow et al., 2015) but still generalize well. To support our claims, we provide extensive simulations for different deep learning algorithms and different network architectures exhibiting a strong correlation between ensemble robustness and the ability to generalize.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper studied the generalization ability of learning algorithms from the robustness viewpoint in a deep learning context. To achieve this goal, the authors extended the notion of the (K, .epsilon)- robustness proposed in Xu and Mannor, 2012 and introduced the ensemble robustness. Pros: 1, The problem studied in this paper is interesting. Both robustness and generalization are important properties of learning algorithms. It is good to see that the authors made some efforts towards this direction. 2, The paper is well shaped and is easy to follow. The analysis conducted in this paper is sound. Numerical experiments are also convincing. 3, The extended notion \"ensemble robustness\" is shown to be very useful in studying the generalization properties of several deep learning algorithms. Cons: 1, The terminology \"ensemble\" seems odd to me, and seems not to be informative enough. 2, Given that the stability is considered as a weak notion of robustness, and the fact that the stability of a learning algorithm and its relations to the generalization property have been well studied, in my view, it is quite necessary to mention the relation of the present study with stability arguments. 3, After Definition 3, the author stated that ensemble robustness is a weak notion of robustness proposed in Xu and Manner, 2012. It is better to present an example here immediately to illustrate.",
"role": "assistant"
}
]
| [
" The paper studied the generalization ability of learning algorithms from the robustness viewpoint in a deep learning context.",
"To achieve this goal, the authors extended the notion of the (K, .epsilon)- robustness proposed in Xu and Mannor, 2012 and introduced the ensemble robustness.",
"Pros: 1, The problem studied in this paper is interesting.",
"Both robustness and generalization are important properties of learning algorithms.",
"It is good to see that the authors made some efforts towards this direction.",
"2, The paper is well shaped and is easy to follow.",
"The analysis conducted in this paper is sound.",
"Numerical experiments are also convincing.",
"3, The extended notion \"ensemble robustness\" is shown to be very useful in studying the generalization properties of several deep learning algorithms.",
"Cons: 1, The terminology \"ensemble\" seems odd to me, and seems not to be informative enough.",
"2, Given that the stability is considered as a weak notion of robustness, and the fact that the stability of a learning algorithm and its relations to the generalization property have been well studied, in my view, it is quite necessary to mention the relation of the present study with stability arguments.",
"3, After Definition 3, the author stated that ensemble robustness is a weak notion of robustness proposed in Xu and Manner, 2012.",
"It is better to present an example here immediately to illustrate."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 6,
"materials_and_methods": 7,
"praise": 6,
"presentation_and_reporting": 4,
"results_and_discussion": 1,
"suggestion_and_solution": 2,
"total": 13
} | 0.153846 | 0 | 0.461538 | 0.538462 | 0.461538 | 0.307692 | 0.076923 | 0.153846 | 13 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.153846 | 2.153846 | 0 |
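A minimal sketch of how the trailing numeric columns of a record like the one above can be reproduced from its category counts. The derivation is not documented anywhere in this dump, so the mapping (count/total proportions, 0/1 presence flags, a reward equal to the summed proportions minus a length penalty) and every variable name below are assumptions inferred from the visible numbers of this record.

```python
# Sketch only: field names and the derivation are guesses reconstructed from the
# numbers printed after the criteria_count object above.
counts = {"criticism": 2, "example": 0, "importance_and_relevance": 6,
          "materials_and_methods": 7, "praise": 6, "presentation_and_reporting": 4,
          "results_and_discussion": 1, "suggestion_and_solution": 2}
total = 13                                                 # number of review sentences
proportions = {k: v / total for k, v in counts.items()}    # e.g. 2/13 = 0.153846
presence_flags = {k: int(v > 0) for k, v in counts.items()}  # 1,0,1,1,1,1,1,1
reward = sum(proportions.values())                         # 28/13 = 2.153846
length_penalty = 0.0                                       # 0 for this short review
reward_adjusted = reward - length_penalty                  # 2.153846
```

Applying the same mapping to the next record (counts summing to 42 over 17 sentences) also reproduces its trailing 2.470588, and subtracting its 0.252519 penalty gives its 2.218069, which is what suggests this pattern.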
Deep Epitome for Unravelling Generalized Hamming Network: A Fuzzy Logic Interpretation of Deep Learning | This paper gives a rigorous analysis of trained Generalized Hamming Networks (GHN) proposed by Fan (2017) and discloses an interesting finding about GHNs, i.e. stacked convolution layers in a GHN is equivalent to a single yet wide convolution layer. The revealed equivalence, on the theoretical side, can be regarded as a constructive manifestation of the universal approximation theorem Cybenko (1989); Hornik (1991). In practice, it has profound and multi-fold implications. For network visualization, the constructed deep epitomes at each layer provide a visualization of network internal representation that does not rely on the input data. Moreover, deep epitomes allows the direct extraction of features in just one step, without resorting to regularized optimizations used in existing visualization tools. | pros : - interesting topic cons : - experiments are not convincing - writing & clarity could be improved sometimes The paper is a bit hard to read sometimes, because it does not refer to usual notions. First of all, the Equation (1) of the subsection 3.1 could be more precise. Use and not and define the "dot" operation. I have assumed the paper was using real valued networks and the euclidean scalar product. I do not see the interest to use the notation "g" if the circplus is used. The definitions 1 and 2 of 3.2 do not help the reader so maybe they could be removed.. Secondly, it is shown that the operations performed by the network are associative, and equivalent to a single "epitome convolution". I dot not agree with (on page 7) the "manifestation of the universal approximation" theorem. The formulations are rigorously different (2 linear+point wise non linearity), or I do not see the link and thus a comment would be appreciated. Thirdly, there is the "data independent" and "data dependent" visualisation, which correspond, if I*m correct, to a visualisation of the filters and the representations respectively. I do not understand what are the histograms of the normalised deep epitoms, or to which quantity they refer to. Similarly, I do not understand what is displayed on the Figures in the appendix: if I am correct, the equivalent {g_n} are visualised, but this is not specified and there are a lot of redundant filters, which could be commented. The figures of the appendix should be put in the main paper. I am finally very concerned with the experimental section. No classification accuracy is reported. It is important to report it, as one can not judge the technique: it is possible to reach 80% with mathematically predefined filters (wavelets) on CIFAR10, which are interpretable. It is very important to have competitive results w.r.t. classification to convince a reader that the technique is well-founded to study CNN. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. 
Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDeep Epitome for Unravelling Generalized Hamming Network: A Fuzzy Logic Interpretation of Deep Learning\nThis paper gives a rigorous analysis of trained Generalized Hamming Networks (GHN) proposed by Fan (2017) and discloses an interesting finding about GHNs, i.e. stacked convolution layers in a GHN is equivalent to a single yet wide convolution layer. The revealed equivalence, on the theoretical side, can be regarded as a constructive manifestation of the universal approximation theorem Cybenko (1989); Hornik (1991). In practice, it has profound and multi-fold implications. For network visualization, the constructed deep epitomes at each layer provide a visualization of network internal representation that does not rely on the input data. Moreover, deep epitomes allows the direct extraction of features in just one step, without resorting to regularized optimizations used in existing visualization tools.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n pros : - interesting topic cons : - experiments are not convincing - writing & clarity could be improved sometimes The paper is a bit hard to read sometimes, because it does not refer to usual notions. First of all, the Equation (1) of the subsection 3.1 could be more precise. Use and not and define the \"dot\" operation. I have assumed the paper was using real valued networks and the euclidean scalar product. I do not see the interest to use the notation \"g\" if the circplus is used. The definitions 1 and 2 of 3.2 do not help the reader so maybe they could be removed.. Secondly, it is shown that the operations performed by the network are associative, and equivalent to a single \"epitome convolution\". I dot not agree with (on page 7) the \"manifestation of the universal approximation\" theorem. The formulations are rigorously different (2 linear+point wise non linearity), or I do not see the link and thus a comment would be appreciated. Thirdly, there is the \"data independent\" and \"data dependent\" visualisation, which correspond, if I*m correct, to a visualisation of the filters and the representations respectively. I do not understand what are the histograms of the normalised deep epitoms, or to which quantity they refer to. Similarly, I do not understand what is displayed on the Figures in the appendix: if I am correct, the equivalent {g_n} are visualised, but this is not specified and there are a lot of redundant filters, which could be commented. The figures of the appendix should be put in the main paper. I am finally very concerned with the experimental section. No classification accuracy is reported. It is important to report it, as one can not judge the technique: it is possible to reach 80% with mathematically predefined filters (wavelets) on CIFAR10, which are interpretable. It is very important to have competitive results w.r.t. classification to convince a reader that the technique is well-founded to study CNN.",
"role": "assistant"
}
]
| [
" pros : - interesting topic cons : - experiments are not convincing - writing & clarity could be improved sometimes The paper is a bit hard to read sometimes, because it does not refer to usual notions.",
"First of all, the Equation (1) of the subsection 3.1 could be more precise.",
"Use and not and define the \"dot\" operation.",
"I have assumed the paper was using real valued networks and the euclidean scalar product.",
"I do not see the interest to use the notation \"g\" if the circplus is used.",
"The definitions 1 and 2 of 3.2 do not help the reader so maybe they could be removed.. Secondly, it is shown that the operations performed by the network are associative, and equivalent to a single \"epitome convolution\".",
"I dot not agree with (on page 7) the \"manifestation of the universal approximation\" theorem.",
"The formulations are rigorously different (2 linear+point wise non linearity), or I do not see the link and thus a comment would be appreciated.",
"Thirdly, there is the \"data independent\" and \"data dependent\" visualisation, which correspond, if I*m correct, to a visualisation of the filters and the representations respectively.",
"I do not understand what are the histograms of the normalised deep epitoms, or to which quantity they refer to.",
"Similarly, I do not understand what is displayed on the Figures in the appendix: if I am correct, the equivalent {g_n} are visualised, but this is not specified and there are a lot of redundant filters, which could be commented.",
"The figures of the appendix should be put in the main paper.",
"I am finally very concerned with the experimental section.",
"No classification accuracy is reported.",
"It is important to report it, as one can not judge the technique: it is possible to reach 80% with mathematically predefined filters (wavelets) on CIFAR10, which are interpretable.",
"It is very important to have competitive results w.r.t.",
"classification to convince a reader that the technique is well-founded to study CNN."
] | {
"criticism": 9,
"example": 4,
"importance_and_relevance": 0,
"materials_and_methods": 8,
"praise": 2,
"presentation_and_reporting": 9,
"results_and_discussion": 2,
"suggestion_and_solution": 8,
"total": 17
} | 0.529412 | 0.235294 | 0 | 0.470588 | 0.117647 | 0.529412 | 0.117647 | 0.470588 | 17 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 2.470588 | 2.218069 | 0.252519 |
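The record above claims that stacked convolution layers in a GHN collapse into a single wide convolution. Below is a toy check of the underlying linear-algebra fact, assuming plain 1-D convolutions with no nonlinearity in between; this is a simplification of the GHN/"epitome" construction rather than a reproduction of it, and the signal and kernels are made up.

```python
import numpy as np

# Hypothetical 1-D signal and two small kernels (values are illustrative only).
x  = np.array([1.0, 2.0, -1.0, 0.5, 3.0])
k1 = np.array([0.2, -0.4, 0.1])
k2 = np.array([0.5, 0.3])

# Applying the two convolutions in sequence ...
two_layers = np.convolve(np.convolve(x, k1), k2)
# ... matches a single convolution with the composed ("epitome-like") kernel,
# by associativity of convolution.
one_layer = np.convolve(x, np.convolve(k1, k2))

assert np.allclose(two_layers, one_layer)
```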
Semi-supervised Outlier Detection using Generative And Adversary Framework | In a conventional binary/multi-class classification task, the decision boundary is supported by data from two or more classes. However, in one-class classification task, only data from one class are available. To build an robust outlier detector using only data from a positive class, we propose a corrupted GAN(CorGAN), a deep convolutional Generative Adversary Network requiring no convergence during training. In the adversarial process of training CorGAN, the Generator is supposed to generate outlier samples for negative class, and the Discriminator as an one-class classifier is trained to distinguish data from training datasets (i.e. positive class) and generated data from the Generator (i.e. negative class). To improve the performance of the Discriminator (one-class classifier), we also propose a lot of techniques to improve the performance of the model. The proposed model outperforms the traditional method PCA + PSVM and the solution based on Autoencoder. | The idea of using GANs for outlier detection is interesting and the problem is relevant. However, I have the following concerns about the quality and the significance: - The proposed formulation in Equation (2) is questionable. The authors say that this is used to generate outliers, and since it will generate inliers when convergence, the authors propose the technique of early stopping in Section 4.1 to avoid convergence. However, then what is learned though the proposed formulation? Since this approach is not straightforward, more theoretical analysis of the proposed method is desirable. - In addition to the above point, I guess the expectation is needed as the original formulation of GAN. Otherwise the proposed formulation does not make sense as it receives only specific data points and how to accumulate objective values across data points is not defined. - In experiments, although the authors say "lots of datasets are used", only two datasets are used, which is not enough to examine the performance of outlier detection methods. Moreover, outliers are artificially generated in these datasets, hence there is no evaluation on pure real-world datasets. To achieve the better quality of the paper, I recommend to add more real-world datasets in experiments. - As discussed in Section 2, there are already many outlier detection methods, such as distance-based outlier detection methods, but they are not compared in experiments. Although the authors argue that distance-based outlier detection methods do not work well for high-dimensional data, this is not always correct. Please see the paper: -- Zimek, A., Schubert, E., Kriegel, H.-P., A survey on unsupervised outlier detection in high-dimensional numerical data, Statistical Analysis and Data Mining (2012) This paper shows that the performance gets even better for higher dimensional data if each feature is relevant. I recommend to add some distance-based outlier detection methods as baselines in experiments. - Since parameter tuning by cross validation cannot be used due to missing information of outliers, it is important to examine the sensitivity of the proposed method with respect to changes in its parameters (a_new, lambda, and others). Otherwise in practice how to set these parameters to get better results is not obvious. * The clarity of this paper is not high as the proposed method is not well explained. In particular, please mathematically formulate each proposed technique in Section 4. 
* Since the proposed formulation is not convincing due to the above reasons and experimental evaluation is not thorough, the originality is not high. Minor comments: - P.1, L.5 in the third paragraph: architexture -> architecture - What does "Cor" of CorGAN mean? AFTER REVISION Thank you to the authors for their response and revision. Although the paper has been improved, I keep my rating due to the insufficient experimental evaluation. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSemi-supervised Outlier Detection using Generative And Adversary Framework\nIn a conventional binary/multi-class classification task, the decision boundary is supported by data from two or more classes. However, in one-class classification task, only data from one class are available. To build an robust outlier detector using only data from a positive class, we propose a corrupted GAN(CorGAN), a deep convolutional Generative Adversary Network requiring no convergence during training. In the adversarial process of training CorGAN, the Generator is supposed to generate outlier samples for negative class, and the Discriminator as an one-class classifier is trained to distinguish data from training datasets (i.e. positive class) and generated data from the Generator (i.e. negative class). To improve the performance of the Discriminator (one-class classifier), we also propose a lot of techniques to improve the performance of the model. The proposed model outperforms the traditional method PCA + PSVM and the solution based on Autoencoder.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The idea of using GANs for outlier detection is interesting and the problem is relevant. However, I have the following concerns about the quality and the significance: - The proposed formulation in Equation (2) is questionable. The authors say that this is used to generate outliers, and since it will generate inliers when convergence, the authors propose the technique of early stopping in Section 4.1 to avoid convergence. However, then what is learned though the proposed formulation? Since this approach is not straightforward, more theoretical analysis of the proposed method is desirable. - In addition to the above point, I guess the expectation is needed as the original formulation of GAN. Otherwise the proposed formulation does not make sense as it receives only specific data points and how to accumulate objective values across data points is not defined. - In experiments, although the authors say \"lots of datasets are used\", only two datasets are used, which is not enough to examine the performance of outlier detection methods. Moreover, outliers are artificially generated in these datasets, hence there is no evaluation on pure real-world datasets. To achieve the better quality of the paper, I recommend to add more real-world datasets in experiments. - As discussed in Section 2, there are already many outlier detection methods, such as distance-based outlier detection methods, but they are not compared in experiments. Although the authors argue that distance-based outlier detection methods do not work well for high-dimensional data, this is not always correct. Please see the paper: -- Zimek, A., Schubert, E., Kriegel, H.-P., A survey on unsupervised outlier detection in high-dimensional numerical data, Statistical Analysis and Data Mining (2012) This paper shows that the performance gets even better for higher dimensional data if each feature is relevant. I recommend to add some distance-based outlier detection methods as baselines in experiments. 
- Since parameter tuning by cross validation cannot be used due to missing information of outliers, it is important to examine the sensitivity of the proposed method with respect to changes in its parameters (a_new, lambda, and others). Otherwise in practice how to set these parameters to get better results is not obvious. * The clarity of this paper is not high as the proposed method is not well explained. In particular, please mathematically formulate each proposed technique in Section 4. * Since the proposed formulation is not convincing due to the above reasons and experimental evaluation is not thorough, the originality is not high. Minor comments: - P.1, L.5 in the third paragraph: architexture -> architecture - What does \"Cor\" of CorGAN mean? AFTER REVISION Thank you to the authors for their response and revision. Although the paper has been improved, I keep my rating due to the insufficient experimental evaluation.",
"role": "assistant"
}
]
| [
" The idea of using GANs for outlier detection is interesting and the problem is relevant.",
"However, I have the following concerns about the quality and the significance: - The proposed formulation in Equation (2) is questionable.",
"The authors say that this is used to generate outliers, and since it will generate inliers when convergence, the authors propose the technique of early stopping in Section 4.1 to avoid convergence.",
"However, then what is learned though the proposed formulation?",
"Since this approach is not straightforward, more theoretical analysis of the proposed method is desirable.",
"- In addition to the above point, I guess the expectation is needed as the original formulation of GAN.",
"Otherwise the proposed formulation does not make sense as it receives only specific data points and how to accumulate objective values across data points is not defined.",
"- In experiments, although the authors say \"lots of datasets are used\", only two datasets are used, which is not enough to examine the performance of outlier detection methods.",
"Moreover, outliers are artificially generated in these datasets, hence there is no evaluation on pure real-world datasets.",
"To achieve the better quality of the paper, I recommend to add more real-world datasets in experiments.",
"- As discussed in Section 2, there are already many outlier detection methods, such as distance-based outlier detection methods, but they are not compared in experiments.",
"Although the authors argue that distance-based outlier detection methods do not work well for high-dimensional data, this is not always correct.",
"Please see the paper: -- Zimek, A., Schubert, E., Kriegel, H.-P., A survey on unsupervised outlier detection in high-dimensional numerical data, Statistical Analysis and Data Mining (2012) This paper shows that the performance gets even better for higher dimensional data if each feature is relevant.",
"I recommend to add some distance-based outlier detection methods as baselines in experiments.",
"- Since parameter tuning by cross validation cannot be used due to missing information of outliers, it is important to examine the sensitivity of the proposed method with respect to changes in its parameters (a_new, lambda, and others).",
"Otherwise in practice how to set these parameters to get better results is not obvious.",
"* The clarity of this paper is not high as the proposed method is not well explained.",
"In particular, please mathematically formulate each proposed technique in Section 4.",
"* Since the proposed formulation is not convincing due to the above reasons and experimental evaluation is not thorough, the originality is not high.",
"Minor comments: - P.1, L.5 in the third paragraph: architexture -> architecture - What does \"Cor\" of CorGAN mean?",
"AFTER REVISION Thank you to the authors for their response and revision.",
"Although the paper has been improved, I keep my rating due to the insufficient experimental evaluation."
] | {
"criticism": 11,
"example": 2,
"importance_and_relevance": 1,
"materials_and_methods": 17,
"praise": 2,
"presentation_and_reporting": 3,
"results_and_discussion": 8,
"suggestion_and_solution": 6,
"total": 22
} | 0.5 | 0.090909 | 0.045455 | 0.772727 | 0.090909 | 0.136364 | 0.363636 | 0.272727 | 22 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.272727 | 0.99435 | 1.278377 |
Espresso: Efficient Forward Propagation for Binary Deep Neural Networks | There are many applications scenarios for which the computational performance and memory footprint of the prediction phase of Deep Neural Networks (DNNs) need to be optimized. Binary Deep Neural Networks (BDNNs) have been shown to be an effective way of achieving this objective. In this paper, we show how Convolutional Neural Networks (CNNs) can be implemented using binary representations. Espresso is a compact, yet powerful library written in C/CUDA that features all the functionalities required for the forward propagation of CNNs, in a binary file less than 400KB, without any external dependencies. Although it is mainly designed to take advantage of massive GPU parallelism, Espresso also provides an equivalent CPU implementation for CNNs. Espresso provides special convolutional and dense layers for BCNNs, leveraging bit-packing and bit-wise computations for efficient execution. These techniques provide a speed-up of matrix-multiplication routines, and at the same time, reduce memory usage when storing parameters and activations. We experimentally show that Espresso is significantly faster than existing implementations of optimized binary neural networks (~ 2 orders of magnitude). Espresso is released under the Apache 2.0 license and is available at http://github.com/organization/project. | The paper presents an implementation strategy (with code link anonymized for review) for fast computations of binary forward inference. The paper makes the approach seem straightforward (clever?) and there has been lots of work on fast inference of quantized, low-bit-width neural networks, but if indeed the implementation is significantly faster than commercial alternatives (e.g. from Intel) then I expect the authors have made a novel and useful contribution. The paper is written clearly, but I am not an expert in alternative approaches in this area. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEspresso: Efficient Forward Propagation for Binary Deep Neural Networks\nThere are many applications scenarios for which the computational performance and memory footprint of the prediction phase of Deep Neural Networks (DNNs) need to be optimized. Binary Deep Neural Networks (BDNNs) have been shown to be an effective way of achieving this objective. In this paper, we show how Convolutional Neural Networks (CNNs) can be implemented using binary representations. Espresso is a compact, yet powerful library written in C/CUDA that features all the functionalities required for the forward propagation of CNNs, in a binary file less than 400KB, without any external dependencies. Although it is mainly designed to take advantage of massive GPU parallelism, Espresso also provides an equivalent CPU implementation for CNNs. Espresso provides special convolutional and dense layers for BCNNs, leveraging bit-packing and bit-wise computations for efficient execution. These techniques provide a speed-up of matrix-multiplication routines, and at the same time, reduce memory usage when storing parameters and activations. We experimentally show that Espresso is significantly faster than existing implementations of optimized binary neural networks (~ 2 orders of magnitude). Espresso is released under the Apache 2.0 license and is available at http://github.com/organization/project.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper presents an implementation strategy (with code link anonymized for review) for fast computations of binary forward inference. The paper makes the approach seem straightforward (clever?) and there has been lots of work on fast inference of quantized, low-bit-width neural networks, but if indeed the implementation is significantly faster than commercial alternatives (e.g. from Intel) then I expect the authors have made a novel and useful contribution. The paper is written clearly, but I am not an expert in alternative approaches in this area.",
"role": "assistant"
}
]
| [
" The paper presents an implementation strategy (with code link anonymized for review) for fast computations of binary forward inference.",
"The paper makes the approach seem straightforward (clever?)",
"and there has been lots of work on fast inference of quantized, low-bit-width neural networks, but if indeed the implementation is significantly faster than commercial alternatives (e.g.",
"from Intel) then I expect the authors have made a novel and useful contribution.",
"The paper is written clearly, but I am not an expert in alternative approaches in this area."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 4,
"praise": 3,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 0,
"total": 5
} | 0.2 | 0 | 0.4 | 0.8 | 0.6 | 0.2 | 0.2 | 0 | 5 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 2.4 | 1.389924 | 1.010076 |
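The Espresso abstract above attributes its speed-ups to bit-packing and bit-wise computation. The sketch below illustrates the generic idea, assuming a ±1 sign encoding and an XNOR-plus-popcount dot product; this is the textbook binary-network trick, not Espresso's actual C/CUDA kernels, and the sizes and helper names are invented for illustration.

```python
import random

# Pack 64 ±1 values into the bits of an integer (1 -> +1, 0 -> -1).
N = 64
a_bits = random.getrandbits(N)
b_bits = random.getrandbits(N)

def binary_dot(a: int, b: int, n: int) -> int:
    xnor = ~(a ^ b) & ((1 << n) - 1)      # 1 wherever the signs agree
    return 2 * bin(xnor).count("1") - n   # (+1 agreements) minus (-1 disagreements)

# Reference computation on the decoded ±1 vectors confirms the bit trick.
decode = lambda bits: [1 if (bits >> i) & 1 else -1 for i in range(N)]
assert binary_dot(a_bits, b_bits, N) == sum(
    x * y for x, y in zip(decode(a_bits), decode(b_bits)))
```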
Variational Message Passing with Structured Inference Networks | Recent efforts on combining deep models with probabilistic graphical models are promising in providing flexible models that are also easy to interpret. We propose a variational message-passing algorithm for variational inference in such models. We make three contributions. First, we propose structured inference networks that incorporate the structure of the graphical model in the inference network of variational auto-encoders (VAE). Second, we establish conditions under which such inference networks enable fast amortized inference similar to VAE. Finally, we derive a variational message passing algorithm to perform efficient natural-gradient inference while retaining the efficiency of the amortized inference. By simultaneously enabling structured, amortized, and natural-gradient inference for deep structured models, our method simplifies and generalizes existing methods. | The paper seems to be significant since it integrates PGM inference with deep models. Specifically, the idea is to use the structure of the PGM to perform efficient inference. A variational message passing approach is developed which performs natural-gradient updates for the PGM part and stochastic gradient updates for the deep model part. Performance comparison is performed with an existing approach that does not utilize the PGM structure for inference. The paper does a good job of explaining the challenges of inference, and provides a systematic approach to integrating PGMs with deep model updates. As compared to the existing approach where the PGM parameters must converge before updating the DNN parameters, the proposed architecture does not require this, due to the re-parameterization which is an important contribution. The motivation of the paper, and the description of its contribution as compared to existing methods can be improved. One of the main aspects it seems is generality, but the encodings are specific to 2 types PGMs. Can this be generalized to arbitrary PGM structures? How about cases when computing Z is intractable? Could the proposed approach be adapted to such cases. I was not very sure as to why the proposed method is more general than existing approaches. Regarding the experiments, as mentioned in the paper the evaluation is performed on two fairly small scale datasets. the approach shows that the proposed methods converge faster than existing methods. However, I think there is value in the approach, and the connection between variational methods with DNNs is interesting. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nVariational Message Passing with Structured Inference Networks\nRecent efforts on combining deep models with probabilistic graphical models are promising in providing flexible models that are also easy to interpret. We propose a variational message-passing algorithm for variational inference in such models. We make three contributions. First, we propose structured inference networks that incorporate the structure of the graphical model in the inference network of variational auto-encoders (VAE). Second, we establish conditions under which such inference networks enable fast amortized inference similar to VAE. Finally, we derive a variational message passing algorithm to perform efficient natural-gradient inference while retaining the efficiency of the amortized inference. By simultaneously enabling structured, amortized, and natural-gradient inference for deep structured models, our method simplifies and generalizes existing methods.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper seems to be significant since it integrates PGM inference with deep models. Specifically, the idea is to use the structure of the PGM to perform efficient inference. A variational message passing approach is developed which performs natural-gradient updates for the PGM part and stochastic gradient updates for the deep model part. Performance comparison is performed with an existing approach that does not utilize the PGM structure for inference. The paper does a good job of explaining the challenges of inference, and provides a systematic approach to integrating PGMs with deep model updates. As compared to the existing approach where the PGM parameters must converge before updating the DNN parameters, the proposed architecture does not require this, due to the re-parameterization which is an important contribution. The motivation of the paper, and the description of its contribution as compared to existing methods can be improved. One of the main aspects it seems is generality, but the encodings are specific to 2 types PGMs. Can this be generalized to arbitrary PGM structures? How about cases when computing Z is intractable? Could the proposed approach be adapted to such cases. I was not very sure as to why the proposed method is more general than existing approaches. Regarding the experiments, as mentioned in the paper the evaluation is performed on two fairly small scale datasets. the approach shows that the proposed methods converge faster than existing methods. However, I think there is value in the approach, and the connection between variational methods with DNNs is interesting.",
"role": "assistant"
}
]
| [
" The paper seems to be significant since it integrates PGM inference with deep models.",
"Specifically, the idea is to use the structure of the PGM to perform efficient inference.",
"A variational message passing approach is developed which performs natural-gradient updates for the PGM part and stochastic gradient updates for the deep model part.",
"Performance comparison is performed with an existing approach that does not utilize the PGM structure for inference.",
"The paper does a good job of explaining the challenges of inference, and provides a systematic approach to integrating PGMs with deep model updates.",
"As compared to the existing approach where the PGM parameters must converge before updating the DNN parameters, the proposed architecture does not require this, due to the re-parameterization which is an important contribution.",
"The motivation of the paper, and the description of its contribution as compared to existing methods can be improved.",
"One of the main aspects it seems is generality, but the encodings are specific to 2 types PGMs.",
"Can this be generalized to arbitrary PGM structures?",
"How about cases when computing Z is intractable?",
"Could the proposed approach be adapted to such cases.",
"I was not very sure as to why the proposed method is more general than existing approaches.",
"Regarding the experiments, as mentioned in the paper the evaluation is performed on two fairly small scale datasets.",
"the approach shows that the proposed methods converge faster than existing methods.",
"However, I think there is value in the approach, and the connection between variational methods with DNNs is interesting."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 5,
"materials_and_methods": 15,
"praise": 4,
"presentation_and_reporting": 1,
"results_and_discussion": 5,
"suggestion_and_solution": 2,
"total": 15
} | 0.066667 | 0 | 0.333333 | 1 | 0.266667 | 0.066667 | 0.333333 | 0.133333 | 15 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.2 | 2.13687 | 0.06313 |
Sparse Attentive Backtracking: Long-Range Credit Assignment in Recurrent Networks | A major drawback of backpropagation through time (BPTT) is the difficulty of learning long-term dependencies, coming from having to propagate credit information backwards through every single step of the forward computation. This makes BPTT both computationally impractical and biologically implausible. For this reason, full backpropagation through time is rarely used on long sequences, and truncated backpropagation through time is used as a heuristic. However, this usually leads to biased estimates of the gradient in which longer term dependencies are ignored. Addressing this issue, we propose an alternative algorithm, Sparse Attentive Backtracking, which might also be related to principles used by brains to learn long-term dependencies. Sparse Attentive Backtracking learns an attention mechanism over the hidden states of the past and selectively backpropagates through paths with high attention weights. This allows the model to learn long term dependencies while only backtracking for a small number of time steps, not just from the recent past but also from attended relevant past states. | re. Introduction, page 2: Briefly explain here how SAB is different from regular Attention? Good paper. There*s not that much discussion of the proposed SAB compared to regular Attention, perhaps that could be expanded. Also, I suggest summarizing the experimental findings in the Conclusion. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSparse Attentive Backtracking: Long-Range Credit Assignment in Recurrent Networks\nA major drawback of backpropagation through time (BPTT) is the difficulty of learning long-term dependencies, coming from having to propagate credit information backwards through every single step of the forward computation. This makes BPTT both computationally impractical and biologically implausible. For this reason, full backpropagation through time is rarely used on long sequences, and truncated backpropagation through time is used as a heuristic. However, this usually leads to biased estimates of the gradient in which longer term dependencies are ignored. Addressing this issue, we propose an alternative algorithm, Sparse Attentive Backtracking, which might also be related to principles used by brains to learn long-term dependencies. Sparse Attentive Backtracking learns an attention mechanism over the hidden states of the past and selectively backpropagates through paths with high attention weights. This allows the model to learn long term dependencies while only backtracking for a small number of time steps, not just from the recent past but also from attended relevant past states.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n re. Introduction, page 2: Briefly explain here how SAB is different from regular Attention? Good paper. There*s not that much discussion of the proposed SAB compared to regular Attention, perhaps that could be expanded. Also, I suggest summarizing the experimental findings in the Conclusion.",
"role": "assistant"
}
]
| [
" re.",
"Introduction, page 2: Briefly explain here how SAB is different from regular Attention?",
"Good paper.",
"There*s not that much discussion of the proposed SAB compared to regular Attention, perhaps that could be expanded.",
"Also, I suggest summarizing the experimental findings in the Conclusion."
] | {
"criticism": 1,
"example": 1,
"importance_and_relevance": 0,
"materials_and_methods": 1,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 2,
"suggestion_and_solution": 3,
"total": 5
} | 0.2 | 0.2 | 0 | 0.2 | 0.2 | 0.4 | 0.4 | 0.6 | 5 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 2.2 | 1.189924 | 1.010076 |
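The Sparse Attentive Backtracking record above describes attending over stored past hidden states and backpropagating only through the best-scoring ones. Below is a toy sketch of just the selection step, with made-up sizes and plain dot-product scores; it is not the paper's exact mechanism, only the shape of the idea (score all remembered states, keep the top-k, build the context from those few so credit would flow only there).

```python
import numpy as np

rng = np.random.default_rng(0)
past = rng.standard_normal((50, 8))      # 50 remembered hidden states, dim 8
h_t  = rng.standard_normal(8)            # current hidden state
k = 3                                    # how many past steps to keep

scores = past @ h_t                      # simple dot-product attention scores
top_k = np.argsort(scores)[-k:]          # indices of the k most relevant past steps
weights = np.exp(scores[top_k] - scores[top_k].max())
weights /= weights.sum()                 # softmax restricted to the selected states
context = weights @ past[top_k]          # sparse summary used alongside h_t
```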
Learning from Between-class Examples for Deep Sound Recognition | Deep learning methods have achieved high performance in sound recognition tasks. Deciding how to feed the training data is important for further performance improvement. We propose a novel learning method for deep sound recognition: Between-Class learning (BC learning). Our strategy is to learn a discriminative feature space by recognizing the between-class sounds as between-class sounds. We generate between-class sounds by mixing two sounds belonging to different classes with a random ratio. We then input the mixed sound to the model and train the model to output the mixing ratio. The advantages of BC learning are not limited only to the increase in variation of the training data; BC learning leads to an enlargement of Fisher’s criterion in the feature space and a regularization of the positional relationship among the feature distributions of the classes. The experimental results show that BC learning improves the performance on various sound recognition networks, datasets, and data augmentation schemes, in which BC learning proves to be always beneficial. Furthermore, we construct a new deep sound recognition network (EnvNet-v2) and train it with BC learning. As a result, we achieved a performance surpasses the human level. | The propose data augmentation and BC learning is relevant, much robust than frequency jitter or simple data augmentation. In equation 2, please check the measure of the mixture. Why not simply use a dB criteria ? The comments about applying a CNN to local features or novel approach to increase sound recognition could be completed with some ICLR 2017 work towards injected priors using Chirplet Transform. The authors might discuss more how to extend their model to image recognition, or at least of other modalities as suggested. Section 3.2.2 shall be placed later on, and clarified. Discussion on mixing more than two sounds leads could be completed by associative properties, we think... ? | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning from Between-class Examples for Deep Sound Recognition\nDeep learning methods have achieved high performance in sound recognition tasks. Deciding how to feed the training data is important for further performance improvement. We propose a novel learning method for deep sound recognition: Between-Class learning (BC learning). Our strategy is to learn a discriminative feature space by recognizing the between-class sounds as between-class sounds. We generate between-class sounds by mixing two sounds belonging to different classes with a random ratio. We then input the mixed sound to the model and train the model to output the mixing ratio. The advantages of BC learning are not limited only to the increase in variation of the training data; BC learning leads to an enlargement of Fisher’s criterion in the feature space and a regularization of the positional relationship among the feature distributions of the classes. The experimental results show that BC learning improves the performance on various sound recognition networks, datasets, and data augmentation schemes, in which BC learning proves to be always beneficial. Furthermore, we construct a new deep sound recognition network (EnvNet-v2) and train it with BC learning. As a result, we achieved a performance surpasses the human level.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The propose data augmentation and BC learning is relevant, much robust than frequency jitter or simple data augmentation. In equation 2, please check the measure of the mixture. Why not simply use a dB criteria ? The comments about applying a CNN to local features or novel approach to increase sound recognition could be completed with some ICLR 2017 work towards injected priors using Chirplet Transform. The authors might discuss more how to extend their model to image recognition, or at least of other modalities as suggested. Section 3.2.2 shall be placed later on, and clarified. Discussion on mixing more than two sounds leads could be completed by associative properties, we think... ?",
"role": "assistant"
}
]
| [
" The propose data augmentation and BC learning is relevant, much robust than frequency jitter or simple data augmentation.",
"In equation 2, please check the measure of the mixture.",
"Why not simply use a dB criteria ?",
"The comments about applying a CNN to local features or novel approach to increase sound recognition could be completed with some ICLR 2017 work towards injected priors using Chirplet Transform.",
"The authors might discuss more how to extend their model to image recognition, or at least of other modalities as suggested.",
"Section 3.2.2 shall be placed later on, and clarified.",
"Discussion on mixing more than two sounds leads could be completed by associative properties, we think... ?"
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 0,
"materials_and_methods": 6,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 1,
"suggestion_and_solution": 6,
"total": 7
} | 0 | 0 | 0 | 0.857143 | 0.142857 | 0.285714 | 0.142857 | 0.857143 | 7 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 2.285714 | 1.717547 | 0.568167 |
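The record above describes Between-Class (BC) learning concretely enough to illustrate: two sounds from different classes are blended with a random ratio and the network is trained to output that ratio. The sketch below is a hedged, minimal reading of that description, not the authors' implementation; it uses a naive linear blend (the review questions the paper's actual mixing measure), and all names are hypothetical.

import numpy as np

def bc_mix(x1, x2, y1, y2, num_classes):
    """Hypothetical sketch of Between-Class mixing: blend two waveforms from
    different classes with a random ratio r and use the soft label (r, 1-r).
    The paper's own mixing equation accounts for sound levels; this plain
    linear blend is only an illustration."""
    r = np.random.uniform(0.0, 1.0)
    mixed = r * x1 + (1.0 - r) * x2          # naive linear mix of the two waveforms
    target = np.zeros(num_classes)
    target[y1] += r                           # soft label: mixing ratio per class
    target[y2] += 1.0 - r
    return mixed, target                      # train the model toward this soft target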
Style Memory: Making a Classifier Network Generative | Deep networks have shown great performance in classification tasks. However, the parameters learned by the classifier networks usually discard stylistic information of the input, in favour of information strictly relevant to classification. We introduce a network that has the capacity to do both classification and reconstruction by adding a "style memory" to the output layer of the network. We also show how to train such a neural network as a deep multi-layer autoencoder, jointly minimizing both classification and reconstruction losses. The generative capacity of our network demonstrates that the combination of style-memory neurons with the classifier neurons yield good reconstructions of the inputs when the classification is correct. We further investigate the nature of the style memory, and how it relates to composing digits and letters. | The paper proposes combining classification-specific neural networks with auto-encoders. This is done in a straightforward manner by designating a few nodes in the output layer for classification and few for reconstruction. The training objective is then changed to minimize the sum of the classification loss (as measured by cross-entropy for instance) and the reconstruction error (as measured by ell-2 error as is done in training auto-encoders). The authors minimize the loss function by greedy layer-wise training as is done in several prior works. The authors then perform other experiments on the learned representations in the output layer (those corresponding to classification + those corresponding to reconstruction). For example, the authors plot the nearest-neighbors for classification-features and for reconstruction-features and observe that the two are very different. The authors also observe that interpolating between two reconstruction-feature vectors (by convex combinations) seems to interpolate well between the two corresponding images. While the experimental results are interesting they are not striking especially when viewed in the context of the tremendous amount of work on auto-encoders. Training the classification-features along with reconstruction-features does not seem to give any significantly new insights. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. 
Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nStyle Memory: Making a Classifier Network Generative\nDeep networks have shown great performance in classification tasks. However, the parameters learned by the classifier networks usually discard stylistic information of the input, in favour of information strictly relevant to classification. We introduce a network that has the capacity to do both classification and reconstruction by adding a \"style memory\" to the output layer of the network. We also show how to train such a neural network as a deep multi-layer autoencoder, jointly minimizing both classification and reconstruction losses. The generative capacity of our network demonstrates that the combination of style-memory neurons with the classifier neurons yield good reconstructions of the inputs when the classification is correct. We further investigate the nature of the style memory, and how it relates to composing digits and letters.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper proposes combining classification-specific neural networks with auto-encoders. This is done in a straightforward manner by designating a few nodes in the output layer for classification and few for reconstruction. The training objective is then changed to minimize the sum of the classification loss (as measured by cross-entropy for instance) and the reconstruction error (as measured by ell-2 error as is done in training auto-encoders). The authors minimize the loss function by greedy layer-wise training as is done in several prior works. The authors then perform other experiments on the learned representations in the output layer (those corresponding to classification + those corresponding to reconstruction). For example, the authors plot the nearest-neighbors for classification-features and for reconstruction-features and observe that the two are very different. The authors also observe that interpolating between two reconstruction-feature vectors (by convex combinations) seems to interpolate well between the two corresponding images. While the experimental results are interesting they are not striking especially when viewed in the context of the tremendous amount of work on auto-encoders. Training the classification-features along with reconstruction-features does not seem to give any significantly new insights.",
"role": "assistant"
}
]
| [
" The paper proposes combining classification-specific neural networks with auto-encoders.",
"This is done in a straightforward manner by designating a few nodes in the output layer for classification and few for reconstruction.",
"The training objective is then changed to minimize the sum of the classification loss (as measured by cross-entropy for instance) and the reconstruction error (as measured by ell-2 error as is done in training auto-encoders).",
"The authors minimize the loss function by greedy layer-wise training as is done in several prior works.",
"The authors then perform other experiments on the learned representations in the output layer (those corresponding to classification + those corresponding to reconstruction).",
"For example, the authors plot the nearest-neighbors for classification-features and for reconstruction-features and observe that the two are very different.",
"The authors also observe that interpolating between two reconstruction-feature vectors (by convex combinations) seems to interpolate well between the two corresponding images.",
"While the experimental results are interesting they are not striking especially when viewed in the context of the tremendous amount of work on auto-encoders.",
"Training the classification-features along with reconstruction-features does not seem to give any significantly new insights."
] | {
"criticism": 2,
"example": 1,
"importance_and_relevance": 2,
"materials_and_methods": 9,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 3,
"suggestion_and_solution": 1,
"total": 9
} | 0.222222 | 0.111111 | 0.222222 | 1 | 0.222222 | 0 | 0.333333 | 0.111111 | 9 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 2.222222 | 1.969703 | 0.252519 |
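The review in the record above states the training objective exactly: cross-entropy on the classifier units plus an L2 reconstruction term on the style-memory units, minimized jointly. The following one-function sketch is a hedged reconstruction of that objective from the abstract and review, not the authors' code; the relative weighting of the two terms is an assumption.

import torch
import torch.nn.functional as F

def joint_loss(class_logits, labels, reconstruction, inputs, recon_weight=1.0):
    # Classification head: standard cross-entropy on the class units.
    cls = F.cross_entropy(class_logits, labels)
    # Style memory: L2 error between the decoded image and the original input.
    rec = F.mse_loss(reconstruction, inputs)
    # The paper minimizes both losses jointly; the weight here is a guess.
    return cls + recon_weight * rec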
The Context-Aware Learner | One important aspect of generalization in machine learning involves reasoning about previously seen data in new settings. Such reasoning requires learning disentangled representations of data which are interpretable in isolation, but can also be combined in a new, unseen scenario. To this end, we introduce the context-aware learner, a model based on the variational autoencoding framework, which can learn such representations across data sets exhibiting a number of distinct contexts. Moreover, it is successfully able to combine these representations to generate data not seen at training time. The model enjoys an exponential increase in representational ability for a linear increase in context count. We demonstrate that the theory readily extends to a meta-learning setting such as this, and describe a fully unsupervised model in complete generality. Finally, we validate our approach using an adaptation with weak supervision. | The authors propose an extension to the Neural Statistician which can model contexts with multiple partially overlapping features. This model can explain datasets by taking into account covariate structure needed to explain away factors of variation and it can also share this structure partially between datasets. A particularly interesting aspect of this model is the fact that it can learn these context c as features conditioned on meta-context a, which leads to a disentangled representation. This is also not dissimilar to ideas used in *Bayesian Representation Learning With Oracle Constraints* Karaletsos et al 2016 where similar contextual features c are learned to disentangle representations over observations and implicit supervision. The authors provide a clean variational inference algorithm to learn their model. However, a key problem is the following: the nature of the discrete variables being used makes them hard to be inferred with variational inference. The authors mention categorical reparametrization as their trick of choice, but do not go into empirical details int heir experiments regarding the success of this approach. In fact, it would be interesting to study which level of these variables could be analytically collapsed (such as done in the Semi-Supervised learning work by Kingma et al 2014) and which ones can be sampled effectively using a form of reparametrization. This also touches on the main criticism of the paper: While the model technically makes sense and is cleanly described and derived, the empirical evaluation is on the weak side and the rich properties of the model are not really shown off. It would be interesting if the authors could consider adding a more illustrative experiment and some more empirical results regarding inference in this model and the marginal structures that can be learned with this model in controlled toy settings. Can the model recover richer structure that was imposed during data generation? How limiting is the learning of a? How does the likelihood of the model behave under the circumstances? The experiments do not really convey how well this all will work in practice. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. 
Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nThe Context-Aware Learner\nOne important aspect of generalization in machine learning involves reasoning about previously seen data in new settings. Such reasoning requires learning disentangled representations of data which are interpretable in isolation, but can also be combined in a new, unseen scenario. To this end, we introduce the context-aware learner, a model based on the variational autoencoding framework, which can learn such representations across data sets exhibiting a number of distinct contexts. Moreover, it is successfully able to combine these representations to generate data not seen at training time. The model enjoys an exponential increase in representational ability for a linear increase in context count. We demonstrate that the theory readily extends to a meta-learning setting such as this, and describe a fully unsupervised model in complete generality. Finally, we validate our approach using an adaptation with weak supervision.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors propose an extension to the Neural Statistician which can model contexts with multiple partially overlapping features. This model can explain datasets by taking into account covariate structure needed to explain away factors of variation and it can also share this structure partially between datasets. A particularly interesting aspect of this model is the fact that it can learn these context c as features conditioned on meta-context a, which leads to a disentangled representation. This is also not dissimilar to ideas used in *Bayesian Representation Learning With Oracle Constraints* Karaletsos et al 2016 where similar contextual features c are learned to disentangle representations over observations and implicit supervision. The authors provide a clean variational inference algorithm to learn their model. However, a key problem is the following: the nature of the discrete variables being used makes them hard to be inferred with variational inference. The authors mention categorical reparametrization as their trick of choice, but do not go into empirical details int heir experiments regarding the success of this approach. In fact, it would be interesting to study which level of these variables could be analytically collapsed (such as done in the Semi-Supervised learning work by Kingma et al 2014) and which ones can be sampled effectively using a form of reparametrization. This also touches on the main criticism of the paper: While the model technically makes sense and is cleanly described and derived, the empirical evaluation is on the weak side and the rich properties of the model are not really shown off. It would be interesting if the authors could consider adding a more illustrative experiment and some more empirical results regarding inference in this model and the marginal structures that can be learned with this model in controlled toy settings. Can the model recover richer structure that was imposed during data generation? How limiting is the learning of a? How does the likelihood of the model behave under the circumstances? The experiments do not really convey how well this all will work in practice.",
"role": "assistant"
}
]
| [
" The authors propose an extension to the Neural Statistician which can model contexts with multiple partially overlapping features.",
"This model can explain datasets by taking into account covariate structure needed to explain away factors of variation and it can also share this structure partially between datasets.",
"A particularly interesting aspect of this model is the fact that it can learn these context c as features conditioned on meta-context a, which leads to a disentangled representation.",
"This is also not dissimilar to ideas used in *Bayesian Representation Learning With Oracle Constraints* Karaletsos et al 2016 where similar contextual features c are learned to disentangle representations over observations and implicit supervision.",
"The authors provide a clean variational inference algorithm to learn their model.",
"However, a key problem is the following: the nature of the discrete variables being used makes them hard to be inferred with variational inference.",
"The authors mention categorical reparametrization as their trick of choice, but do not go into empirical details int heir experiments regarding the success of this approach.",
"In fact, it would be interesting to study which level of these variables could be analytically collapsed (such as done in the Semi-Supervised learning work by Kingma et al 2014) and which ones can be sampled effectively using a form of reparametrization.",
"This also touches on the main criticism of the paper: While the model technically makes sense and is cleanly described and derived, the empirical evaluation is on the weak side and the rich properties of the model are not really shown off.",
"It would be interesting if the authors could consider adding a more illustrative experiment and some more empirical results regarding inference in this model and the marginal structures that can be learned with this model in controlled toy settings.",
"Can the model recover richer structure that was imposed during data generation?",
"How limiting is the learning of a?",
"How does the likelihood of the model behave under the circumstances?",
"The experiments do not really convey how well this all will work in practice."
] | {
"criticism": 5,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 12,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 5,
"suggestion_and_solution": 3,
"total": 14
} | 0.357143 | 0 | 0.214286 | 0.857143 | 0.071429 | 0.142857 | 0.357143 | 0.214286 | 14 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.214286 | 2.198503 | 0.015782 |
Neural Networks with Block Diagonal Inner Product Layers | Artificial neural networks have opened up a world of possibilities in data science and artificial intelligence, but neural networks are cumbersome tools that grow with the complexity of the learning problem. We make contributions to this issue by considering a modified version of the fully connected layer we call a block diagonal inner product layer. These modified layers have weight matrices that are block diagonal, turning a single fully connected layer into a set of densely connected neuron groups. This idea is a natural extension of group, or depthwise separable, convolutional layers applied to the fully connected layers. Block diagonal inner product layers can be achieved by either initializing a purely block diagonal weight matrix or by iteratively pruning off diagonal block entries. This method condenses network storage and speeds up the run time without significant adverse effect on the testing accuracy, thus offering a new approach to improve network computation efficiency. | This is a mostly experimental paper which evaluates the capabilities of neural networks with weight matrices that are block diagonal. The authors describe two methods to obtain this structure: (1) enforced during training, (2) enforced through regularization and pruning. As a second contribution, the authors show experimentally that the random matrix theory can provide a good model of the spectral behavior of the weight matrix when it is large. However, the authors only conjecture as to the potential of this method without describing clear ways of approaching this subject, which somewhat lessens the strength of their argument. Quality: this paper is of good quality Clarity: this paper is clear, but would benefit from better figures and from tables to report the numerical results instead of inserting them into plain text. Originality: this paper introduces block diagonal matrices to structure the weights of a neural network. The idea of structured matrices in this context is not new, but the diagonal block structure appears to be. Significance: This paper is somewhat significant. PROS - A new approach to analyzing the behavior of weight matrices during learning - A new structure for weight matrices that provides good performance while reducing matrix storage requirements and speeding up forward and backward passes. CONS - Some of the figures are hard to read (in particular Fig 1 & 2 left) and would benefit from a better layout. - It would be valuable to see experiments on bigger datasets than only MNIST and CIFAR-10. - I understand that the main advantage of this method is the speedup; however, providing the final accuracy as a function of the nonzero entries for slower methods (e.g. the sparse pruning showed in Fig 1. a) would provide a more complete picture. Main questions: - Could you briefly comment on the training time in section 4.1? - Could you elaborate on the last sentence of section 4.1? - You state: "singular values of an IP layer behave according to the MP distribution even after 1000s of training iterations." Is this a known fact, or something that you observed empirically? In practice, how large must the weight matrix be to observe this behavior? Nitpicks: - I believe the term "fully connected" is more standard than "inner product" and would add clarity to the paper, but I may be mistaken. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. 
This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNeural Networks with Block Diagonal Inner Product Layers\nArtificial neural networks have opened up a world of possibilities in data science and artificial intelligence, but neural networks are cumbersome tools that grow with the complexity of the learning problem. We make contributions to this issue by considering a modified version of the fully connected layer we call a block diagonal inner product layer. These modified layers have weight matrices that are block diagonal, turning a single fully connected layer into a set of densely connected neuron groups. This idea is a natural extension of group, or depthwise separable, convolutional layers applied to the fully connected layers. Block diagonal inner product layers can be achieved by either initializing a purely block diagonal weight matrix or by iteratively pruning off diagonal block entries. This method condenses network storage and speeds up the run time without significant adverse effect on the testing accuracy, thus offering a new approach to improve network computation efficiency.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This is a mostly experimental paper which evaluates the capabilities of neural networks with weight matrices that are block diagonal. The authors describe two methods to obtain this structure: (1) enforced during training, (2) enforced through regularization and pruning. As a second contribution, the authors show experimentally that the random matrix theory can provide a good model of the spectral behavior of the weight matrix when it is large. However, the authors only conjecture as to the potential of this method without describing clear ways of approaching this subject, which somewhat lessens the strength of their argument. Quality: this paper is of good quality Clarity: this paper is clear, but would benefit from better figures and from tables to report the numerical results instead of inserting them into plain text. Originality: this paper introduces block diagonal matrices to structure the weights of a neural network. The idea of structured matrices in this context is not new, but the diagonal block structure appears to be. Significance: This paper is somewhat significant. PROS - A new approach to analyzing the behavior of weight matrices during learning - A new structure for weight matrices that provides good performance while reducing matrix storage requirements and speeding up forward and backward passes. CONS - Some of the figures are hard to read (in particular Fig 1 & 2 left) and would benefit from a better layout. - It would be valuable to see experiments on bigger datasets than only MNIST and CIFAR-10. - I understand that the main advantage of this method is the speedup; however, providing the final accuracy as a function of the nonzero entries for slower methods (e.g. the sparse pruning showed in Fig 1. a) would provide a more complete picture. Main questions: - Could you briefly comment on the training time in section 4.1? - Could you elaborate on the last sentence of section 4.1? - You state: \"singular values of an IP layer behave according to the MP distribution even after 1000s of training iterations.\" Is this a known fact, or something that you observed empirically? 
In practice, how large must the weight matrix be to observe this behavior? Nitpicks: - I believe the term \"fully connected\" is more standard than \"inner product\" and would add clarity to the paper, but I may be mistaken.",
"role": "assistant"
}
]
| [
" This is a mostly experimental paper which evaluates the capabilities of neural networks with weight matrices that are block diagonal.",
"The authors describe two methods to obtain this structure: (1) enforced during training, (2) enforced through regularization and pruning.",
"As a second contribution, the authors show experimentally that the random matrix theory can provide a good model of the spectral behavior of the weight matrix when it is large.",
"However, the authors only conjecture as to the potential of this method without describing clear ways of approaching this subject, which somewhat lessens the strength of their argument.",
"Quality: this paper is of good quality Clarity: this paper is clear, but would benefit from better figures and from tables to report the numerical results instead of inserting them into plain text.",
"Originality: this paper introduces block diagonal matrices to structure the weights of a neural network.",
"The idea of structured matrices in this context is not new, but the diagonal block structure appears to be.",
"Significance: This paper is somewhat significant.",
"PROS - A new approach to analyzing the behavior of weight matrices during learning - A new structure for weight matrices that provides good performance while reducing matrix storage requirements and speeding up forward and backward passes.",
"CONS - Some of the figures are hard to read (in particular Fig 1 & 2 left) and would benefit from a better layout.",
"- It would be valuable to see experiments on bigger datasets than only MNIST and CIFAR-10.",
"- I understand that the main advantage of this method is the speedup; however, providing the final accuracy as a function of the nonzero entries for slower methods (e.g.",
"the sparse pruning showed in Fig 1. a) would provide a more complete picture.",
"Main questions: - Could you briefly comment on the training time in section 4.1?",
"- Could you elaborate on the last sentence of section 4.1?",
"- You state: \"singular values of an IP layer behave according to the MP distribution even after 1000s of training iterations.\"",
"Is this a known fact, or something that you observed empirically?",
"In practice, how large must the weight matrix be to observe this behavior?",
"Nitpicks: - I believe the term \"fully connected\" is more standard than \"inner product\" and would add clarity to the paper, but I may be mistaken."
] | {
"criticism": 4,
"example": 3,
"importance_and_relevance": 4,
"materials_and_methods": 10,
"praise": 4,
"presentation_and_reporting": 6,
"results_and_discussion": 5,
"suggestion_and_solution": 7,
"total": 19
} | 0.210526 | 0.157895 | 0.210526 | 0.526316 | 0.210526 | 0.315789 | 0.263158 | 0.368421 | 19 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.263158 | 1.69499 | 0.568167 |
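The abstract in the record above describes the layer precisely: the weight matrix of a fully connected (inner product) layer is restricted to diagonal blocks, turning the layer into a set of densely connected neuron groups. The sketch below is an assumed minimal version (one small dense sub-layer per group, outputs concatenated), not the authors' pruning-based variant; class and argument names are hypothetical. Keeping only the diagonal blocks is what yields the storage and runtime savings the abstract claims.

import torch
import torch.nn as nn

class BlockDiagonalLinear(nn.Module):
    """Hypothetical block diagonal inner product layer: input and output units
    are split into `blocks` equal groups, each with its own small dense weight
    matrix, which is equivalent to one large block diagonal weight matrix."""
    def __init__(self, in_features, out_features, blocks):
        super().__init__()
        assert in_features % blocks == 0 and out_features % blocks == 0
        self.blocks = blocks
        self.layers = nn.ModuleList(
            [nn.Linear(in_features // blocks, out_features // blocks)
             for _ in range(blocks)]
        )

    def forward(self, x):
        chunks = x.chunk(self.blocks, dim=-1)            # split units into groups
        return torch.cat([f(c) for f, c in zip(self.layers, chunks)], dim=-1)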
On the difference between building and extracting patterns: a causal analysis of deep generative models. | Generative models are important tools to capture and investigate the properties of complex empirical data. Recent developments such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) use two very similar, but extit{reverse}, deep convolutional architectures, one to generate and one to extract information from data. Does learning the parameters of both architectures obey the same rules? We exploit the causality principle of independence of mechanisms to quantify how the weights of successive layers adapt to each other. Using the recently introduced Spectral Independence Criterion, we quantify the dependencies between the kernels of successive convolutional layers and show that those are more independent for the generative process than for information extraction, in line with results from the field of causal inference. In addition, our experiments on generation of human faces suggest that more independence between successive layers of generators results in improved performance of these architectures. | This work exploits the causality principle to quantify how the weights of successive layers adapt to each other. Some interesting results are obtained, such as "enforcing more independence between successive layers of generators may lead to better performance and modularity of these architectures" . Generally, the result is interesting and the presentation is easy to follow. However, the proposed approach and the experiments are not convincible enough. For example, it is hard to obtain the conclusion "more independence lead to better performance" from the experimental results. Maybe more justifications are needed. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOn the difference between building and extracting patterns: a causal analysis of deep generative models.\nGenerative models are important tools to capture and investigate the properties of complex empirical data. Recent developments such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) use two very similar, but \textit{reverse}, deep convolutional architectures, one to generate and one to extract information from data. Does learning the parameters of both architectures obey the same rules? We exploit the causality principle of independence of mechanisms to quantify how the weights of successive layers adapt to each other. Using the recently introduced Spectral Independence Criterion, we quantify the dependencies between the kernels of successive convolutional layers and show that those are more independent for the generative process than for information extraction, in line with results from the field of causal inference. In addition, our experiments on generation of human faces suggest that more independence between successive layers of generators results in improved performance of these architectures.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This work exploits the causality principle to quantify how the weights of successive layers adapt to each other. Some interesting results are obtained, such as \"enforcing more independence between successive layers of generators may lead to better performance and modularity of these architectures\" . Generally, the result is interesting and the presentation is easy to follow. However, the proposed approach and the experiments are not convincible enough. For example, it is hard to obtain the conclusion \"more independence lead to better performance\" from the experimental results. Maybe more justifications are needed.",
"role": "assistant"
}
]
| [
" This work exploits the causality principle to quantify how the weights of successive layers adapt to each other.",
"Some interesting results are obtained, such as \"enforcing more independence between successive layers of generators may lead to better performance and modularity of these architectures\" .",
"Generally, the result is interesting and the presentation is easy to follow.",
"However, the proposed approach and the experiments are not convincible enough.",
"For example, it is hard to obtain the conclusion \"more independence lead to better performance\" from the experimental results.",
"Maybe more justifications are needed."
] | {
"criticism": 2,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 3,
"praise": 2,
"presentation_and_reporting": 2,
"results_and_discussion": 4,
"suggestion_and_solution": 2,
"total": 6
} | 0.333333 | 0.166667 | 0.166667 | 0.5 | 0.333333 | 0.333333 | 0.666667 | 0.333333 | 6 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.833333 | 2.059994 | 0.773339 |
On the difference between building and extracting patterns: a causal analysis of deep generative models. | Generative models are important tools to capture and investigate the properties of complex empirical data. Recent developments such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) use two very similar, but extit{reverse}, deep convolutional architectures, one to generate and one to extract information from data. Does learning the parameters of both architectures obey the same rules? We exploit the causality principle of independence of mechanisms to quantify how the weights of successive layers adapt to each other. Using the recently introduced Spectral Independence Criterion, we quantify the dependencies between the kernels of successive convolutional layers and show that those are more independent for the generative process than for information extraction, in line with results from the field of causal inference. In addition, our experiments on generation of human faces suggest that more independence between successive layers of generators results in improved performance of these architectures. | This paper examines the nature of convolutional filters in the encoder and a decoder of a VAE, and a generator and a discriminator of a GAN. The authors treat the inputs (X) and outputs (Y) of each filter throughout each step of the convolving process as a time series, which allows them to do a Discrete Time Fourier Transform analysis of the resulting sequences. By comparing the power spectral density of the input and the output, they get a Spectral Dependency Ratio (SDR) ratio that characterises a filter as spectrally independent (neutral), correlating (amplifies certain frequencies), or anti-correlating (dampens frequencies). This analysis is performed in the context of the Independence of Cause and Mechanism (ICM) framework. The authors claim that their analysis demonstrates a different characterisation of the inference/discriminator and generative networks in VAE and GAN, whereby the former are anti-causal and the latter are causal in line with the ICM framework. They also claim that this analysis can be used to improve the performance of the models. Pros: -- SDR characterisation of the convolutional filters is interesting -- The authors show that filters with different characteristics are responsible for different aspects of image modelling Cons: -- The authors do not actually demonstrate how their analysis can be used to improve VAEs or GANs -- Their proposed SDR analysis does not actually find much difference between the generator and the discriminator of the GAN -- The clarity of the writing could be improved (e.g. the discussion in section 3.1 seems inaccurate in the current form). Grammatical and spelling mistake are frequent. More background information could be helpful in section 2.2. All figures (but in particular Figure 3) need more informative captions -- The authors talk a lot about disentangling in the introduction, but this does not seem to be followed up in the rest of the text. Furthermore, they are missing a reference to beta-VAE (Higgins et al, 2017) when discussing VAE-based approaches to disentangled factor learning In summary, the paper is not ready for publication in its current form. The authors are advised to use the insights from their proposed SDR analysis to demonstrate quantifiable improvements the VAEs/GANs. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. 
This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOn the difference between building and extracting patterns: a causal analysis of deep generative models.\nGenerative models are important tools to capture and investigate the properties of complex empirical data. Recent developments such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) use two very similar, but \textit{reverse}, deep convolutional architectures, one to generate and one to extract information from data. Does learning the parameters of both architectures obey the same rules? We exploit the causality principle of independence of mechanisms to quantify how the weights of successive layers adapt to each other. Using the recently introduced Spectral Independence Criterion, we quantify the dependencies between the kernels of successive convolutional layers and show that those are more independent for the generative process than for information extraction, in line with results from the field of causal inference. In addition, our experiments on generation of human faces suggest that more independence between successive layers of generators results in improved performance of these architectures.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper examines the nature of convolutional filters in the encoder and a decoder of a VAE, and a generator and a discriminator of a GAN. The authors treat the inputs (X) and outputs (Y) of each filter throughout each step of the convolving process as a time series, which allows them to do a Discrete Time Fourier Transform analysis of the resulting sequences. By comparing the power spectral density of the input and the output, they get a Spectral Dependency Ratio (SDR) ratio that characterises a filter as spectrally independent (neutral), correlating (amplifies certain frequencies), or anti-correlating (dampens frequencies). This analysis is performed in the context of the Independence of Cause and Mechanism (ICM) framework. The authors claim that their analysis demonstrates a different characterisation of the inference/discriminator and generative networks in VAE and GAN, whereby the former are anti-causal and the latter are causal in line with the ICM framework. They also claim that this analysis can be used to improve the performance of the models. Pros: -- SDR characterisation of the convolutional filters is interesting -- The authors show that filters with different characteristics are responsible for different aspects of image modelling Cons: -- The authors do not actually demonstrate how their analysis can be used to improve VAEs or GANs -- Their proposed SDR analysis does not actually find much difference between the generator and the discriminator of the GAN -- The clarity of the writing could be improved (e.g. the discussion in section 3.1 seems inaccurate in the current form). Grammatical and spelling mistake are frequent. More background information could be helpful in section 2.2. All figures (but in particular Figure 3) need more informative captions -- The authors talk a lot about disentangling in the introduction, but this does not seem to be followed up in the rest of the text. Furthermore, they are missing a reference to beta-VAE (Higgins et al, 2017) when discussing VAE-based approaches to disentangled factor learning In summary, the paper is not ready for publication in its current form. 
The authors are advised to use the insights from their proposed SDR analysis to demonstrate quantifiable improvements to the VAEs/GANs.",
"role": "assistant"
}
]
| [
" This paper examines the nature of convolutional filters in the encoder and a decoder of a VAE, and a generator and a discriminator of a GAN.",
"The authors treat the inputs (X) and outputs (Y) of each filter throughout each step of the convolving process as a time series, which allows them to do a Discrete Time Fourier Transform analysis of the resulting sequences.",
"By comparing the power spectral density of the input and the output, they get a Spectral Dependency Ratio (SDR) ratio that characterises a filter as spectrally independent (neutral), correlating (amplifies certain frequencies), or anti-correlating (dampens frequencies).",
"This analysis is performed in the context of the Independence of Cause and Mechanism (ICM) framework.",
"The authors claim that their analysis demonstrates a different characterisation of the inference/discriminator and generative networks in VAE and GAN, whereby the former are anti-causal and the latter are causal in line with the ICM framework.",
"They also claim that this analysis can be used to improve the performance of the models.",
"Pros: -- SDR characterisation of the convolutional filters is interesting -- The authors show that filters with different characteristics are responsible for different aspects of image modelling Cons: -- The authors do not actually demonstrate how their analysis can be used to improve VAEs or GANs -- Their proposed SDR analysis does not actually find much difference between the generator and the discriminator of the GAN -- The clarity of the writing could be improved (e.g.",
"the discussion in section 3.1 seems inaccurate in the current form).",
"Grammatical and spelling mistake are frequent.",
"More background information could be helpful in section 2.2.",
"All figures (but in particular Figure 3) need more informative captions -- The authors talk a lot about disentangling in the introduction, but this does not seem to be followed up in the rest of the text.",
"Furthermore, they are missing a reference to beta-VAE (Higgins et al, 2017) when discussing VAE-based approaches to disentangled factor learning In summary, the paper is not ready for publication in its current form.",
"The authors are advised to use the insights from their proposed SDR analysis to demonstrate quantifiable improvements the VAEs/GANs."
] | {
"criticism": 5,
"example": 1,
"importance_and_relevance": 0,
"materials_and_methods": 8,
"praise": 1,
"presentation_and_reporting": 4,
"results_and_discussion": 4,
"suggestion_and_solution": 5,
"total": 13
} | 0.384615 | 0.076923 | 0 | 0.615385 | 0.076923 | 0.307692 | 0.307692 | 0.384615 | 13 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 2.153846 | 2.153846 | 0 |
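Across the rows above, the trailing numeric fields are consistent with a simple derivation from the per-category sentence counts: each count divided by the sentence total gives the ratio columns (e.g. 5/13 = 0.384615 in the block just above), a nonzero count gives the binary flag columns, the ratios sum to a reward-style score (2.153846 here), and a length penalty is subtracted from that score for the adjusted value (2.285714 - 0.568167 = 1.717547 in an earlier row). The sketch below spells out this assumed derivation; the field names are descriptive guesses, not a documented schema.

def derived_fields(counts):
    """Assumed reconstruction of the trailing columns from the count block."""
    cats = ["criticism", "example", "importance_and_relevance",
            "materials_and_methods", "praise", "presentation_and_reporting",
            "results_and_discussion", "suggestion_and_solution"]
    total = counts["total"]                          # number of review sentences
    ratios = {c: counts[c] / total for c in cats}    # e.g. 5 / 13 = 0.384615
    flags = {"has_" + c: int(counts[c] > 0) for c in cats}
    reward = sum(ratios.values())                    # 2.153846 for the block above
    # The adjusted score appears to be reward minus the length penalty column.
    return {**ratios, "total": total, **flags, "reward": reward}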
Latent Topic Conversational Models | Despite much success in many large-scale language tasks, sequence-to-sequence (seq2seq) models have not been an ideal choice for conversational modeling as they tend to generate generic and repetitive responses. In this paper, we propose a Latent Topic Conversational Model (LTCM) that augments the seq2seq model with a neural topic component to better model human-human conversations. The neural topic component encodes information from the source sentence to build a global “topic” distribution over words, which is then consulted by the seq2seq model to improve generation at each time step. The experimental results show that the proposed LTCM can generate more diverse and interesting responses by sampling from its learnt latent representations. In a subjective human evaluation, the judges also confirm that LTCM is the preferred option comparing to competitive baseline models. | I enjoyed this paper a lot. The paper addresses the issue of enduring topicality in conversation models. The model proposed here is basically a mash-up between a neural topic model and a seq2seq-based dialog system. The exposition is relatively clear and a reader with sufficient background in ML should have no following the model. My only concern about the paper is that is very incremental in nature -- the authors combine two separate models into a relatively straight-forward way. The results do are good and validate the approach, but the paper has little to offer beyond that. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLatent Topic Conversational Models\nDespite much success in many large-scale language tasks, sequence-to-sequence (seq2seq) models have not been an ideal choice for conversational modeling as they tend to generate generic and repetitive responses. In this paper, we propose a Latent Topic Conversational Model (LTCM) that augments the seq2seq model with a neural topic component to better model human-human conversations. The neural topic component encodes information from the source sentence to build a global “topic” distribution over words, which is then consulted by the seq2seq model to improve generation at each time step. The experimental results show that the proposed LTCM can generate more diverse and interesting responses by sampling from its learnt latent representations. In a subjective human evaluation, the judges also confirm that LTCM is the preferred option comparing to competitive baseline models.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n I enjoyed this paper a lot. The paper addresses the issue of enduring topicality in conversation models. The model proposed here is basically a mash-up between a neural topic model and a seq2seq-based dialog system. The exposition is relatively clear and a reader with sufficient background in ML should have no following the model. My only concern about the paper is that is very incremental in nature -- the authors combine two separate models into a relatively straight-forward way. The results do are good and validate the approach, but the paper has little to offer beyond that.",
"role": "assistant"
}
]
| [
" I enjoyed this paper a lot.",
"The paper addresses the issue of enduring topicality in conversation models.",
"The model proposed here is basically a mash-up between a neural topic model and a seq2seq-based dialog system.",
"The exposition is relatively clear and a reader with sufficient background in ML should have no following the model.",
"My only concern about the paper is that is very incremental in nature -- the authors combine two separate models into a relatively straight-forward way.",
"The results do are good and validate the approach, but the paper has little to offer beyond that."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 5,
"praise": 4,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 6
} | 0.333333 | 0 | 0.333333 | 0.833333 | 0.666667 | 0.166667 | 0.166667 | 0.166667 | 6 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.666667 | 1.893328 | 0.773339 |
Latent Topic Conversational Models | Despite much success in many large-scale language tasks, sequence-to-sequence (seq2seq) models have not been an ideal choice for conversational modeling as they tend to generate generic and repetitive responses. In this paper, we propose a Latent Topic Conversational Model (LTCM) that augments the seq2seq model with a neural topic component to better model human-human conversations. The neural topic component encodes information from the source sentence to build a global “topic” distribution over words, which is then consulted by the seq2seq model to improve generation at each time step. The experimental results show that the proposed LTCM can generate more diverse and interesting responses by sampling from its learnt latent representations. In a subjective human evaluation, the judges also confirm that LTCM is the preferred option comparing to competitive baseline models. | The paper proposes a conversational model with topical information, by combining seq2seq model with neural topic models. The experiments and human evaluation show the model outperform some the baseline model seq2seq and the other latent variable model variant of seq2seq. The paper is interesting, but it also has certain limitations: 1) To my understanding, it is a straightforward combination of seq2seq and one of the neural topic models without any justification. 2) The evaluation doesn*t show how the topic information could influence word generation. No of the metrics in table 2 could be used to justify the effect of topical information. 3) There is no analysis about the model behavior, therefore there is no way we could get a sense about how the model actually works. One possible analysis is to investigate the values and the corresponding words, which to some extent will tell us how the topical information be used in generation. In addition, it could be even better if there are some analysis about topics extracted by this model. This paper also doesn*t pay much attention to the existing work on topic-driven conversational modeling. For example "Topic Aware Neural Response Generation" from Xing et al., 2017. Some additional issues: 1) In the second line under equation 4, y_{t-1} -> y_{t} 2) In the first paragraph of section 3, two "MLP"*s are confusing 3) In the first paragraph of page 6, words with "highest inverse document frequency" are used as stop words? | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. 
I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLatent Topic Conversational Models\nDespite much success in many large-scale language tasks, sequence-to-sequence (seq2seq) models have not been an ideal choice for conversational modeling as they tend to generate generic and repetitive responses. In this paper, we propose a Latent Topic Conversational Model (LTCM) that augments the seq2seq model with a neural topic component to better model human-human conversations. The neural topic component encodes information from the source sentence to build a global “topic” distribution over words, which is then consulted by the seq2seq model to improve generation at each time step. The experimental results show that the proposed LTCM can generate more diverse and interesting responses by sampling from its learnt latent representations. In a subjective human evaluation, the judges also confirm that LTCM is the preferred option comparing to competitive baseline models.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper proposes a conversational model with topical information, by combining seq2seq model with neural topic models. The experiments and human evaluation show the model outperform some the baseline model seq2seq and the other latent variable model variant of seq2seq. The paper is interesting, but it also has certain limitations: 1) To my understanding, it is a straightforward combination of seq2seq and one of the neural topic models without any justification. 2) The evaluation doesn*t show how the topic information could influence word generation. No of the metrics in table 2 could be used to justify the effect of topical information. 3) There is no analysis about the model behavior, therefore there is no way we could get a sense about how the model actually works. One possible analysis is to investigate the values and the corresponding words, which to some extent will tell us how the topical information be used in generation. In addition, it could be even better if there are some analysis about topics extracted by this model. This paper also doesn*t pay much attention to the existing work on topic-driven conversational modeling. For example \"Topic Aware Neural Response Generation\" from Xing et al., 2017. Some additional issues: 1) In the second line under equation 4, y_{t-1} -> y_{t} 2) In the first paragraph of section 3, two \"MLP\"*s are confusing 3) In the first paragraph of page 6, words with \"highest inverse document frequency\" are used as stop words?",
"role": "assistant"
}
]
| [
" The paper proposes a conversational model with topical information, by combining seq2seq model with neural topic models.",
"The experiments and human evaluation show the model outperform some the baseline model seq2seq and the other latent variable model variant of seq2seq.",
"The paper is interesting, but it also has certain limitations: 1) To my understanding, it is a straightforward combination of seq2seq and one of the neural topic models without any justification.",
"2) The evaluation doesn*t show how the topic information could influence word generation.",
"No of the metrics in table 2 could be used to justify the effect of topical information.",
"3) There is no analysis about the model behavior, therefore there is no way we could get a sense about how the model actually works.",
"One possible analysis is to investigate the values and the corresponding words, which to some extent will tell us how the topical information be used in generation.",
"In addition, it could be even better if there are some analysis about topics extracted by this model.",
"This paper also doesn*t pay much attention to the existing work on topic-driven conversational modeling.",
"For example \"Topic Aware Neural Response Generation\" from Xing et al., 2017.",
"Some additional issues: 1) In the second line under equation 4, y_{t-1} -> y_{t} 2) In the first paragraph of section 3, two \"MLP\"*s are confusing 3) In the first paragraph of page 6, words with \"highest inverse document frequency\" are used as stop words?"
] | {
"criticism": 5,
"example": 2,
"importance_and_relevance": 2,
"materials_and_methods": 9,
"praise": 1,
"presentation_and_reporting": 3,
"results_and_discussion": 0,
"suggestion_and_solution": 2,
"total": 11
} | 0.454545 | 0.181818 | 0.181818 | 0.818182 | 0.090909 | 0.272727 | 0 | 0.181818 | 11 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 2.181818 | 2.118688 | 0.06313 |
Enhancing the Transferability of Adversarial Examples with Noise Reduced Gradient | Deep neural networks provide state-of-the-art performance for many applications of interest. Unfortunately they are known to be vulnerable to adversarial examples, formed by applying small but malicious perturbations to the original inputs. Moreover, the perturbations can transfer across models: adversarial examples generated for a specific model will often mislead other unseen models. Consequently the adversary can leverage it to attack against the deployed black-box systems. In this work, we demonstrate that the adversarial perturbation can be decomposed into two components: model-specific and data-dependent one, and it is the latter that mainly contributes to the transferability. Motivated by this understanding, we propose to craft adversarial examples by utilizing the noise reduced gradient (NRG) which approximates the data-dependent component. Experiments on various classification models trained on ImageNet demonstrates that the new approach enhances the transferability dramatically. We also find that low-capacity models have more powerful attack capability than high-capacity counterparts, under the condition that they have comparable test performance. These insights give rise to a principled manner to construct adversarial examples with high success rates and could potentially provide us guidance for designing effective defense approaches against black-box attacks. | The problem of exploring the cross-model (and cross-dataset) generalization of adversarial examples is relatively neglected topic. However the paper*s list of related work on that toopic is a bit lacking as in section 3.1 it omits referencing the "Explaining and Harnessing..." paper by Goodfellow et al., which presented the first convincing attempt at explaining cross-model generalization of the examples. However this paper seems to extend the explanation by a more principled study of the cross-model generalization. Again Section 3.1. presents a hypothesis on splitting the space of adversarial perturbations into two sub-manifolds. However this hypothesis seems as a tautology as the splitting is engineered in a way to formally describe the informal statement. Anyways, the paper introduces a useful terminology to aid analysis and engineer examples with improved generalization across models. In the same vain, Section 3.2 presents another hypothesis, but is claimed as fact. It claims that the model-dependent component of adversarial examples is dominated by images with high-frequency noise. This is a relatively unfounded statement, not backed up by any qualitative or quantitative evidence. Motivated by the observation that most newly generated adversarial examples are perturbations by a high frequency noise and that noise is often model-specific (which is not measured or studied sufficiently in the paper), the paper suggests adding a noise term to the FGS and IGSM methods and give extensive experimental evidence on a variety of models on ImageNet demonstrating that the transferability of the newly generated examples is improved. I am on the fence with this paper. It certainly studies an important, somewhat neglected aspect of adversarial examples, but mostly speculatively and the experimental results study the resulting algorithm rather than trying trying the verify the hypotheses on which those algorithms are based upon. 
On the plus side the paper presents very strong practical evidence that the transferability of the examples can be enhanced by such a simple methodology significantly. I think the paper would be much more compelling (are should be accepted) if it contained a more disciplined study on the hypotheses on which the methodology is based upon. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEnhancing the Transferability of Adversarial Examples with Noise Reduced Gradient\nDeep neural networks provide state-of-the-art performance for many applications of interest. Unfortunately they are known to be vulnerable to adversarial examples, formed by applying small but malicious perturbations to the original inputs. Moreover, the perturbations can transfer across models: adversarial examples generated for a specific model will often mislead other unseen models. Consequently the adversary can leverage it to attack against the deployed black-box systems. In this work, we demonstrate that the adversarial perturbation can be decomposed into two components: model-specific and data-dependent one, and it is the latter that mainly contributes to the transferability. Motivated by this understanding, we propose to craft adversarial examples by utilizing the noise reduced gradient (NRG) which approximates the data-dependent component. Experiments on various classification models trained on ImageNet demonstrates that the new approach enhances the transferability dramatically. We also find that low-capacity models have more powerful attack capability than high-capacity counterparts, under the condition that they have comparable test performance. These insights give rise to a principled manner to construct adversarial examples with high success rates and could potentially provide us guidance for designing effective defense approaches against black-box attacks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The problem of exploring the cross-model (and cross-dataset) generalization of adversarial examples is relatively neglected topic. However the paper*s list of related work on that toopic is a bit lacking as in section 3.1 it omits referencing the \"Explaining and Harnessing...\" paper by Goodfellow et al., which presented the first convincing attempt at explaining cross-model generalization of the examples. However this paper seems to extend the explanation by a more principled study of the cross-model generalization. Again Section 3.1. presents a hypothesis on splitting the space of adversarial perturbations into two sub-manifolds. However this hypothesis seems as a tautology as the splitting is engineered in a way to formally describe the informal statement. Anyways, the paper introduces a useful terminology to aid analysis and engineer examples with improved generalization across models. In the same vain, Section 3.2 presents another hypothesis, but is claimed as fact. It claims that the model-dependent component of adversarial examples is dominated by images with high-frequency noise. This is a relatively unfounded statement, not backed up by any qualitative or quantitative evidence. Motivated by the observation that most newly generated adversarial examples are perturbations by a high frequency noise and that noise is often model-specific (which is not measured or studied sufficiently in the paper), the paper suggests adding a noise term to the FGS and IGSM methods and give extensive experimental evidence on a variety of models on ImageNet demonstrating that the transferability of the newly generated examples is improved. I am on the fence with this paper. It certainly studies an important, somewhat neglected aspect of adversarial examples, but mostly speculatively and the experimental results study the resulting algorithm rather than trying trying the verify the hypotheses on which those algorithms are based upon. On the plus side the paper presents very strong practical evidence that the transferability of the examples can be enhanced by such a simple methodology significantly. 
I think the paper would be much more compelling (and should be accepted) if it contained a more disciplined study of the hypotheses on which the methodology is based.",
"role": "assistant"
}
]
| [
" The problem of exploring the cross-model (and cross-dataset) generalization of adversarial examples is relatively neglected topic.",
"However the paper*s list of related work on that toopic is a bit lacking as in section 3.1 it omits referencing the \"Explaining and Harnessing...\" paper by Goodfellow et al., which presented the first convincing attempt at explaining cross-model generalization of the examples.",
"However this paper seems to extend the explanation by a more principled study of the cross-model generalization.",
"Again Section 3.1. presents a hypothesis on splitting the space of adversarial perturbations into two sub-manifolds.",
"However this hypothesis seems as a tautology as the splitting is engineered in a way to formally describe the informal statement.",
"Anyways, the paper introduces a useful terminology to aid analysis and engineer examples with improved generalization across models.",
"In the same vain, Section 3.2 presents another hypothesis, but is claimed as fact.",
"It claims that the model-dependent component of adversarial examples is dominated by images with high-frequency noise.",
"This is a relatively unfounded statement, not backed up by any qualitative or quantitative evidence.",
"Motivated by the observation that most newly generated adversarial examples are perturbations by a high frequency noise and that noise is often model-specific (which is not measured or studied sufficiently in the paper), the paper suggests adding a noise term to the FGS and IGSM methods and give extensive experimental evidence on a variety of models on ImageNet demonstrating that the transferability of the newly generated examples is improved.",
"I am on the fence with this paper.",
"It certainly studies an important, somewhat neglected aspect of adversarial examples, but mostly speculatively and the experimental results study the resulting algorithm rather than trying trying the verify the hypotheses on which those algorithms are based upon.",
"On the plus side the paper presents very strong practical evidence that the transferability of the examples can be enhanced by such a simple methodology significantly.",
"I think the paper would be much more compelling (are should be accepted) if it contained a more disciplined study on the hypotheses on which the methodology is based upon."
] | {
"criticism": 4,
"example": 0,
"importance_and_relevance": 6,
"materials_and_methods": 10,
"praise": 3,
"presentation_and_reporting": 3,
"results_and_discussion": 2,
"suggestion_and_solution": 2,
"total": 14
} | 0.285714 | 0 | 0.428571 | 0.714286 | 0.214286 | 0.214286 | 0.142857 | 0.142857 | 14 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.142857 | 2.127075 | 0.015782 |
Enhancing the Transferability of Adversarial Examples with Noise Reduced Gradient | Deep neural networks provide state-of-the-art performance for many applications of interest. Unfortunately they are known to be vulnerable to adversarial examples, formed by applying small but malicious perturbations to the original inputs. Moreover, the perturbations can transfer across models: adversarial examples generated for a specific model will often mislead other unseen models. Consequently the adversary can leverage it to attack against the deployed black-box systems. In this work, we demonstrate that the adversarial perturbation can be decomposed into two components: model-specific and data-dependent one, and it is the latter that mainly contributes to the transferability. Motivated by this understanding, we propose to craft adversarial examples by utilizing the noise reduced gradient (NRG) which approximates the data-dependent component. Experiments on various classification models trained on ImageNet demonstrates that the new approach enhances the transferability dramatically. We also find that low-capacity models have more powerful attack capability than high-capacity counterparts, under the condition that they have comparable test performance. These insights give rise to a principled manner to construct adversarial examples with high success rates and could potentially provide us guidance for designing effective defense approaches against black-box attacks. | This paper postulates that an adversarial perturbation consists of a model-specific and data-specific component, and that amplification of the latter is best suited for adversarial attacks. This paper has many grammatical errors. The article is almost always missing from nouns. Some of the sentences need changing. For example: "training model paramater" --> "training model parameters" (assuming the neural networks have more than 1 parameter) "same or similar dataset with" --> "same or a similar dataset to" "human eyes" --> "the human eye"! "in analogous to" --> "analogous to" "start-of-the-art" --> "state-of-the-art" Some roughly chronological comments follow: In equation (1) although it is obvious that y is the output of f, you should define it. As you are considering the single highest-scoring class, there should probably be an argmax somewhere. "The best metric should be human eyes, which is unfortunately difficult to quantify". I don*t recommend that you quantify things in terms of eyes. In section 3.1 I am not convinced there is yet sufficient justification to claim that grad(f||)^A is aligned with the inter-class deviation. It would be helpful to put equation (8) here. The "human" line on figure 1a doesn*t make much sense. By u & v in the figure 1 caption you presumably the x and y axes on the plot. These should be labelled. In section 4 you write "it is meaningless to construct adversarial perturbations for the images that target models cannot classify correctly". I*m not sure this is true. Imagenet has a *lot* of dog breeds. For an adversarial attack, it may be advantageous to change the classification from "wrong breed of dog" to "not a dog at all". Something that concerns me is that, although your methods produce good results, it looks like the hyperparameters are chosen so as to overfit to the data (please do correct me if this is not the case). A better procedure would be to split the imagenet validation set in two and optimise the hyperparameters on one split, and test on the second. 
You also "try lots of .alphas", which again seems like overfitting. Target attack experiments are missing from 5.1, in 5.2 you write that it is a harder problem so it is omitted. I would argue it is still worth presenting these results even if they are less flattering. Section 6.2 feels out of place and disjointed from the narrative of the paper. A lot of choices in Section 6 feel arbitrary. In 6.3, why is resnet34 the chosen source model? In 6.4 why do you select those two target models? I think this paper contains an interesting idea, but suffers from poor writing and unprincipled experimentation. I therefore recommend it be rejected. Pros: - Promising results - Good summary of adversarial methods Cons: - Poorly written - Appears to overfit to the test data | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEnhancing the Transferability of Adversarial Examples with Noise Reduced Gradient\nDeep neural networks provide state-of-the-art performance for many applications of interest. Unfortunately they are known to be vulnerable to adversarial examples, formed by applying small but malicious perturbations to the original inputs. Moreover, the perturbations can transfer across models: adversarial examples generated for a specific model will often mislead other unseen models. Consequently the adversary can leverage it to attack against the deployed black-box systems. In this work, we demonstrate that the adversarial perturbation can be decomposed into two components: model-specific and data-dependent one, and it is the latter that mainly contributes to the transferability. Motivated by this understanding, we propose to craft adversarial examples by utilizing the noise reduced gradient (NRG) which approximates the data-dependent component. Experiments on various classification models trained on ImageNet demonstrates that the new approach enhances the transferability dramatically. We also find that low-capacity models have more powerful attack capability than high-capacity counterparts, under the condition that they have comparable test performance. These insights give rise to a principled manner to construct adversarial examples with high success rates and could potentially provide us guidance for designing effective defense approaches against black-box attacks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper postulates that an adversarial perturbation consists of a model-specific and data-specific component, and that amplification of the latter is best suited for adversarial attacks. This paper has many grammatical errors. The article is almost always missing from nouns. Some of the sentences need changing. For example: \"training model paramater\" --> \"training model parameters\" (assuming the neural networks have more than 1 parameter) \"same or similar dataset with\" --> \"same or a similar dataset to\" \"human eyes\" --> \"the human eye\"! \"in analogous to\" --> \"analogous to\" \"start-of-the-art\" --> \"state-of-the-art\" Some roughly chronological comments follow: In equation (1) although it is obvious that y is the output of f, you should define it. As you are considering the single highest-scoring class, there should probably be an argmax somewhere. \"The best metric should be human eyes, which is unfortunately difficult to quantify\". I don*t recommend that you quantify things in terms of eyes. In section 3.1 I am not convinced there is yet sufficient justification to claim that grad(f||)^A is aligned with the inter-class deviation. It would be helpful to put equation (8) here. The \"human\" line on figure 1a doesn*t make much sense. By u & v in the figure 1 caption you presumably the x and y axes on the plot. These should be labelled. In section 4 you write \"it is meaningless to construct adversarial perturbations for the images that target models cannot classify correctly\". I*m not sure this is true. Imagenet has a *lot* of dog breeds. For an adversarial attack, it may be advantageous to change the classification from \"wrong breed of dog\" to \"not a dog at all\". Something that concerns me is that, although your methods produce good results, it looks like the hyperparameters are chosen so as to overfit to the data (please do correct me if this is not the case). A better procedure would be to split the imagenet validation set in two and optimise the hyperparameters on one split, and test on the second. You also \"try lots of .alphas\", which again seems like overfitting. 
Target attack experiments are missing from 5.1, in 5.2 you write that it is a harder problem so it is omitted. I would argue it is still worth presenting these results even if they are less flattering. Section 6.2 feels out of place and disjointed from the narrative of the paper. A lot of choices in Section 6 feel arbitrary. In 6.3, why is resnet34 the chosen source model? In 6.4 why do you select those two target models? I think this paper contains an interesting idea, but suffers from poor writing and unprincipled experimentation. I therefore recommend it be rejected. Pros: - Promising results - Good summary of adversarial methods Cons: - Poorly written - Appears to overfit to the test data",
"role": "assistant"
}
]
| [
" This paper postulates that an adversarial perturbation consists of a model-specific and data-specific component, and that amplification of the latter is best suited for adversarial attacks.",
"This paper has many grammatical errors.",
"The article is almost always missing from nouns.",
"Some of the sentences need changing.",
"For example: \"training model paramater\" --> \"training model parameters\" (assuming the neural networks have more than 1 parameter) \"same or similar dataset with\" --> \"same or a similar dataset to\" \"human eyes\" --> \"the human eye\"!",
"\"in analogous to\" --> \"analogous to\" \"start-of-the-art\" --> \"state-of-the-art\" Some roughly chronological comments follow: In equation (1) although it is obvious that y is the output of f, you should define it.",
"As you are considering the single highest-scoring class, there should probably be an argmax somewhere.",
"\"The best metric should be human eyes, which is unfortunately difficult to quantify\".",
"I don*t recommend that you quantify things in terms of eyes.",
"In section 3.1 I am not convinced there is yet sufficient justification to claim that grad(f||)^A is aligned with the inter-class deviation.",
"It would be helpful to put equation (8) here.",
"The \"human\" line on figure 1a doesn*t make much sense.",
"By u & v in the figure 1 caption you presumably the x and y axes on the plot.",
"These should be labelled.",
"In section 4 you write \"it is meaningless to construct adversarial perturbations for the images that target models cannot classify correctly\".",
"I*m not sure this is true.",
"Imagenet has a *lot* of dog breeds.",
"For an adversarial attack, it may be advantageous to change the classification from \"wrong breed of dog\" to \"not a dog at all\".",
"Something that concerns me is that, although your methods produce good results, it looks like the hyperparameters are chosen so as to overfit to the data (please do correct me if this is not the case).",
"A better procedure would be to split the imagenet validation set in two and optimise the hyperparameters on one split, and test on the second.",
"You also \"try lots of .alphas\", which again seems like overfitting.",
"Target attack experiments are missing from 5.1, in 5.2 you write that it is a harder problem so it is omitted.",
"I would argue it is still worth presenting these results even if they are less flattering.",
"Section 6.2 feels out of place and disjointed from the narrative of the paper.",
"A lot of choices in Section 6 feel arbitrary.",
"In 6.3, why is resnet34 the chosen source model?",
"In 6.4 why do you select those two target models?",
"I think this paper contains an interesting idea, but suffers from poor writing and unprincipled experimentation.",
"I therefore recommend it be rejected.",
"Pros: - Promising results - Good summary of adversarial methods Cons: - Poorly written - Appears to overfit to the test data"
] | {
"criticism": 13,
"example": 6,
"importance_and_relevance": 1,
"materials_and_methods": 11,
"praise": 3,
"presentation_and_reporting": 15,
"results_and_discussion": 4,
"suggestion_and_solution": 12,
"total": 30
} | 0.433333 | 0.2 | 0.033333 | 0.366667 | 0.1 | 0.5 | 0.133333 | 0.4 | 30 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.166667 | -2.394456 | 4.561122 |
The Implicit Bias of Gradient Descent on Separable Data | We show that gradient descent on an unregularized logistic regression problem, for almost all separable datasets, converges to the same direction as the max-margin solution. The result generalizes also to other monotone decreasing loss functions with an infimum at infinity, and we also discuss a multi-class generalizations to the cross entropy loss. Furthermore, we show this convergence is very slow, and only logarithmic in the convergence of the loss itself. This can help explain the benefit of continuing to optimize the logistic or cross-entropy loss even after the training error is zero and the training loss is extremely small, and, as we show, even if the validation loss increases. Our methodology can also aid in understanding implicit regularization in more complex models and with other optimization methods. | Paper focuses on characterising behaviour of the log loss minimisation on the linearly separable data. As we know, optimisation like this does not converge in a strict mathematical sense, as the norm of the model will grow to infinity. However, one can still hope for a convergence of normalised solution (or equivalently - convergence in term of separator angle, rather than parametrisation). This paper shows that indeed, log-loss (and some other similar losses), minimised with gradient descent, leads to convergence (in the above sense) to the max-margin solution. On one hand it is an interesting property of model we train in practice, and on the other - provides nice link between two separate learning theories. Pros: - easy to follow line of argument - very interesting result of mapping "solution" of unregularised logistic regression (under gradient descent optimisation) onto hard max margin one Cons: - it is not clear in the abstract, and beginning of the paper what "convergence" means, as in the strict sense logistic regression optimisation never converges on separable data. It would be beneficial for the clarity if authors define what they mean by convergence (normalised weight vector, angle, whichever path seems most natural) as early in the paper as possible. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nThe Implicit Bias of Gradient Descent on Separable Data\nWe show that gradient descent on an unregularized logistic regression problem, for almost all separable datasets, converges to the same direction as the max-margin solution. The result generalizes also to other monotone decreasing loss functions with an infimum at infinity, and we also discuss a multi-class generalizations to the cross entropy loss. Furthermore, we show this convergence is very slow, and only logarithmic in the convergence of the loss itself. This can help explain the benefit of continuing to optimize the logistic or cross-entropy loss even after the training error is zero and the training loss is extremely small, and, as we show, even if the validation loss increases. Our methodology can also aid in understanding implicit regularization in more complex models and with other optimization methods.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Paper focuses on characterising behaviour of the log loss minimisation on the linearly separable data. As we know, optimisation like this does not converge in a strict mathematical sense, as the norm of the model will grow to infinity. However, one can still hope for a convergence of normalised solution (or equivalently - convergence in term of separator angle, rather than parametrisation). This paper shows that indeed, log-loss (and some other similar losses), minimised with gradient descent, leads to convergence (in the above sense) to the max-margin solution. On one hand it is an interesting property of model we train in practice, and on the other - provides nice link between two separate learning theories. Pros: - easy to follow line of argument - very interesting result of mapping \"solution\" of unregularised logistic regression (under gradient descent optimisation) onto hard max margin one Cons: - it is not clear in the abstract, and beginning of the paper what \"convergence\" means, as in the strict sense logistic regression optimisation never converges on separable data. It would be beneficial for the clarity if authors define what they mean by convergence (normalised weight vector, angle, whichever path seems most natural) as early in the paper as possible.",
"role": "assistant"
}
]
| [
" Paper focuses on characterising behaviour of the log loss minimisation on the linearly separable data.",
"As we know, optimisation like this does not converge in a strict mathematical sense, as the norm of the model will grow to infinity.",
"However, one can still hope for a convergence of normalised solution (or equivalently - convergence in term of separator angle, rather than parametrisation).",
"This paper shows that indeed, log-loss (and some other similar losses), minimised with gradient descent, leads to convergence (in the above sense) to the max-margin solution.",
"On one hand it is an interesting property of model we train in practice, and on the other - provides nice link between two separate learning theories.",
"Pros: - easy to follow line of argument - very interesting result of mapping \"solution\" of unregularised logistic regression (under gradient descent optimisation) onto hard max margin one Cons: - it is not clear in the abstract, and beginning of the paper what \"convergence\" means, as in the strict sense logistic regression optimisation never converges on separable data.",
"It would be beneficial for the clarity if authors define what they mean by convergence (normalised weight vector, angle, whichever path seems most natural) as early in the paper as possible."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 5,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 4,
"suggestion_and_solution": 1,
"total": 7
} | 0.142857 | 0 | 0.142857 | 0.714286 | 0.285714 | 0.142857 | 0.571429 | 0.142857 | 7 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.142857 | 1.57469 | 0.568167 |
On the State of the Art of Evaluation in Neural Language Models | Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset. | The authors perform a comprehensive validation of LSTM-based word and character language models, establishing that recent claims that other structures can consistently outperform the older stacked LSTM architecture result from failure to fully explore the hyperparameter space. Instead, with more thorough hyperparameter search, LSTMs are found to achieve state-of-the-art results on many of these language modeling tasks. This is a significant result in language modeling and a milestone in deep learning reproducibility research. The paper is clearly motivated and authoritative in its conclusions but it*s somewhat lacking in detailed model or experiment descriptions. Some further points: - There are several hyperparameters set to the "standard" or "default" value, like Adam*s beta parameter and the batch size/BPTT length. Even if it would be prohibitive to include them in the overall hyperparameter search, the community is curious about their effect and it would be interesting to hear if the authors* experience suggests that these choices are indeed reasonably well-justified. - The description of the model is ambiguous on at least two points. First, it wasn*t completely clear to me what the down-projection is (if it*s simply projecting down from the LSTM hidden size to the embedding size, it wouldn*t represent a hyperparameter the tuner can set, so I*m assuming it*s separate and prior to the conventional output projection). Second, the phrase "additive skip connections combining outputs of all layers" has a couple possible interpretations (e.g., skip connections that jump from each layer to the last layer or (my assumption) skip connections between every pair of layers?). - Fully evaluating the "claims of Collins et al. (2016), that capacities of various cells are very similar and their apparent differences result from trainability and regularisation" would likely involve adding a fourth cell to the hyperparameter sweep, one whose design is more arbitrary and is neither the result of human nor machine optimization. - The reformulation of the problem of deciding embedding and hidden sizes into one of allocating a fixed parameter budget towards the embedding and recurrent layers represents a significant conceptual step forward in understanding the causes of variation in model performance. - The plot in Figure 2 is clear and persuasive, but for reproducibility purposes it would also be nice to see an example set of strong hyperparameters in a table. The history of hyperparameter proposals and their perplexities would also make for a fantastic dataset for exploring the structure of RNN hyperparameter spaces. 
For instance, it would be helpful for future work to know which hyperparameters* effects are most nearly independent of other hyperparameters. - The choice between tied and clipped (Sak et al., 2014) LSTM gates, and their comparison to standard untied LSTM gates, is discussed only minimally, although it represents a significant difference between this paper and the most "standard" or "conventional" LSTM implementation (e.g., as provided in optimized GPU libraries). In addition to further discussion on this point, this result also suggests evaluating other recently proposed "minor changes" to the LSTM architecture such as multiplicative LSTM (Krause et al., 2016) - It would also have been nice to see a comparison between the variational/recurrent dropout parameterization "in which there is further sharing of masks between gates" and the one with "independent noise for the gates," as described in the footnote. There has been some confusion in the literature as to which of these parameterizations is better or more standard; simply justifying the choice of parameterization a little more would also help. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOn the State of the Art of Evaluation in Neural Language Models\nOngoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors perform a comprehensive validation of LSTM-based word and character language models, establishing that recent claims that other structures can consistently outperform the older stacked LSTM architecture result from failure to fully explore the hyperparameter space. Instead, with more thorough hyperparameter search, LSTMs are found to achieve state-of-the-art results on many of these language modeling tasks. This is a significant result in language modeling and a milestone in deep learning reproducibility research. The paper is clearly motivated and authoritative in its conclusions but it*s somewhat lacking in detailed model or experiment descriptions. Some further points: - There are several hyperparameters set to the \"standard\" or \"default\" value, like Adam*s beta parameter and the batch size/BPTT length. Even if it would be prohibitive to include them in the overall hyperparameter search, the community is curious about their effect and it would be interesting to hear if the authors* experience suggests that these choices are indeed reasonably well-justified. - The description of the model is ambiguous on at least two points. First, it wasn*t completely clear to me what the down-projection is (if it*s simply projecting down from the LSTM hidden size to the embedding size, it wouldn*t represent a hyperparameter the tuner can set, so I*m assuming it*s separate and prior to the conventional output projection). Second, the phrase \"additive skip connections combining outputs of all layers\" has a couple possible interpretations (e.g., skip connections that jump from each layer to the last layer or (my assumption) skip connections between every pair of layers?). - Fully evaluating the \"claims of Collins et al. (2016), that capacities of various cells are very similar and their apparent differences result from trainability and regularisation\" would likely involve adding a fourth cell to the hyperparameter sweep, one whose design is more arbitrary and is neither the result of human nor machine optimization. 
- The reformulation of the problem of deciding embedding and hidden sizes into one of allocating a fixed parameter budget towards the embedding and recurrent layers represents a significant conceptual step forward in understanding the causes of variation in model performance. - The plot in Figure 2 is clear and persuasive, but for reproducibility purposes it would also be nice to see an example set of strong hyperparameters in a table. The history of hyperparameter proposals and their perplexities would also make for a fantastic dataset for exploring the structure of RNN hyperparameter spaces. For instance, it would be helpful for future work to know which hyperparameters* effects are most nearly independent of other hyperparameters. - The choice between tied and clipped (Sak et al., 2014) LSTM gates, and their comparison to standard untied LSTM gates, is discussed only minimally, although it represents a significant difference between this paper and the most \"standard\" or \"conventional\" LSTM implementation (e.g., as provided in optimized GPU libraries). In addition to further discussion on this point, this result also suggests evaluating other recently proposed \"minor changes\" to the LSTM architecture such as multiplicative LSTM (Krause et al., 2016) - It would also have been nice to see a comparison between the variational/recurrent dropout parameterization \"in which there is further sharing of masks between gates\" and the one with \"independent noise for the gates,\" as described in the footnote. There has been some confusion in the literature as to which of these parameterizations is better or more standard; simply justifying the choice of parameterization a little more would also help.",
"role": "assistant"
}
]
| [
" The authors perform a comprehensive validation of LSTM-based word and character language models, establishing that recent claims that other structures can consistently outperform the older stacked LSTM architecture result from failure to fully explore the hyperparameter space.",
"Instead, with more thorough hyperparameter search, LSTMs are found to achieve state-of-the-art results on many of these language modeling tasks.",
"This is a significant result in language modeling and a milestone in deep learning reproducibility research.",
"The paper is clearly motivated and authoritative in its conclusions but it*s somewhat lacking in detailed model or experiment descriptions.",
"Some further points: - There are several hyperparameters set to the \"standard\" or \"default\" value, like Adam*s beta parameter and the batch size/BPTT length.",
"Even if it would be prohibitive to include them in the overall hyperparameter search, the community is curious about their effect and it would be interesting to hear if the authors* experience suggests that these choices are indeed reasonably well-justified.",
"- The description of the model is ambiguous on at least two points.",
"First, it wasn*t completely clear to me what the down-projection is (if it*s simply projecting down from the LSTM hidden size to the embedding size, it wouldn*t represent a hyperparameter the tuner can set, so I*m assuming it*s separate and prior to the conventional output projection).",
"Second, the phrase \"additive skip connections combining outputs of all layers\" has a couple possible interpretations (e.g., skip connections that jump from each layer to the last layer or (my assumption) skip connections between every pair of layers?).",
"- Fully evaluating the \"claims of Collins et al.",
"(2016), that capacities of various cells are very similar and their apparent differences result from trainability and regularisation\" would likely involve adding a fourth cell to the hyperparameter sweep, one whose design is more arbitrary and is neither the result of human nor machine optimization.",
"- The reformulation of the problem of deciding embedding and hidden sizes into one of allocating a fixed parameter budget towards the embedding and recurrent layers represents a significant conceptual step forward in understanding the causes of variation in model performance.",
"- The plot in Figure 2 is clear and persuasive, but for reproducibility purposes it would also be nice to see an example set of strong hyperparameters in a table.",
"The history of hyperparameter proposals and their perplexities would also make for a fantastic dataset for exploring the structure of RNN hyperparameter spaces.",
"For instance, it would be helpful for future work to know which hyperparameters* effects are most nearly independent of other hyperparameters.",
"- The choice between tied and clipped (Sak et al., 2014) LSTM gates, and their comparison to standard untied LSTM gates, is discussed only minimally, although it represents a significant difference between this paper and the most \"standard\" or \"conventional\" LSTM implementation (e.g., as provided in optimized GPU libraries).",
"In addition to further discussion on this point, this result also suggests evaluating other recently proposed \"minor changes\" to the LSTM architecture such as multiplicative LSTM (Krause et al., 2016) - It would also have been nice to see a comparison between the variational/recurrent dropout parameterization \"in which there is further sharing of masks between gates\" and the one with \"independent noise for the gates,\" as described in the footnote.",
"There has been some confusion in the literature as to which of these parameterizations is better or more standard; simply justifying the choice of parameterization a little more would also help."
] | {
"criticism": 3,
"example": 3,
"importance_and_relevance": 3,
"materials_and_methods": 14,
"praise": 5,
"presentation_and_reporting": 5,
"results_and_discussion": 7,
"suggestion_and_solution": 6,
"total": 18
} | 0.166667 | 0.166667 | 0.166667 | 0.777778 | 0.277778 | 0.277778 | 0.388889 | 0.333333 | 18 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.555556 | 2.160995 | 0.394561 |
On the State of the Art of Evaluation in Neural Language Models | Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset. | The authors did extensive tuning of the parameters for several recurrent neural architectures. The results are interesting. However the corpus the authors choose are quite small, the variance of the estimate will be quite high, I suspect whether the same conclusions could be drawn. It would be more convincing if there are experiments on the billion word corpus or other larger datasets, or at least on a corpus with 50 million tokens. This will use significant resources and is much more difficult, but it*s also really valuable, because it*s much more close to real world usage of language models. And less tuning is needed for these larger datasets. Finally it*s better to do some experiments on machine translation or speech recognition and see how the improvement on BLEU or WER could get. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOn the State of the Art of Evaluation in Neural Language Models\nOngoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors did extensive tuning of the parameters for several recurrent neural architectures. The results are interesting. However the corpus the authors choose are quite small, the variance of the estimate will be quite high, I suspect whether the same conclusions could be drawn. It would be more convincing if there are experiments on the billion word corpus or other larger datasets, or at least on a corpus with 50 million tokens. This will use significant resources and is much more difficult, but it*s also really valuable, because it*s much more close to real world usage of language models. And less tuning is needed for these larger datasets. Finally it*s better to do some experiments on machine translation or speech recognition and see how the improvement on BLEU or WER could get.",
"role": "assistant"
}
]
| [
" The authors did extensive tuning of the parameters for several recurrent neural architectures.",
"The results are interesting.",
"However the corpus the authors choose are quite small, the variance of the estimate will be quite high, I suspect whether the same conclusions could be drawn.",
"It would be more convincing if there are experiments on the billion word corpus or other larger datasets, or at least on a corpus with 50 million tokens.",
"This will use significant resources and is much more difficult, but it*s also really valuable, because it*s much more close to real world usage of language models.",
"And less tuning is needed for these larger datasets.",
"Finally it*s better to do some experiments on machine translation or speech recognition and see how the improvement on BLEU or WER could get."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 6,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 2,
"suggestion_and_solution": 3,
"total": 7
} | 0 | 0 | 0.285714 | 0.857143 | 0.142857 | 0.285714 | 0.285714 | 0.428571 | 7 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.285714 | 1.717547 | 0.568167 |
On the State of the Art of Evaluation in Neural Language Models | Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset. | The submitted manuscript describes an exercise in performance comparison for neural language models under standardization of the hyperparameter tuning and model selection strategies and costs. This type of study is important to give perspective to non-standardized performance scores reported across separate publications, and indeed the results here are interesting as they favour relatively simpler structures. I have a favourable impression of this paper but would hope another reviewer is more familiar with the specific application domain than I am. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOn the State of the Art of Evaluation in Neural Language Models\nOngoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The submitted manuscript describes an exercise in performance comparison for neural language models under standardization of the hyperparameter tuning and model selection strategies and costs. This type of study is important to give perspective to non-standardized performance scores reported across separate publications, and indeed the results here are interesting as they favour relatively simpler structures. I have a favourable impression of this paper but would hope another reviewer is more familiar with the specific application domain than I am.",
"role": "assistant"
}
]
| [
" The submitted manuscript describes an exercise in performance comparison for neural language models under standardization of the hyperparameter tuning and model selection strategies and costs.",
"This type of study is important to give perspective to non-standardized performance scores reported across separate publications, and indeed the results here are interesting as they favour relatively simpler structures.",
"I have a favourable impression of this paper but would hope another reviewer is more familiar with the specific application domain than I am."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 2,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 3
} | 0 | 0 | 0.333333 | 0.666667 | 0.666667 | 0 | 0.333333 | 0.333333 | 3 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.333333 | 0.75509 | 1.578243 |
Decoupling the Layers in Residual Networks | We propose a Warped Residual Network (WarpNet) using a parallelizable warp operator for forward and backward propagation to distant layers that trains faster than the original residual neural network. We apply a perturbation theory on residual networks and decouple the interactions between residual units. The resulting warp operator is a first order approximation of the output over multiple layers. The first order perturbation theory exhibits properties such as binomial path lengths and exponential gradient scaling found experimentally by Veit et al (2016). We demonstrate through an extensive performance study that the proposed network achieves comparable predictive performance to the original residual network with the same number of parameters, while achieving a significant speed-up on the total training time. As WarpNet performs model parallelism in residual network training in which weights are distributed over different GPUs, it offers speed-up and capability to train larger networks compared to original residual networks. | Paper proposes a shallow model for approximating stacks of Resnet layers, based on mathematical approximations to the Resnet equations and experimental insights, and uses this technique to train Resnet-like models in half the time on CIFAR-10 and CIFAR-100. While the experiments are not particularly impressive, I liked the originality of this paper. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDecoupling the Layers in Residual Networks\nWe propose a Warped Residual Network (WarpNet) using a parallelizable warp operator for forward and backward propagation to distant layers that trains faster than the original residual neural network. We apply a perturbation theory on residual networks and decouple the interactions between residual units. The resulting warp operator is a first order approximation of the output over multiple layers. The first order perturbation theory exhibits properties such as binomial path lengths and exponential gradient scaling found experimentally by Veit et al (2016). We demonstrate through an extensive performance study that the proposed network achieves comparable predictive performance to the original residual network with the same number of parameters, while achieving a significant speed-up on the total training time. As WarpNet performs model parallelism in residual network training in which weights are distributed over different GPUs, it offers speed-up and capability to train larger networks compared to original residual networks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Paper proposes a shallow model for approximating stacks of Resnet layers, based on mathematical approximations to the Resnet equations and experimental insights, and uses this technique to train Resnet-like models in half the time on CIFAR-10 and CIFAR-100. While the experiments are not particularly impressive, I liked the originality of this paper.",
"role": "assistant"
}
]
| [
" Paper proposes a shallow model for approximating stacks of Resnet layers, based on mathematical approximations to the Resnet equations and experimental insights, and uses this technique to train Resnet-like models in half the time on CIFAR-10 and CIFAR-100.",
"While the experiments are not particularly impressive, I liked the originality of this paper."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 2,
"praise": 1,
"presentation_and_reporting": 0,
"results_and_discussion": 0,
"suggestion_and_solution": 0,
"total": 2
} | 0.5 | 0 | 0.5 | 1 | 0.5 | 0 | 0 | 0 | 2 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 2.5 | 0.590326 | 1.909674 |
Interactive Boosting of Neural Networks for Small-sample Image Classification | Neural networks have recently shown excellent performance on numerous classi- fication tasks. These networks often have a large number of parameters and thus require much data to train. When the number of training data points is small, however, a network with high flexibility will quickly overfit the training data, resulting in a large model variance and a poor generalization performance. To address this problem, we propose a new ensemble learning method called InterBoost for small-sample image classification. In the training phase, InterBoost first randomly generates two complementary datasets to train two base networks of the same structure, separately, and then next two complementary datasets for further training the networks are generated through interaction (or information sharing) between the two base networks trained previously. This interactive training process continues iteratively until a stop criterion is met. In the testing phase, the outputs of the two networks are combined to obtain one final score for classification. Detailed analysis of the method is provided for an in-depth understanding of its mechanism. | This paper proposes interactive boosting to train two complimentary networks for relative small image classification. This is an interesting idea for small sample training, although I am not sure whether it can be applied to "really" small sample datasets since UIUC or LabelMe contains at least thousands of images. The experimental results are not very convincing, and I wish the final paper could compare with more baselines, including 1. adapting pretrained model (e.g., from ImageNet) to the proposed problem 2. deep CNN with dropout 3. deep CNN with distillation 4. deep CNN with data augmentation which all are popular techniques to apply deep network for small sample problems. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nInteractive Boosting of Neural Networks for Small-sample Image Classification\nNeural networks have recently shown excellent performance on numerous classi- fication tasks. These networks often have a large number of parameters and thus require much data to train. When the number of training data points is small, however, a network with high flexibility will quickly overfit the training data, resulting in a large model variance and a poor generalization performance. To address this problem, we propose a new ensemble learning method called InterBoost for small-sample image classification. In the training phase, InterBoost first randomly generates two complementary datasets to train two base networks of the same structure, separately, and then next two complementary datasets for further training the networks are generated through interaction (or information sharing) between the two base networks trained previously. This interactive training process continues iteratively until a stop criterion is met. In the testing phase, the outputs of the two networks are combined to obtain one final score for classification. Detailed analysis of the method is provided for an in-depth understanding of its mechanism.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes interactive boosting to train two complimentary networks for relative small image classification. This is an interesting idea for small sample training, although I am not sure whether it can be applied to \"really\" small sample datasets since UIUC or LabelMe contains at least thousands of images. The experimental results are not very convincing, and I wish the final paper could compare with more baselines, including 1. adapting pretrained model (e.g., from ImageNet) to the proposed problem 2. deep CNN with dropout 3. deep CNN with distillation 4. deep CNN with data augmentation which all are popular techniques to apply deep network for small sample problems.",
"role": "assistant"
}
]
| [
" This paper proposes interactive boosting to train two complimentary networks for relative small image classification.",
"This is an interesting idea for small sample training, although I am not sure whether it can be applied to \"really\" small sample datasets since UIUC or LabelMe contains at least thousands of images.",
"The experimental results are not very convincing, and I wish the final paper could compare with more baselines, including 1. adapting pretrained model (e.g., from ImageNet) to the proposed problem 2. deep CNN with dropout 3. deep CNN with distillation 4. deep CNN with data augmentation which all are popular techniques to apply deep network for small sample problems."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 3,
"praise": 1,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 3
} | 0.666667 | 0 | 0.333333 | 1 | 0.333333 | 0.333333 | 0.333333 | 0.333333 | 3 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 3.333333 | 1.75509 | 1.578243 |
A dynamic game approach to training robust deep policies | We present a method for evaluating the sensitivity of deep reinforcement learning (RL) policies. We also formulate a zero-sum dynamic game for designing robust deep reinforcement learning policies. Our approach mitigates the brittleness of policies when agents are trained in a simulated environment and are later exposed to the real world where it is hazardous to employ RL policies. This framework for training deep RL policies involve a zero-sum dynamic game against an adversarial agent, where the goal is to drive the system dynamics to a saddle region. Using a variant of the guided policy search algorithm, our agent learns to adopt robust policies that require less samples for learning the dynamics and performs better than the GPS algorithm. Without loss of generality, we demonstrate that deep RL policies trained in this fashion will be maximally robust to a ``worst" possible adversarial disturbances. | The paper presents a method for evaluating the sensitivity and robustness of deep RL policies, and proposes a dynamic game approach for learning robust policies. The paper oversells the approach in many ways. The authors claim that "experiments confirm that state-of-the-art reinforcement learning algorithms fail in the presence of additive disturbances, making them brittle when used in situations that call for robustness". However, their methods and experiments are only applied to Guided Policy Search (GPS), which seems like a specialized RL algorithm. Conclusions drawn from empirically running GPS on a problem cannot be generalized to all "state-of-the-art RL algorithms". In Fig 3, the authors conclude that "our algorithm uses lesser number for the GMMs and requires fewer samples to generalize to the real-world". I*m not sure how this can be concluded from Fig 3 [LEFT]. The two line graphs for different values of gamma almost overlay each other, and the cost seems to go up and down, even with number of samples on a log scale. If this shows the variance in the procedure, then the authors should run enough repeats of the experient to smooth out the variance and show the true signal (with error bars if possible). All related conclusions with regards to the dynamic game achieving higher sample efficiency for GMM dynamics fitting need to be backed up with better experimental data (or perhaps clearer presentation, if such data already exists). Figures 2 and 3 talk about optimal adversarial costs. The precise mathematical definition of this term should be clarified somewhere, since there are several cost functions described in the paper, and it*s unclear which terms are actually being plotted here. The structure of the global policies used in the experiments should be mentioned somewhere. Note about anonymity: Citation [21] breaks anonymity, since it*s referred to in the text as "our abstract". The link to the YouTube video breaks author anonymity. Further, the link to a shared dropbox folder breaks reviewer anonymity, hence I have not watched those videos. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. 
Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nA dynamic game approach to training robust deep policies\nWe present a method for evaluating the sensitivity of deep reinforcement learning (RL) policies. We also formulate a zero-sum dynamic game for designing robust deep reinforcement learning policies. Our approach mitigates the brittleness of policies when agents are trained in a simulated environment and are later exposed to the real world where it is hazardous to employ RL policies. This framework for training deep RL policies involve a zero-sum dynamic game against an adversarial agent, where the goal is to drive the system dynamics to a saddle region. Using a variant of the guided policy search algorithm, our agent learns to adopt robust policies that require less samples for learning the dynamics and performs better than the GPS algorithm. Without loss of generality, we demonstrate that deep RL policies trained in this fashion will be maximally robust to a ``worst\" possible adversarial disturbances.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper presents a method for evaluating the sensitivity and robustness of deep RL policies, and proposes a dynamic game approach for learning robust policies. The paper oversells the approach in many ways. The authors claim that \"experiments confirm that state-of-the-art reinforcement learning algorithms fail in the presence of additive disturbances, making them brittle when used in situations that call for robustness\". However, their methods and experiments are only applied to Guided Policy Search (GPS), which seems like a specialized RL algorithm. Conclusions drawn from empirically running GPS on a problem cannot be generalized to all \"state-of-the-art RL algorithms\". In Fig 3, the authors conclude that \"our algorithm uses lesser number for the GMMs and requires fewer samples to generalize to the real-world\". I*m not sure how this can be concluded from Fig 3 [LEFT]. The two line graphs for different values of gamma almost overlay each other, and the cost seems to go up and down, even with number of samples on a log scale. If this shows the variance in the procedure, then the authors should run enough repeats of the experient to smooth out the variance and show the true signal (with error bars if possible). All related conclusions with regards to the dynamic game achieving higher sample efficiency for GMM dynamics fitting need to be backed up with better experimental data (or perhaps clearer presentation, if such data already exists). Figures 2 and 3 talk about optimal adversarial costs. The precise mathematical definition of this term should be clarified somewhere, since there are several cost functions described in the paper, and it*s unclear which terms are actually being plotted here. The structure of the global policies used in the experiments should be mentioned somewhere. Note about anonymity: Citation [21] breaks anonymity, since it*s referred to in the text as \"our abstract\". The link to the YouTube video breaks author anonymity. Further, the link to a shared dropbox folder breaks reviewer anonymity, hence I have not watched those videos.",
"role": "assistant"
}
]
| [
" The paper presents a method for evaluating the sensitivity and robustness of deep RL policies, and proposes a dynamic game approach for learning robust policies.",
"The paper oversells the approach in many ways.",
"The authors claim that \"experiments confirm that state-of-the-art reinforcement learning algorithms fail in the presence of additive disturbances, making them brittle when used in situations that call for robustness\".",
"However, their methods and experiments are only applied to Guided Policy Search (GPS), which seems like a specialized RL algorithm.",
"Conclusions drawn from empirically running GPS on a problem cannot be generalized to all \"state-of-the-art RL algorithms\".",
"In Fig 3, the authors conclude that \"our algorithm uses lesser number for the GMMs and requires fewer samples to generalize to the real-world\".",
"I*m not sure how this can be concluded from Fig 3 [LEFT].",
"The two line graphs for different values of gamma almost overlay each other, and the cost seems to go up and down, even with number of samples on a log scale.",
"If this shows the variance in the procedure, then the authors should run enough repeats of the experient to smooth out the variance and show the true signal (with error bars if possible).",
"All related conclusions with regards to the dynamic game achieving higher sample efficiency for GMM dynamics fitting need to be backed up with better experimental data (or perhaps clearer presentation, if such data already exists).",
"Figures 2 and 3 talk about optimal adversarial costs.",
"The precise mathematical definition of this term should be clarified somewhere, since there are several cost functions described in the paper, and it*s unclear which terms are actually being plotted here.",
"The structure of the global policies used in the experiments should be mentioned somewhere.",
"Note about anonymity: Citation [21] breaks anonymity, since it*s referred to in the text as \"our abstract\".",
"The link to the YouTube video breaks author anonymity.",
"Further, the link to a shared dropbox folder breaks reviewer anonymity, hence I have not watched those videos."
] | {
"criticism": 5,
"example": 3,
"importance_and_relevance": 1,
"materials_and_methods": 10,
"praise": 1,
"presentation_and_reporting": 5,
"results_and_discussion": 8,
"suggestion_and_solution": 4,
"total": 16
} | 0.3125 | 0.1875 | 0.0625 | 0.625 | 0.0625 | 0.3125 | 0.5 | 0.25 | 16 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.3125 | 2.170458 | 0.142042 |
A Scalable Laplace Approximation for Neural Networks | We leverage recent insights from second-order optimisation for neural networks to construct a Kronecker factored Laplace approximation to the posterior over the weights of a trained network. Our approximation requires no modification of the training procedure, enabling practitioners to estimate the uncertainty of their models currently used in production without having to retrain them. We extensively compare our method to using Dropout and a diagonal Laplace approximation for estimating the uncertainty of a network. We demonstrate that our Kronecker factored method leads to better uncertainty estimates on out-of-distribution data and is more robust to simple adversarial attacks. Our approach only requires calculating two square curvature factor matrices for each layer. Their size is equal to the respective square of the input and output size of the layer, making the method efficient both computationally and in terms of memory usage. We illustrate its scalability by applying it to a state-of-the-art convolutional network architecture. | This paper uses recent progress in the understanding and approximation of curvature matrices in neural networks to revisit a venerable area- that of Laplace approximations to neural network posteriors. The Laplace method requires two stages - 1) obtaining a point estimate of the parameters followed by 2) estimation of the curvature. Since 1) is close to common practice it raises the appealing possibility of adding 2) after the fact, although the prior may be difficult to interpret in this case. A pitfall is that the method needs the point estimate to fall in a locally quadratic bowl or to add regularisation to make this true. The necessary amount of regularisation can be large as reported in section 5.4. The paper is generally well written. In particular the mathematical exposition attains good clarity. Much of the mathematical treatment of the curvature was already discussed by Martens and Grosse and Botev et al in previous works. The paper is generally well referenced. Given the complexity of the method, I think it would have helped to submit the code in anonymized form at this point.There are also some experiments not there that would improve the contribution. Figure 1 should include a comparison to Hamiltonian Monte Carlo and the full Laplace approximation (It is not sufficient to point to experiments in Hernandez-Lobato and Adams 2015 with a different model/prior). The size of model and data would not be prohibitive for either of these methods in this instance. All that figure 1 shows at the moment is that the proposed approximation has smaller predictive variance than the fully diagonal variant of the method. It would be interesting (but perhaps not essential) to compare the Laplace approximation to other scalable methods from the literature such as that of Louizos and Welling 2016 which uses also used matrix normal distributions. It is good that the paper includes a modern architecture with a more challenging dataset. It is a shame the method does not work better in this instance but the authors should not be penalized for reporting this. I think a paper on a probabilistic method should at some point evaluate log likelihood in a case where the test distribution is the same as the training distribution. This complements experiments where there is dataset shift and we wish to show robustness. 
I would be very interested to know how useful the implied marginal likelihoods of the approximation were, as suggested for further work in the conclusion. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nA Scalable Laplace Approximation for Neural Networks\nWe leverage recent insights from second-order optimisation for neural networks to construct a Kronecker factored Laplace approximation to the posterior over the weights of a trained network. Our approximation requires no modification of the training procedure, enabling practitioners to estimate the uncertainty of their models currently used in production without having to retrain them. We extensively compare our method to using Dropout and a diagonal Laplace approximation for estimating the uncertainty of a network. We demonstrate that our Kronecker factored method leads to better uncertainty estimates on out-of-distribution data and is more robust to simple adversarial attacks. Our approach only requires calculating two square curvature factor matrices for each layer. Their size is equal to the respective square of the input and output size of the layer, making the method efficient both computationally and in terms of memory usage. We illustrate its scalability by applying it to a state-of-the-art convolutional network architecture.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper uses recent progress in the understanding and approximation of curvature matrices in neural networks to revisit a venerable area- that of Laplace approximations to neural network posteriors. The Laplace method requires two stages - 1) obtaining a point estimate of the parameters followed by 2) estimation of the curvature. Since 1) is close to common practice it raises the appealing possibility of adding 2) after the fact, although the prior may be difficult to interpret in this case. A pitfall is that the method needs the point estimate to fall in a locally quadratic bowl or to add regularisation to make this true. The necessary amount of regularisation can be large as reported in section 5.4. The paper is generally well written. In particular the mathematical exposition attains good clarity. Much of the mathematical treatment of the curvature was already discussed by Martens and Grosse and Botev et al in previous works. The paper is generally well referenced. Given the complexity of the method, I think it would have helped to submit the code in anonymized form at this point.There are also some experiments not there that would improve the contribution. Figure 1 should include a comparison to Hamiltonian Monte Carlo and the full Laplace approximation (It is not sufficient to point to experiments in Hernandez-Lobato and Adams 2015 with a different model/prior). The size of model and data would not be prohibitive for either of these methods in this instance. All that figure 1 shows at the moment is that the proposed approximation has smaller predictive variance than the fully diagonal variant of the method. It would be interesting (but perhaps not essential) to compare the Laplace approximation to other scalable methods from the literature such as that of Louizos and Welling 2016 which uses also used matrix normal distributions. It is good that the paper includes a modern architecture with a more challenging dataset. It is a shame the method does not work better in this instance but the authors should not be penalized for reporting this. 
I think a paper on a probabilistic method should at some point evaluate log likelihood in a case where the test distribution is the same as the training distribution. This complements experiments where there is dataset shift and we wish to show robustness. I would be very interested to know how useful the implied marginal likelihoods of the approximation were, as suggested for further work in the conclusion.",
"role": "assistant"
}
]
| [
" This paper uses recent progress in the understanding and approximation of curvature matrices in neural networks to revisit a venerable area- that of Laplace approximations to neural network posteriors.",
"The Laplace method requires two stages - 1) obtaining a point estimate of the parameters followed by 2) estimation of the curvature.",
"Since 1) is close to common practice it raises the appealing possibility of adding 2) after the fact, although the prior may be difficult to interpret in this case.",
"A pitfall is that the method needs the point estimate to fall in a locally quadratic bowl or to add regularisation to make this true.",
"The necessary amount of regularisation can be large as reported in section 5.4.",
"The paper is generally well written.",
"In particular the mathematical exposition attains good clarity.",
"Much of the mathematical treatment of the curvature was already discussed by Martens and Grosse and Botev et al in previous works.",
"The paper is generally well referenced.",
"Given the complexity of the method, I think it would have helped to submit the code in anonymized form at this point.There are also some experiments not there that would improve the contribution.",
"Figure 1 should include a comparison to Hamiltonian Monte Carlo and the full Laplace approximation (It is not sufficient to point to experiments in Hernandez-Lobato and Adams 2015 with a different model/prior).",
"The size of model and data would not be prohibitive for either of these methods in this instance.",
"All that figure 1 shows at the moment is that the proposed approximation has smaller predictive variance than the fully diagonal variant of the method.",
"It would be interesting (but perhaps not essential) to compare the Laplace approximation to other scalable methods from the literature such as that of Louizos and Welling 2016 which uses also used matrix normal distributions.",
"It is good that the paper includes a modern architecture with a more challenging dataset.",
"It is a shame the method does not work better in this instance but the authors should not be penalized for reporting this.",
"I think a paper on a probabilistic method should at some point evaluate log likelihood in a case where the test distribution is the same as the training distribution.",
"This complements experiments where there is dataset shift and we wish to show robustness.",
"I would be very interested to know how useful the implied marginal likelihoods of the approximation where, as suggested for further work in the conclusion."
] | {
"criticism": 2,
"example": 2,
"importance_and_relevance": 4,
"materials_and_methods": 13,
"praise": 5,
"presentation_and_reporting": 5,
"results_and_discussion": 4,
"suggestion_and_solution": 9,
"total": 19
} | 0.105263 | 0.105263 | 0.210526 | 0.684211 | 0.263158 | 0.263158 | 0.210526 | 0.473684 | 19 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.315789 | 1.747622 | 0.568167 |
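The pipe-separated numeric fields that close each record appear to be derived from its criteria_count block: each fraction matches a category count divided by the sentence total (e.g. 2/19 ≈ 0.105263 for criticism above), the run of 0/1 values flags which categories are non-zero, and the last three numbers are consistent with a reward equal to the sum of the eight fractions (44/19 ≈ 2.315789), that reward minus a penalty (1.747622), and the penalty itself (0.568167). A minimal sketch of that reading follows; the field names used in the code and the interpretation of the final column as a length penalty are assumptions, since only the resulting values appear in the dump.

```python
# Sketch of how the derived numeric fields appear to follow from criteria_count.
# Assumptions: the per-category order matches the key order shown above, and the
# final column is a length penalty subtracted from the reward; names are placeholders.

CATEGORIES = [
    "criticism", "example", "importance_and_relevance", "materials_and_methods",
    "praise", "presentation_and_reporting", "results_and_discussion",
    "suggestion_and_solution",
]

def derived_fields(criteria_count, length_penalty):
    """Recompute per-category fractions, presence flags, and reward values."""
    total = criteria_count["total"]
    fractions = {c: criteria_count[c] / total for c in CATEGORIES}
    flags = {"has_" + c: int(criteria_count[c] > 0) for c in CATEGORIES}
    reward = sum(fractions.values())
    return {**fractions, **flags, "total": total,
            "reward": reward, "reward_length_adjusted": reward - length_penalty}

if __name__ == "__main__":
    # criteria_count values copied from the record directly above (total = 19);
    # 0.568167 is that record's final numeric field, read here as the penalty.
    counts = {"criticism": 2, "example": 2, "importance_and_relevance": 4,
              "materials_and_methods": 13, "praise": 5,
              "presentation_and_reporting": 5, "results_and_discussion": 4,
              "suggestion_and_solution": 9, "total": 19}
    out = derived_fields(counts, length_penalty=0.568167)
    assert round(out["criticism"], 6) == 0.105263              # 2 / 19
    assert round(out["materials_and_methods"], 6) == 0.684211  # 13 / 19
    assert round(out["reward"], 6) == 2.315789                 # sum of the fractions
    assert round(out["reward_length_adjusted"], 6) == 1.747622
```

The same relationships hold for every complete record in this excerpt; the next record is checked below.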
A Scalable Laplace Approximation for Neural Networks | We leverage recent insights from second-order optimisation for neural networks to construct a Kronecker factored Laplace approximation to the posterior over the weights of a trained network. Our approximation requires no modification of the training procedure, enabling practitioners to estimate the uncertainty of their models currently used in production without having to retrain them. We extensively compare our method to using Dropout and a diagonal Laplace approximation for estimating the uncertainty of a network. We demonstrate that our Kronecker factored method leads to better uncertainty estimates on out-of-distribution data and is more robust to simple adversarial attacks. Our approach only requires calculating two square curvature factor matrices for each layer. Their size is equal to the respective square of the input and output size of the layer, making the method efficient both computationally and in terms of memory usage. We illustrate its scalability by applying it to a state-of-the-art convolutional network architecture. | This paper proposes a Laplace approximation to approximate the posterior distribution over the parameters of deep networks. The idea is interesting and the realization of the paper is good. The idea builds upon previous work in scalable Gauss-Newton methods for optimization in deep networks, notably Botev et al., ICML 2017. In this respect, I think that the novelty in the current submission is limited, as the approximation is essentially what proposed in Botev et al., ICML 2017. The Laplace approximation requires the Hessian of the posterior, so techniques developed for Gauss-Newton optimization can straightforwardly be applied to construct Laplace approximations. Having said that, the experimental evaluation is quite interesting and in-depth. I think it would have been interesting to report comparisons with factorized variational inference (Graves, 2011) as it is a fairly standard and widely adopted in Bayesian deep learning. This would have been an interesting way to support the claims on the poor approximation offered by standard variational inference. I believe that the independence assumption across layers is a limiting factor of the proposed approximation strategy. Intuitively, changes in the weights in a given layer should affect the weights in other layers, so I would expect the posterior distribution over all the weights to reflect this through correlations across layers. I wonder how these results can be generalized to relax the independence assumption. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nA Scalable Laplace Approximation for Neural Networks\nWe leverage recent insights from second-order optimisation for neural networks to construct a Kronecker factored Laplace approximation to the posterior over the weights of a trained network. Our approximation requires no modification of the training procedure, enabling practitioners to estimate the uncertainty of their models currently used in production without having to retrain them. We extensively compare our method to using Dropout and a diagonal Laplace approximation for estimating the uncertainty of a network. We demonstrate that our Kronecker factored method leads to better uncertainty estimates on out-of-distribution data and is more robust to simple adversarial attacks. Our approach only requires calculating two square curvature factor matrices for each layer. Their size is equal to the respective square of the input and output size of the layer, making the method efficient both computationally and in terms of memory usage. We illustrate its scalability by applying it to a state-of-the-art convolutional network architecture.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes a Laplace approximation to approximate the posterior distribution over the parameters of deep networks. The idea is interesting and the realization of the paper is good. The idea builds upon previous work in scalable Gauss-Newton methods for optimization in deep networks, notably Botev et al., ICML 2017. In this respect, I think that the novelty in the current submission is limited, as the approximation is essentially what proposed in Botev et al., ICML 2017. The Laplace approximation requires the Hessian of the posterior, so techniques developed for Gauss-Newton optimization can straightforwardly be applied to construct Laplace approximations. Having said that, the experimental evaluation is quite interesting and in-depth. I think it would have been interesting to report comparisons with factorized variational inference (Graves, 2011) as it is a fairly standard and widely adopted in Bayesian deep learning. This would have been an interesting way to support the claims on the poor approximation offered by standard variational inference. I believe that the independence assumption across layers is a limiting factor of the proposed approximation strategy. Intuitively, changes in the weights in a given layer should affect the weights in other layers, so I would expect the posterior distribution over all the weights to reflect this through correlations across layers. I wonder how these results can be generalized to relax the independence assumption.",
"role": "assistant"
}
]
| [
" This paper proposes a Laplace approximation to approximate the posterior distribution over the parameters of deep networks.",
"The idea is interesting and the realization of the paper is good.",
"The idea builds upon previous work in scalable Gauss-Newton methods for optimization in deep networks, notably Botev et al., ICML 2017.",
"In this respect, I think that the novelty in the current submission is limited, as the approximation is essentially what proposed in Botev et al., ICML 2017.",
"The Laplace approximation requires the Hessian of the posterior, so techniques developed for Gauss-Newton optimization can straightforwardly be applied to construct Laplace approximations.",
"Having said that, the experimental evaluation is quite interesting and in-depth.",
"I think it would have been interesting to report comparisons with factorized variational inference (Graves, 2011) as it is a fairly standard and widely adopted in Bayesian deep learning.",
"This would have been an interesting way to support the claims on the poor approximation offered by standard variational inference.",
"I believe that the independence assumption across layers is a limiting factor of the proposed approximation strategy.",
"Intuitively, changes in the weights in a given layer should affect the weights in other layers, so I would expect the posterior distribution over all the weights to reflect this through correlations across layers.",
"I wonder how these results can be generalized to relax the independence assumption."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 5,
"materials_and_methods": 9,
"praise": 3,
"presentation_and_reporting": 0,
"results_and_discussion": 6,
"suggestion_and_solution": 5,
"total": 11
} | 0.090909 | 0 | 0.454545 | 0.818182 | 0.272727 | 0 | 0.545455 | 0.454545 | 11 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.636364 | 2.573234 | 0.06313 |
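A standalone check of the same arithmetic against the record just above (total = 11), reading its final numeric field, 0.06313, as the penalty; again a sketch, with nothing taken from the dump beyond the values themselves.

```python
# Inline consistency check; category counts copied from the criteria_count block above.
counts = [1, 0, 5, 9, 3, 0, 6, 5]   # criticism .. suggestion_and_solution, total = 11
total = 11
reward = sum(c / total for c in counts)
assert round(counts[0] / total, 6) == 0.090909   # criticism fraction
assert round(counts[3] / total, 6) == 0.818182   # materials_and_methods fraction
assert round(reward, 6) == 2.636364
assert round(reward - 0.06313, 6) == 2.573234    # 0.06313 assumed to be the length penalty
```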
Kronecker-factored Curvature Approximations for Recurrent Neural Networks | Kronecker-factor Approximate Curvature (Martens & Grosse, 2015) (K-FAC) is a 2nd-order optimization method which has been shown to give state-of-the-art performance on large-scale neural network optimization tasks (Ba et al., 2017). It is based on an approximation to the Fisher information matrix (FIM) that makes assumptions about the particular structure of the network and the way it is parameterized. The original K-FAC method was applicable only to fully-connected networks, although it has been recently extended by Grosse & Martens (2016) to handle convolutional networks as well. In this work we extend the method to handle RNNs by introducing a novel approximation to the FIM for RNNs. This approximation works by modelling the covariance structure between the gradient contributions at different time-steps using a chain-structured linear Gaussian graphical model, summing the various cross-covariances, and computing the inverse in closed form. We demonstrate in experiments that our method significantly outperforms general purpose state-of-the-art optimizers like SGD with momentum and Adam on several challenging RNN training tasks. | In this paper, the authors present a second-order method that is specifically designed for RNNs. The paper overall is well-written and I enjoyed reading the paper. The main idea of the paper is to extend the existing kronecker-factored algorithms to RNNs. In order to obtain a tractable formulation, the authors impose certain assumptions and provide detailed derivations. Even though the gain in the convergence speed is not very impressive and the algorithm is quite complicated and possibly not very accessible by deep learning practitioners, I still believe this is a novel and valuable contribution and will be of interest to the community. I only have some minor corrections: 1) Sec 2.1: typo "is is" 2) Sec 2.2: typo "esstentiallybe" 3) Sec 2.2: (F+lambda I) --> should be inverse 4) The authors should include a proper conclusion | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nKronecker-factored Curvature Approximations for Recurrent Neural Networks\nKronecker-factor Approximate Curvature (Martens & Grosse, 2015) (K-FAC) is a 2nd-order optimization method which has been shown to give state-of-the-art performance on large-scale neural network optimization tasks (Ba et al., 2017). It is based on an approximation to the Fisher information matrix (FIM) that makes assumptions about the particular structure of the network and the way it is parameterized. The original K-FAC method was applicable only to fully-connected networks, although it has been recently extended by Grosse & Martens (2016) to handle convolutional networks as well. In this work we extend the method to handle RNNs by introducing a novel approximation to the FIM for RNNs. This approximation works by modelling the covariance structure between the gradient contributions at different time-steps using a chain-structured linear Gaussian graphical model, summing the various cross-covariances, and computing the inverse in closed form. We demonstrate in experiments that our method significantly outperforms general purpose state-of-the-art optimizers like SGD with momentum and Adam on several challenging RNN training tasks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n In this paper, the authors present a second-order method that is specifically designed for RNNs. The paper overall is well-written and I enjoyed reading the paper. The main idea of the paper is to extend the existing kronecker-factored algorithms to RNNs. In order to obtain a tractable formulation, the authors impose certain assumptions and provide detailed derivations. Even though the gain in the convergence speed is not very impressive and the algorithm is quite complicated and possibly not very accessible by deep learning practitioners, I still believe this is a novel and valuable contribution and will be of interest to the community. I only have some minor corrections: 1) Sec 2.1: typo \"is is\" 2) Sec 2.2: typo \"esstentiallybe\" 3) Sec 2.2: (F+lambda I) --> should be inverse 4) The authors should include a proper conclusion",
"role": "assistant"
}
]
| [
" In this paper, the authors present a second-order method that is specifically designed for RNNs.",
"The paper overall is well-written and I enjoyed reading the paper.",
"The main idea of the paper is to extend the existing kronecker-factored algorithms to RNNs.",
"In order to obtain a tractable formulation, the authors impose certain assumptions and provide detailed derivations.",
"Even though the gain in the convergence speed is not very impressive and the algorithm is quite complicated and possibly not very accessible by deep learning practitioners, I still believe this is a novel and valuable contribution and will be of interest to the community.",
"I only have some minor corrections: 1) Sec 2.1: typo \"is is\" 2) Sec 2.2: typo \"esstentiallybe\" 3) Sec 2.2: (F+lambda I) --> should be inverse 4) The authors should include a proper conclusion"
] | {
"criticism": 1,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 3,
"praise": 2,
"presentation_and_reporting": 3,
"results_and_discussion": 2,
"suggestion_and_solution": 1,
"total": 6
} | 0.166667 | 0.166667 | 0.166667 | 0.5 | 0.333333 | 0.5 | 0.333333 | 0.166667 | 6 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.333333 | 1.559994 | 0.773339 |
The Variational Homoencoder: Learning to Infer High-Capacity Generative Models from Few Examples | Hierarchical Bayesian methods have the potential to unify many related tasks (e.g. k-shot classification, conditional, and unconditional generation) by framing each as inference within a single generative model. We show that existing approaches for learning such models can fail on expressive generative networks such as PixelCNNs, by describing the global distribution with little reliance on latent variables. To address this, we develop a modification of the Variational Autoencoder in which encoded observations are decoded to new elements from the same class; the result, which we call a Variational Homoencoder (VHE), may be understood as training a hierarchical latent variable model which better utilises latent variables in these cases. Using this framework enables us to train a hierarchical PixelCNN for the Omniglot dataset, outperforming all existing models on test set likelihood. With a single model we achieve both strong one-shot generation and near human-level classification, competitive with state-of-the-art discriminative classifiers. The VHE objective extends naturally to richer dataset structures such as factorial or hierarchical categories, as we illustrate by training models to separate character content from simple variations in drawing style, and to generalise the style of an alphabet to new characters. | The paper presents some conceptually incremental improvements over the models in “Neural Statistician” and “Generative matching networks”. Nevertheless, it is well written and I think it is solid work with reasonable convincing experiments and good results. Although, the authors use powerful PixelCNN priors and decoders and they do not really disentangle to what degree their good results rely on the capabilities of these autoregressive components. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nThe Variational Homoencoder: Learning to Infer High-Capacity Generative Models from Few Examples\nHierarchical Bayesian methods have the potential to unify many related tasks (e.g. k-shot classification, conditional, and unconditional generation) by framing each as inference within a single generative model. We show that existing approaches for learning such models can fail on expressive generative networks such as PixelCNNs, by describing the global distribution with little reliance on latent variables. To address this, we develop a modification of the Variational Autoencoder in which encoded observations are decoded to new elements from the same class; the result, which we call a Variational Homoencoder (VHE), may be understood as training a hierarchical latent variable model which better utilises latent variables in these cases. Using this framework enables us to train a hierarchical PixelCNN for the Omniglot dataset, outperforming all existing models on test set likelihood. With a single model we achieve both strong one-shot generation and near human-level classification, competitive with state-of-the-art discriminative classifiers. The VHE objective extends naturally to richer dataset structures such as factorial or hierarchical categories, as we illustrate by training models to separate character content from simple variations in drawing style, and to generalise the style of an alphabet to new characters.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper presents some conceptually incremental improvements over the models in “Neural Statistician” and “Generative matching networks”. Nevertheless, it is well written and I think it is solid work with reasonable convincing experiments and good results. Although, the authors use powerful PixelCNN priors and decoders and they do not really disentangle to what degree their good results rely on the capabilities of these autoregressive components.",
"role": "assistant"
}
]
| [
" The paper presents some conceptually incremental improvements over the models in “Neural Statistician” and “Generative matching networks”.",
"Nevertheless, it is well written and I think it is solid work with reasonable convincing experiments and good results.",
"Although, the authors use powerful PixelCNN priors and decoders and they do not really disentangle to what degree their good results rely on the capabilities of these autoregressive components."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 3,
"praise": 3,
"presentation_and_reporting": 1,
"results_and_discussion": 2,
"suggestion_and_solution": 0,
"total": 3
} | 0.333333 | 0 | 0.333333 | 1 | 1 | 0.333333 | 0.666667 | 0 | 3 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 3.666667 | 2.088424 | 1.578243 |
Empirical Investigation on Model Capacity and Generalization of Neural Networks for Text | Recently, deep neural network models have shown promising opportunities for many natural language processing (NLP) tasks. In practice, the number of parameters of deep neural models is often significantly larger than the size of the training set, and its generalization behavior cannot be explained by the classic generalization theory. In this paper, with extensive experiments, we empirically investigate the model capacity and generalization of neural models for text. The experiments show that deep neural models can find patterns better than brute-force memorization. Therefore, a large-capacity model with early-stopping stochastic gradient descent (SGD) as implicit regularizer seems to be the best choice, as it has better generalization ability and higher convergence speed. | This paper is a kind of follow-up on previous work such as Zhang et al (2017) trying to empirically analyze the relationship between model capacity and generalization for deep neural networks. In particular, this paper concentrates on sequence models for text classification, using LSTMs, CNNs or Bag-of-Words. They follow similar experiments as Zhang et al (2017) which was for images and obtain similar conclusions for text models. Given previous results on this topic, the results of this paper are not surprising: all results follow what previous papers have already shown, except that now it extends to recurrent and convolutional models for text. Overall, this is a nice report, but there is not enough novelty. Smaller points: - it would have been better to use state-of-the-art models for the selected task as a baseline, and then vary some design choices accordingly (like replacing the LSTM layer by a CNN layer, or Adam by SGD, etc. The current setting feels arbitrary as it is not clear that any of the models are good (with respect to the state-of-the-art). - I didn*t understand the argument trying to convince us that SGD performs implicit generalization; I agree it should be the case, but I didn*t understand the explanation, nor understood what was the "mode selection" and the "pattern of distribution". | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEmpirical Investigation on Model Capacity and Generalization of Neural Networks for Text\nRecently, deep neural network models have shown promising opportunities for many natural language processing (NLP) tasks. In practice, the number of parameters of deep neural models is often significantly larger than the size of the training set, and its generalization behavior cannot be explained by the classic generalization theory. In this paper, with extensive experiments, we empirically investigate the model capacity and generalization of neural models for text. The experiments show that deep neural models can find patterns better than brute-force memorization. Therefore, a large-capacity model with early-stopping stochastic gradient descent (SGD) as implicit regularizer seems to be the best choice, as it has better generalization ability and higher convergence speed.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper is a kind of follow-up on previous work such as Zhang et al (2017) trying to empirically analyze the relationship between model capacity and generalization for deep neural networks. In particular, this paper concentrates on sequence models for text classification, using LSTMs, CNNs or Bag-of-Words. They follow similar experiments as Zhang et al (2017) which was for images and obtain similar conclusions for text models. Given previous results on this topic, the results of this paper are not surprising: all results follow what previous papers have already shown, except that now it extends to recurrent and convolutional models for text. Overall, this is a nice report, but there is not enough novelty. Smaller points: - it would have been better to use state-of-the-art models for the selected task as a baseline, and then vary some design choices accordingly (like replacing the LSTM layer by a CNN layer, or Adam by SGD, etc. The current setting feels arbitrary as it is not clear that any of the models are good (with respect to the state-of-the-art). - I didn*t understand the argument trying to convince us that SGD performs implicit generalization; I agree it should be the case, but I didn*t understand the explanation, nor understood what was the \"mode selection\" and the \"pattern of distribution\".",
"role": "assistant"
}
]
| [
" This paper is a kind of follow-up on previous work such as Zhang et al (2017) trying to empirically analyze the relationship between model capacity and generalization for deep neural networks.",
"In particular, this paper concentrates on sequence models for text classification, using LSTMs, CNNs or Bag-of-Words.",
"They follow similar experiments as Zhang et al (2017) which was for images and obtain similar conclusions for text models.",
"Given previous results on this topic, the results of this paper are not surprising: all results follow what previous papers have already shown, except that now it extends to recurrent and convolutional models for text.",
"Overall, this is a nice report, but there is not enough novelty.",
"Smaller points: - it would have been better to use state-of-the-art models for the selected task as a baseline, and then vary some design choices accordingly (like replacing the LSTM layer by a CNN layer, or Adam by SGD, etc.",
"The current setting feels arbitrary as it is not clear that any of the models are good (with respect to the state-of-the-art).",
"- I didn*t understand the argument trying to convince us that SGD performs implicit generalization; I agree it should be the case, but I didn*t understand the explanation, nor understood what was the \"mode selection\" and the \"pattern of distribution\"."
] | {
"criticism": 4,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 7,
"praise": 2,
"presentation_and_reporting": 3,
"results_and_discussion": 3,
"suggestion_and_solution": 1,
"total": 8
} | 0.5 | 0 | 0.125 | 0.875 | 0.25 | 0.375 | 0.375 | 0.125 | 8 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.625 | 2.230439 | 0.394561 |
Empirical Investigation on Model Capacity and Generalization of Neural Networks for Text | Recently, deep neural network models have shown promising opportunities for many natural language processing (NLP) tasks. In practice, the number of parameters of deep neural models is often significantly larger than the size of the training set, and its generalization behavior cannot be explained by the classic generalization theory. In this paper, with extensive experiments, we empirically investigate the model capacity and generalization of neural models for text. The experiments show that deep neural models can find patterns better than brute-force memorization. Therefore, a large-capacity model with early-stopping stochastic gradient descent (SGD) as implicit regularizer seems to be the best choice, as it has better generalization ability and higher convergence speed. | Summary: As indicated by the paper*s title, this paper provides an "empirical investigation on model capacity and generalization of neural networks for text". It defines the capacity of a model to be the maximal number of random training points which a model can memorize (up to x% accuracy, for different values of x). It studies CNN, LSTM, and CBOW models, showing that larger models have larger capacity (the relationship between number of parameters , and the capacity of the model, appears to be ), and also generalize better. It also studies how the optimization method affects model capacity, showing that Adam is much more effective for memorizing random data than "vanilla SGD" or AdaGrad. Review: Quality: Experiments appear to be correct and of high quality. Clarity: The paper is relatively clear, though there are many grammatical mistakes. Also, the description of the output layer in section 3.3 didn*t make it clear to me that a single hidden layer neural network was used (I figured this out based on the number of parameters in the models) --- it made it seem that the output of the processing layer is directly connected to the softmax. Originality: The work is not very original, and the results are not surprising. Neyshabur et al (2014,2017) have already shown that larger networks generalize better than smaller networks, and Zhang et al (2017) have already shown that neural networks are able to both memorize and generalize. Significance: While the experiments presented in this paper are nice, I do not believe that the results are particularly significant or informative. Pros - Extensive experiments on NLP data. Cons - Results are not surprising - The questions tackled by the experiments are not original. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEmpirical Investigation on Model Capacity and Generalization of Neural Networks for Text\nRecently, deep neural network models have shown promising opportunities for many natural language processing (NLP) tasks. In practice, the number of parameters of deep neural models is often significantly larger than the size of the training set, and its generalization behavior cannot be explained by the classic generalization theory. In this paper, with extensive experiments, we empirically investigate the model capacity and generalization of neural models for text. The experiments show that deep neural models can find patterns better than brute-force memorization. Therefore, a large-capacity model with early-stopping stochastic gradient descent (SGD) as implicit regularizer seems to be the best choice, as it has better generalization ability and higher convergence speed.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Summary: As indicated by the paper*s title, this paper provides an \"empirical investigation on model capacity and generalization of neural networks for text\". It defines the capacity of a model to be the maximal number of random training points which a model can memorize (up to x% accuracy, for different values of x). It studies CNN, LSTM, and CBOW models, showing that larger models have larger capacity (the relationship between number of parameters , and the capacity of the model, appears to be ), and also generalize better. It also studies how the optimization method affects model capacity, showing that Adam is much more effective for memorizing random data than \"vanilla SGD\" or AdaGrad. Review: Quality: Experiments appear to be correct and of high quality. Clarity: The paper is relatively clear, though there are many grammatical mistakes. Also, the description of the output layer in section 3.3 didn*t make it clear to me that a single hidden layer neural network was used (I figured this out based on the number of parameters in the models) --- it made it seem that the output of the processing layer is directly connected to the softmax. Originality: The work is not very original, and the results are not surprising. Neyshabur et al (2014,2017) have already shown that larger networks generalize better than smaller networks, and Zhang et al (2017) have already shown that neural networks are able to both memorize and generalize. Significance: While the experiments presented in this paper are nice, I do not believe that the results are particularly significant or informative. Pros - Extensive experiments on NLP data. Cons - Results are not surprising - The questions tackled by the experiments are not original.",
"role": "assistant"
}
]
| [
" Summary: As indicated by the paper*s title, this paper provides an \"empirical investigation on model capacity and generalization of neural networks for text\".",
"It defines the capacity of a model to be the maximal number of random training points which a model can memorize (up to x% accuracy, for different values of x).",
"It studies CNN, LSTM, and CBOW models, showing that larger models have larger capacity (the relationship between number of parameters , and the capacity of the model, appears to be ), and also generalize better.",
"It also studies how the optimization method affects model capacity, showing that Adam is much more effective for memorizing random data than \"vanilla SGD\" or AdaGrad.",
"Review: Quality: Experiments appear to be correct and of high quality.",
"Clarity: The paper is relatively clear, though there are many grammatical mistakes.",
"Also, the description of the output layer in section 3.3 didn*t make it clear to me that a single hidden layer neural network was used (I figured this out based on the number of parameters in the models) --- it made it seem that the output of the processing layer is directly connected to the softmax.",
"Originality: The work is not very original, and the results are not surprising.",
"Neyshabur et al (2014,2017) have already shown that larger networks generalize better than smaller networks, and Zhang et al (2017) have already shown that neural networks are able to both memorize and generalize.",
"Significance: While the experiments presented in this paper are nice, I do not believe that the results are particularly significant or informative.",
"Pros - Extensive experiments on NLP data.",
"Cons - Results are not surprising - The questions tackled by the experiments are not original."
] | {
"criticism": 5,
"example": 1,
"importance_and_relevance": 2,
"materials_and_methods": 9,
"praise": 3,
"presentation_and_reporting": 3,
"results_and_discussion": 4,
"suggestion_and_solution": 0,
"total": 12
} | 0.416667 | 0.083333 | 0.166667 | 0.75 | 0.25 | 0.25 | 0.333333 | 0 | 12 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 2.25 | 2.234218 | 0.015782 |
Empirical Investigation on Model Capacity and Generalization of Neural Networks for Text | Recently, deep neural network models have shown promising opportunities for many natural language processing (NLP) tasks. In practice, the number of parameters of deep neural models is often significantly larger than the size of the training set, and its generalization behavior cannot be explained by the classic generalization theory. In this paper, with extensive experiments, we empirically investigate the model capacity and generalization of neural models for text. The experiments show that deep neural models can find patterns better than brute-force memorization. Therefore, a large-capacity model with early-stopping stochastic gradient descent (SGD) as implicit regularizer seems to be the best choice, as it has better generalization ability and higher convergence speed. | The quality and clarity of this manuscript are somewhat low. While an empirical exploration of capacity and generalization on text could provide novel insights, the experiments are performed only on one task; a medium-small sentiment analysis dataset, and there are no novel conclusions reached. Pros: - experimental results are clearly presented - most details of experiments are explained - the basic outline of the paper explores an interesting question Cons - many sentences do not have a clear point, and there are many spelling and grammar errors - Despite claims to the contrary, the experiments are far from extensive; they explore two models, with variations in context for embeddings, with 3 optimizers, on just one task. To be an extensive discussion of "text", I would expect to see language-modeling and seq-to-seq tasks as well - the majority of the manuscript is taken up by overly long explanations of background concepts (e.g. half a page on risk minimization, half a page on "different kinds of neural networks" with vague and occasionally misleading descriptions of RNN vs CNN, containing zero citations) - the related work is significantly lacking, and it appears that the authors are unaware of results and published work which would be directly relevant to theirs (e.g. Arpit et al. A closer look at memorization in deep networks ; Advani and Saxe Generalization error dynamics in high dimensions) - in the introduction, generalization is not clearly explained, which seems kind of like an important thing to do... - VC is a measure/set, not a norm? - the section "Linked to the mode selection of distribution" is unclear. I don*t see the justification for assuming that a decrease in test accuracy indicates mode selection, and don*t understand figure 4b (it*s just two bumps labelled mode 1 and mode 2 with an arrow between them. What are the axes of this plot?? Is this real data, or just a vague illustration of mode selection?) - Did you tune the learning rate for the adagrad and gradient descent methods? I find it very surprising that they would fail to converge. - experiments are not described in sufficient detail to be reproduced. What are the other hyperparameters for the various experiments? Only test set results are plotted; what about training/validation curves? | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. 
The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEmpirical Investigation on Model Capacity and Generalization of Neural Networks for Text\nRecently, deep neural network models have shown promising opportunities for many natural language processing (NLP) tasks. In practice, the number of parameters of deep neural models is often significantly larger than the size of the training set, and its generalization behavior cannot be explained by the classic generalization theory. In this paper, with extensive experiments, we empirically investigate the model capacity and generalization of neural models for text. The experiments show that deep neural models can find patterns better than brute-force memorization. Therefore, a large-capacity model with early-stopping stochastic gradient descent (SGD) as implicit regularizer seems to be the best choice, as it has better generalization ability and higher convergence speed.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The quality and clarity of this manuscript are somewhat low. While an empirical exploration of capacity and generalization on text could provide novel insights, the experiments are performed only on one task; a medium-small sentiment analysis dataset, and there are no novel conclusions reached. Pros: - experimental results are clearly presented - most details of experiments are explained - the basic outline of the paper explores an interesting question Cons - many sentences do not have a clear point, and there are many spelling and grammar errors - Despite claims to the contrary, the experiments are far from extensive; they explore two models, with variations in context for embeddings, with 3 optimizers, on just one task. To be an extensive discussion of \"text\", I would expect to see language-modeling and seq-to-seq tasks as well - the majority of the manuscript is taken up by overly long explanations of background concepts (e.g. half a page on risk minimization, half a page on \"different kinds of neural networks\" with vague and occasionally misleading descriptions of RNN vs CNN, containing zero citations) - the related work is significantly lacking, and it appears that the authors are unaware of results and published work which would be directly relevant to theirs (e.g. Arpit et al. A closer look at memorization in deep networks ; Advani and Saxe Generalization error dynamics in high dimensions) - in the introduction, generalization is not clearly explained, which seems kind of like an important thing to do... - VC is a measure/set, not a norm? - the section \"Linked to the mode selection of distribution\" is unclear. I don*t see the justification for assuming that a decrease in test accuracy indicates mode selection, and don*t understand figure 4b (it*s just two bumps labelled mode 1 and mode 2 with an arrow between them. What are the axes of this plot?? Is this real data, or just a vague illustration of mode selection?) - Did you tune the learning rate for the adagrad and gradient descent methods? I find it very surprising that they would fail to converge. 
- experiments are not described in sufficient detail to be reproduced. What are the other hyperparameters for the various experiments? Only test set results are plotted; what about training/validation curves?",
"role": "assistant"
}
]
| [
" The quality and clarity of this manuscript are somewhat low.",
"While an empirical exploration of capacity and generalization on text could provide novel insights, the experiments are performed only on one task; a medium-small sentiment analysis dataset, and there are no novel conclusions reached.",
"Pros: - experimental results are clearly presented - most details of experiments are explained - the basic outline of the paper explores an interesting question Cons - many sentences do not have a clear point, and there are many spelling and grammar errors - Despite claims to the contrary, the experiments are far from extensive; they explore two models, with variations in context for embeddings, with 3 optimizers, on just one task.",
"To be an extensive discussion of \"text\", I would expect to see language-modeling and seq-to-seq tasks as well - the majority of the manuscript is taken up by overly long explanations of background concepts (e.g.",
"half a page on risk minimization, half a page on \"different kinds of neural networks\" with vague and occasionally misleading descriptions of RNN vs CNN, containing zero citations) - the related work is significantly lacking, and it appears that the authors are unaware of results and published work which would be directly relevant to theirs (e.g.",
"Arpit et al.",
"A closer look at memorization in deep networks ; Advani and Saxe Generalization error dynamics in high dimensions) - in the introduction, generalization is not clearly explained, which seems kind of like an important thing to do... - VC is a measure/set, not a norm?",
"- the section \"Linked to the mode selection of distribution\" is unclear.",
"I don*t see the justification for assuming that a decrease in test accuracy indicates mode selection, and don*t understand figure 4b (it*s just two bumps labelled mode 1 and mode 2 with an arrow between them.",
"What are the axes of this plot??",
"Is this real data, or just a vague illustration of mode selection?)",
"- Did you tune the learning rate for the adagrad and gradient descent methods?",
"I find it very surprising that they would fail to converge.",
"- experiments are not described in sufficient detail to be reproduced.",
"What are the other hyperparameters for the various experiments?",
"Only test set results are plotted; what about training/validation curves?"
] | {
"criticism": 9,
"example": 1,
"importance_and_relevance": 2,
"materials_and_methods": 11,
"praise": 1,
"presentation_and_reporting": 7,
"results_and_discussion": 6,
"suggestion_and_solution": 0,
"total": 16
} | 0.5625 | 0.0625 | 0.125 | 0.6875 | 0.0625 | 0.4375 | 0.375 | 0 | 16 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 2.3125 | 2.170458 | 0.142042 |
GATED FAST WEIGHTS FOR ASSOCIATIVE RETRIEVAL | We improve previous end-to-end differentiable neural networks (NNs) with fast weight memories. A gate mechanism updates fast weights at every time step of a sequence through two separate outer-product-based matrices generated by slow parts of the net. The system is trained on a complex sequence to sequence variation of the Associative Retrieval Problem with roughly 70 times more temporal memory (i.e. time-varying variables) than similar-sized standard recurrent NNs (RNNs). In terms of accuracy and number of parameters, our architecture outperforms a variety of RNNs, including Long Short-Term Memory, Hypernetworks, and related fast weight architectures. | The authors present an evolution of the idea of fast weights: training a double recurrent neural network, one "slow" trained as usual and one "fast" that gets updated in every time-step based on the slow network. The authors generalize this idea in a nice way and present results on 1 experiment. On the positive side, the paper is clearly written and while the fast-weights are not new, the details of the presented method are original. On the negative side, the experimental results are presented on only 1 experiment with a data-set and task made up by the authors. The results are good but the improvements are not too large, and they are measured over weak baselines implemented by the authors. For a convincing result, one would require an evaluation on a number of tasks, including long-studied ones like language modeling, and comparison to stronger related models, such as the Neural Turing Machine or the Transformer (from "Attention is All You Need"). Without comparison to stronger baselines and with results only on 1 task constructed by the authors, we have to recommend rejection. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nGATED FAST WEIGHTS FOR ASSOCIATIVE RETRIEVAL\nWe improve previous end-to-end differentiable neural networks (NNs) with fast weight memories. A gate mechanism updates fast weights at every time step of a sequence through two separate outer-product-based matrices generated by slow parts of the net. The system is trained on a complex sequence to sequence variation of the Associative Retrieval Problem with roughly 70 times more temporal memory (i.e. time-varying variables) than similar-sized standard recurrent NNs (RNNs). In terms of accuracy and number of parameters, our architecture outperforms a variety of RNNs, including Long Short-Term Memory, Hypernetworks, and related fast weight architectures.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors present an evolution of the idea of fast weights: training a double recurrent neural network, one \"slow\" trained as usual and one \"fast\" that gets updated in every time-step based on the slow network. The authors generalize this idea in a nice way and present results on 1 experiment. On the positive side, the paper is clearly written and while the fast-weights are not new, the details of the presented method are original. On the negative side, the experimental results are presented on only 1 experiment with a data-set and task made up by the authors. The results are good but the improvements are not too large, and they are measured over weak baselines implemented by the authors. For a convincing result, one would require an evaluation on a number of tasks, including long-studied ones like language modeling, and comparison to stronger related models, such as the Neural Turing Machine or the Transformer (from \"Attention is All You Need\"). Without comparison to stronger baselines and with results only on 1 task constructed by the authors, we have to recommend rejection.",
"role": "assistant"
}
]
| [
" The authors present an evolution of the idea of fast weights: training a double recurrent neural network, one \"slow\" trained as usual and one \"fast\" that gets updated in every time-step based on the slow network.",
"The authors generalize this idea in a nice way and present results on 1 experiment.",
"On the positive side, the paper is clearly written and while the fast-weights are not new, the details of the presented method are original.",
"On the negative side, the experimental results are presented on only 1 experiment with a data-set and task made up by the authors.",
"The results are good but the improvements are not too large, and they are measured over weak baselines implemented by the authors.",
"For a convincing result, one would require an evaluation on a number of tasks, including long-studied ones like language modeling, and comparison to stronger related models, such as the Neural Turing Machine or the Transformer (from \"Attention is All You Need\").",
"Without comparison to stronger baselines and with results only on 1 task constructed by the authors, we have to recommend rejection."
] | {
"criticism": 2,
"example": 1,
"importance_and_relevance": 0,
"materials_and_methods": 6,
"praise": 3,
"presentation_and_reporting": 1,
"results_and_discussion": 5,
"suggestion_and_solution": 2,
"total": 7
} | 0.285714 | 0.142857 | 0 | 0.857143 | 0.428571 | 0.142857 | 0.714286 | 0.285714 | 7 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 2.857143 | 2.288975 | 0.568167 |
Deep Complex Networks | At present, the vast majority of building blocks, techniques, and architectures for deep learning are based on real-valued operations and representations. However, recent work on recurrent neural networks and older fundamental theoretical analysis suggests that complex numbers could have a richer representational capacity and could also facilitate noise-robust memory retrieval mechanisms. Despite their attractive properties and potential for opening up entirely new neural architectures, complex-valued deep neural networks have been marginalized due to the absence of the building blocks required to design such models. In this work, we provide the key atomic components for complex-valued deep neural networks and apply them to convolutional feed-forward networks. More precisely, we rely on complex convolutions and present algorithms for complex batch-normalization, complex weight initialization strategies for complex-valued neural nets and we use them in experiments with end-to-end training schemes. We demonstrate that such complex-valued models are competitive with their real-valued counterparts. We test deep complex models on several computer vision tasks, on music transcription using the MusicNet dataset and on Speech spectrum prediction using TIMIT. We achieve state-of-the-art performance on these audio-related tasks. | The paper presents an extensive framework for complex-valued neural networks. Related literature suggests a variety of motivations for complex valued neural networks: biological evidence, richer representation capacity, easier optimization, faster learning, noise-robust memory retrieval mechanisms and more. The contribution of the current work does not lie in presenting significantly superior results, compared to the traditional real-valued neural networks, but rather in developing an extensive framework for applying and conducting research with complex-valued neural networks. Indeed, the most standard work nowadays with real-valued neural networks depends on a variety of already well-established techniques for weight initialization, regularization, activation function, convolutions, etc. In this work, the complex equivalent of many of these basics tools are developed, such as a number of complex activation functions, complex batch normalization, complex convolution, discussion of complex differentiability, strategies for complex weight initialization, complex equivalent of a residual neural network. Empirical results show that the new complex-flavored neural networks achieve generally comparable performance to their real-valued counterparts, on a variety of different tasks. Then again, the major contribution of this work is not advancing the state-of-the-art on many benchmark tasks, but constructing a solid framework that will enable stable and solid application and research of these well-motivated models. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. 
Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDeep Complex Networks\nAt present, the vast majority of building blocks, techniques, and architectures for deep learning are based on real-valued operations and representations. However, recent work on recurrent neural networks and older fundamental theoretical analysis suggests that complex numbers could have a richer representational capacity and could also facilitate noise-robust memory retrieval mechanisms. Despite their attractive properties and potential for opening up entirely new neural architectures, complex-valued deep neural networks have been marginalized due to the absence of the building blocks required to design such models. In this work, we provide the key atomic components for complex-valued deep neural networks and apply them to convolutional feed-forward networks. More precisely, we rely on complex convolutions and present algorithms for complex batch-normalization, complex weight initialization strategies for complex-valued neural nets and we use them in experiments with end-to-end training schemes. We demonstrate that such complex-valued models are competitive with their real-valued counterparts. We test deep complex models on several computer vision tasks, on music transcription using the MusicNet dataset and on Speech spectrum prediction using TIMIT. We achieve state-of-the-art performance on these audio-related tasks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper presents an extensive framework for complex-valued neural networks. Related literature suggests a variety of motivations for complex valued neural networks: biological evidence, richer representation capacity, easier optimization, faster learning, noise-robust memory retrieval mechanisms and more. The contribution of the current work does not lie in presenting significantly superior results, compared to the traditional real-valued neural networks, but rather in developing an extensive framework for applying and conducting research with complex-valued neural networks. Indeed, the most standard work nowadays with real-valued neural networks depends on a variety of already well-established techniques for weight initialization, regularization, activation function, convolutions, etc. In this work, the complex equivalent of many of these basics tools are developed, such as a number of complex activation functions, complex batch normalization, complex convolution, discussion of complex differentiability, strategies for complex weight initialization, complex equivalent of a residual neural network. Empirical results show that the new complex-flavored neural networks achieve generally comparable performance to their real-valued counterparts, on a variety of different tasks. Then again, the major contribution of this work is not advancing the state-of-the-art on many benchmark tasks, but constructing a solid framework that will enable stable and solid application and research of these well-motivated models.",
"role": "assistant"
}
]
| [
" The paper presents an extensive framework for complex-valued neural networks.",
"Related literature suggests a variety of motivations for complex valued neural networks: biological evidence, richer representation capacity, easier optimization, faster learning, noise-robust memory retrieval mechanisms and more.",
"The contribution of the current work does not lie in presenting significantly superior results, compared to the traditional real-valued neural networks, but rather in developing an extensive framework for applying and conducting research with complex-valued neural networks.",
"Indeed, the most standard work nowadays with real-valued neural networks depends on a variety of already well-established techniques for weight initialization, regularization, activation function, convolutions, etc.",
"In this work, the complex equivalent of many of these basics tools are developed, such as a number of complex activation functions, complex batch normalization, complex convolution, discussion of complex differentiability, strategies for complex weight initialization, complex equivalent of a residual neural network.",
"Empirical results show that the new complex-flavored neural networks achieve generally comparable performance to their real-valued counterparts, on a variety of different tasks.",
"Then again, the major contribution of this work is not advancing the state-of-the-art on many benchmark tasks, but constructing a solid framework that will enable stable and solid application and research of these well-motivated models."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 5,
"materials_and_methods": 6,
"praise": 1,
"presentation_and_reporting": 0,
"results_and_discussion": 3,
"suggestion_and_solution": 0,
"total": 7
} | 0 | 0 | 0.714286 | 0.857143 | 0.142857 | 0 | 0.428571 | 0 | 7 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 2.142857 | 1.57469 | 0.568167 |
Deep Complex Networks | At present, the vast majority of building blocks, techniques, and architectures for deep learning are based on real-valued operations and representations. However, recent work on recurrent neural networks and older fundamental theoretical analysis suggests that complex numbers could have a richer representational capacity and could also facilitate noise-robust memory retrieval mechanisms. Despite their attractive properties and potential for opening up entirely new neural architectures, complex-valued deep neural networks have been marginalized due to the absence of the building blocks required to design such models. In this work, we provide the key atomic components for complex-valued deep neural networks and apply them to convolutional feed-forward networks. More precisely, we rely on complex convolutions and present algorithms for complex batch-normalization, complex weight initialization strategies for complex-valued neural nets and we use them in experiments with end-to-end training schemes. We demonstrate that such complex-valued models are competitive with their real-valued counterparts. We test deep complex models on several computer vision tasks, on music transcription using the MusicNet dataset and on Speech spectrum prediction using TIMIT. We achieve state-of-the-art performance on these audio-related tasks. | This paper defines building blocks for complex-valued convolutional neural networks: complex convolutions, complex batch normalisation, several variants of the ReLU nonlinearity for complex inputs, and an initialisation strategy. The writing is clear, concise and easy to follow. An important argument in favour of using complex-valued networks is said to be the propagation of phase information. However, I feel that the observation that CReLU works best out of the 3 proposed alternatives contradicts this somewhat. CReLU simply applies ReLU component-wise to the real and imaginary parts, which has an effect on the phase information that is hard to conceptualise. It definitely does not preserve phase, like modReLU would. This makes me wonder whether the "complex numbers" paradigm is applied meaningfully here, or whether this is just an arbitrary way of doing some parameter sharing in convnets that happens to work reasonably well (note that even completely random parameter tying can work well, as shown in "Compressing neural networks with the hashing trick" by Chen et al.). Some more insight into how phase information is used, what it represents and how it is propagated through the network would help to make sense of this. The image recognition results are mostly inconclusive, which makes it hard to assess the benefit of this approach. The improved performance on the audio tasks seems significant, but how the complex nature of the networks helps achieve this is not really demonstrated. It is unclear how the phase information in the input waveform is transformed into the phase of the complex activations in the network (because I think it is implied that this is what happens). This connection is a bit vague. Once again, a more in-depth analysis of this phase behavior would be very welcome. I*m on the fence about this work: I like the ideas and they are explained well, but I*m missing some insight into why and how all of this is actually helping to improve performance (especially w.r.t. how phase information is used). 
Comments: - The related work section is comprehensive but a bit unstructured, with each new paragraph seemingly describing a completely different type of work. Maybe some subsection titles would help make it feel a bit more cohesive. - page 3: "(cite a couple of them)" should be replaced by some actual references :) - Although care is taken to ensure that the complex and real-valued networks that are compared in the experiments have roughly the same number of parameters, doesn*t the complex version always require more computation on account of there being more filters in each layer? It would be nice to discuss computational cost as well. REVISION: I have decided to raise my rating from 5 to 7 as I feel that the authors have adequately addressed many of my comments. In particular, I really appreciated the additional appendix sections to clarify what actually happens as the phase information is propagated through the network. Regarding the CIFAR results, I may have read over it, but I think it would be good to state even more clearly that these experiments constitute a sanity check, as both reviewer 1 and myself were seemingly unaware of this. With this in mind, it is of course completely fine that the results are not better than for real-valued networks. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDeep Complex Networks\nAt present, the vast majority of building blocks, techniques, and architectures for deep learning are based on real-valued operations and representations. However, recent work on recurrent neural networks and older fundamental theoretical analysis suggests that complex numbers could have a richer representational capacity and could also facilitate noise-robust memory retrieval mechanisms. Despite their attractive properties and potential for opening up entirely new neural architectures, complex-valued deep neural networks have been marginalized due to the absence of the building blocks required to design such models. In this work, we provide the key atomic components for complex-valued deep neural networks and apply them to convolutional feed-forward networks. More precisely, we rely on complex convolutions and present algorithms for complex batch-normalization, complex weight initialization strategies for complex-valued neural nets and we use them in experiments with end-to-end training schemes. We demonstrate that such complex-valued models are competitive with their real-valued counterparts. We test deep complex models on several computer vision tasks, on music transcription using the MusicNet dataset and on Speech spectrum prediction using TIMIT. We achieve state-of-the-art performance on these audio-related tasks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper defines building blocks for complex-valued convolutional neural networks: complex convolutions, complex batch normalisation, several variants of the ReLU nonlinearity for complex inputs, and an initialisation strategy. The writing is clear, concise and easy to follow. An important argument in favour of using complex-valued networks is said to be the propagation of phase information. However, I feel that the observation that CReLU works best out of the 3 proposed alternatives contradicts this somewhat. CReLU simply applies ReLU component-wise to the real and imaginary parts, which has an effect on the phase information that is hard to conceptualise. It definitely does not preserve phase, like modReLU would. This makes me wonder whether the \"complex numbers\" paradigm is applied meaningfully here, or whether this is just an arbitrary way of doing some parameter sharing in convnets that happens to work reasonably well (note that even completely random parameter tying can work well, as shown in \"Compressing neural networks with the hashing trick\" by Chen et al.). Some more insight into how phase information is used, what it represents and how it is propagated through the network would help to make sense of this. The image recognition results are mostly inconclusive, which makes it hard to assess the benefit of this approach. The improved performance on the audio tasks seems significant, but how the complex nature of the networks helps achieve this is not really demonstrated. It is unclear how the phase information in the input waveform is transformed into the phase of the complex activations in the network (because I think it is implied that this is what happens). This connection is a bit vague. Once again, a more in-depth analysis of this phase behavior would be very welcome. I*m on the fence about this work: I like the ideas and they are explained well, but I*m missing some insight into why and how all of this is actually helping to improve performance (especially w.r.t. how phase information is used). 
Comments: - The related work section is comprehensive but a bit unstructured, with each new paragraph seemingly describing a completely different type of work. Maybe some subsection titles would help make it feel a bit more cohesive. - page 3: \"(cite a couple of them)\" should be replaced by some actual references :) - Although care is taken to ensure that the complex and real-valued networks that are compared in the experiments have roughly the same number of parameters, doesn't the complex version always require more computation on account of there being more filters in each layer? It would be nice to discuss computational cost as well. REVISION: I have decided to raise my rating from 5 to 7 as I feel that the authors have adequately addressed many of my comments. In particular, I really appreciated the additional appendix sections to clarify what actually happens as the phase information is propagated through the network. Regarding the CIFAR results, I may have read over it, but I think it would be good to state even more clearly that these experiments constitute a sanity check, as both reviewer 1 and myself were seemingly unaware of this. With this in mind, it is of course completely fine that the results are not better than for real-valued networks.",
"role": "assistant"
}
]
| [
" This paper defines building blocks for complex-valued convolutional neural networks: complex convolutions, complex batch normalisation, several variants of the ReLU nonlinearity for complex inputs, and an initialisation strategy.",
"The writing is clear, concise and easy to follow.",
"An important argument in favour of using complex-valued networks is said to be the propagation of phase information.",
"However, I feel that the observation that CReLU works best out of the 3 proposed alternatives contradicts this somewhat.",
"CReLU simply applies ReLU component-wise to the real and imaginary parts, which has an effect on the phase information that is hard to conceptualise.",
"It definitely does not preserve phase, like modReLU would.",
"This makes me wonder whether the \"complex numbers\" paradigm is applied meaningfully here, or whether this is just an arbitrary way of doing some parameter sharing in convnets that happens to work reasonably well (note that even completely random parameter tying can work well, as shown in \"Compressing neural networks with the hashing trick\" by Chen et al.).",
"Some more insight into how phase information is used, what it represents and how it is propagated through the network would help to make sense of this.",
"The image recognition results are mostly inconclusive, which makes it hard to assess the benefit of this approach.",
"The improved performance on the audio tasks seems significant, but how the complex nature of the networks helps achieve this is not really demonstrated.",
"It is unclear how the phase information in the input waveform is transformed into the phase of the complex activations in the network (because I think it is implied that this is what happens).",
"This connection is a bit vague.",
"Once again, a more in-depth analysis of this phase behavior would be very welcome.",
"I*m on the fence about this work: I like the ideas and they are explained well, but I*m missing some insight into why and how all of this is actually helping to improve performance (especially w.r.t.",
"how phase information is used).",
"Comments: - The related work section is comprehensive but a bit unstructured, with each new paragraph seemingly describing a completely different type of work.",
"Maybe some subsection titles would help make it feel a bit more cohesive.",
"- page 3: \"(cite a couple of them)\" should be replaced by some actual references :) - Although care is taken to ensure that the complex and real-valued networks that are compared in the experiments have roughly the same number of parameters, doesn*t the complex version always require more computation on account of there being more filters in each layer?",
"It would be nice to discuss computational cost as well.",
"REVISION: I have decided to raise my rating from 5 to 7 as I feel that the authors have adequately addressed many of my comments.",
"In particular, I really appreciated the additional appendix sections to clarify what actually happens as the phase information is propagated through the network.",
"Regarding the CIFAR results, I may have read over it, but I think it would be good to state even more clearly that these experiments constitute a sanity check, as both reviewer 1 and myself were seemingly unaware of this.",
"With this in mind, it is of course completely fine that the results are not better than for real-valued networks."
] | {
"criticism": 9,
"example": 2,
"importance_and_relevance": 3,
"materials_and_methods": 10,
"praise": 4,
"presentation_and_reporting": 8,
"results_and_discussion": 5,
"suggestion_and_solution": 8,
"total": 23
} | 0.391304 | 0.086957 | 0.130435 | 0.434783 | 0.173913 | 0.347826 | 0.217391 | 0.347826 | 23 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.130435 | 0.552192 | 1.578243 |
Distributional Inclusion Vector Embedding for Unsupervised Hypernymy Detection | Modeling hypernymy, such as poodle is-a dog, is an important generalization aid to many NLP tasks, such as entailment, relation extraction, and question answering. Supervised learning from labeled hypernym sources, such as WordNet, limit the coverage of these models, which can be addressed by learning hypernyms from unlabeled text. Existing unsupervised methods either do not scale to large vocabularies or yield unacceptably poor accuracy. This paper introduces {it distributional inclusion vector embedding (DIVE)}, a simple-to-implement unsupervised method of hypernym discovery via per-word non-negative vector embeddings which preserve the inclusion property of word contexts. In experimental evaluations more comprehensive than any previous literature of which we are aware---evaluating on 11 datasets using multiple existing as well as newly proposed scoring functions---we find that our method provides up to double the precision of previous unsupervised methods, and the highest average performance, using a much more compact word representation, and yielding many new state-of-the-art results. In addition, the meaning of each dimension in DIVE is interpretable, which leads to a novel approach on word sense disambiguation as another promising application of DIVE. | This paper presents a new method for detecting hypernymy by extending the distributional inclusion hypothesis to low-rank embeddings. The first half of the paper is written superbly, providing a sober account of the state-of-the-art in hypernymy detection via distributional semantics, and a clear, well-motivated explanation of the main algorithm (DIVE). The reformulation of PMI is interesting; the authors essentially replace P(w) with the uniform distribution (this should be stated explicitly in the paper, BTW). They then augment SGNS to reflect this change by dynamically selecting the number of negatively-sampled contexts according to the target word (w). This is a clever trick. I was also very happy to see that the authors evaluated on virtually every lexical inference dataset available. However, the remainder of the paper describing the experiments and their results was extremely difficult to parse. First of all, the line between the authors* contribution and prior art is blurry, because they seem to be introducing new metrics for measuring hypernymy as part of the experiments. There are also too many moving parts: datasets, evaluation metrics, detection metrics, embedding methods, hyperparameters, etc. These need to be organized, controlled for, and clearly explained. I have two major concerns regarding the experiments, beyond intelligibility. First, the authors used a specific corpus (with specific preprocessing) to train their vectors, but compared their results to those reported in other papers, which are not based on the same data. This invalidates these comparisons. Instead, the other methods should be replicated and rerun on exactly the same corpus. I would also recommend using a much larger corpus, since 50M tokens is considered quite small when training word embeddings. My second concern is that summation (dS) is doing all the heavy lifting. In Table 2, we can see that the difference is only 0.1 between dS and W * dS (where W is also not trained using DIVE). Since dS is basically a proxy for difference in word frequency, could it be that the proposed method is just computing which word is more general? 
This looks awfully familiar to Levy et al*s prototypical hypernym result. Miscellaneous Comments: - There*s a very recent paper on using vector norms for detecting hypernymy that might be worth contrasting with: https://arxiv.org/pdf/1710.06371.pdf - Micro-averaging these datasets is problematic, because some of the datasets based on WordNet are much larger than the hand-annotated ones, and will likely drown them out. Because these datasets are so different, I think it is critical to look at the details and not only at the averages. - PMI filtering needs to be controlled/ablated in the experiments. - While explaining equation 6, the authors say that the gradients for x and y are similar; this is not true, because k is a function of x (or y), and if one appears more often in general (not necessarily with c), then the gradients will be different as well. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDistributional Inclusion Vector Embedding for Unsupervised Hypernymy Detection\nModeling hypernymy, such as poodle is-a dog, is an important generalization aid to many NLP tasks, such as entailment, relation extraction, and question answering. Supervised learning from labeled hypernym sources, such as WordNet, limit the coverage of these models, which can be addressed by learning hypernyms from unlabeled text. Existing unsupervised methods either do not scale to large vocabularies or yield unacceptably poor accuracy. This paper introduces {it distributional inclusion vector embedding (DIVE)}, a simple-to-implement unsupervised method of hypernym discovery via per-word non-negative vector embeddings which preserve the inclusion property of word contexts. In experimental evaluations more comprehensive than any previous literature of which we are aware---evaluating on 11 datasets using multiple existing as well as newly proposed scoring functions---we find that our method provides up to double the precision of previous unsupervised methods, and the highest average performance, using a much more compact word representation, and yielding many new state-of-the-art results. In addition, the meaning of each dimension in DIVE is interpretable, which leads to a novel approach on word sense disambiguation as another promising application of DIVE.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper presents a new method for detecting hypernymy by extending the distributional inclusion hypothesis to low-rank embeddings. The first half of the paper is written superbly, providing a sober account of the state-of-the-art in hypernymy detection via distributional semantics, and a clear, well-motivated explanation of the main algorithm (DIVE). The reformulation of PMI is interesting; the authors essentially replace P(w) with the uniform distribution (this should be stated explicitly in the paper, BTW). They then augment SGNS to reflect this change by dynamically selecting the number of negatively-sampled contexts according to the target word (w). This is a clever trick. I was also very happy to see that the authors evaluated on virtually every lexical inference dataset available. However, the remainder of the paper describing the experiments and their results was extremely difficult to parse. First of all, the line between the authors* contribution and prior art is blurry, because they seem to be introducing new metrics for measuring hypernymy as part of the experiments. There are also too many moving parts: datasets, evaluation metrics, detection metrics, embedding methods, hyperparameters, etc. These need to be organized, controlled for, and clearly explained. I have two major concerns regarding the experiments, beyond intelligibility. First, the authors used a specific corpus (with specific preprocessing) to train their vectors, but compared their results to those reported in other papers, which are not based on the same data. This invalidates these comparisons. Instead, the other methods should be replicated and rerun on exactly the same corpus. I would also recommend using a much larger corpus, since 50M tokens is considered quite small when training word embeddings. My second concern is that summation (dS) is doing all the heavy lifting. In Table 2, we can see that the difference is only 0.1 between dS and W * dS (where W is also not trained using DIVE). Since dS is basically a proxy for difference in word frequency, could it be that the proposed method is just computing which word is more general? 
This looks awfully familiar to Levy et al's prototypical hypernym result. Miscellaneous Comments: - There's a very recent paper on using vector norms for detecting hypernymy that might be worth contrasting with: https://arxiv.org/pdf/1710.06371.pdf - Micro-averaging these datasets is problematic, because some of the datasets based on WordNet are much larger than the hand-annotated ones, and will likely drown them out. Because these datasets are so different, I think it is critical to look at the details and not only at the averages. - PMI filtering needs to be controlled/ablated in the experiments. - While explaining equation 6, the authors say that the gradients for x and y are similar; this is not true, because k is a function of x (or y), and if one appears more often in general (not necessarily with c), then the gradients will be different as well.",
"role": "assistant"
}
]
| [
" This paper presents a new method for detecting hypernymy by extending the distributional inclusion hypothesis to low-rank embeddings.",
"The first half of the paper is written superbly, providing a sober account of the state-of-the-art in hypernymy detection via distributional semantics, and a clear, well-motivated explanation of the main algorithm (DIVE).",
"The reformulation of PMI is interesting; the authors essentially replace P(w) with the uniform distribution (this should be stated explicitly in the paper, BTW).",
"They then augment SGNS to reflect this change by dynamically selecting the number of negatively-sampled contexts according to the target word (w).",
"This is a clever trick.",
"I was also very happy to see that the authors evaluated on virtually every lexical inference dataset available.",
"However, the remainder of the paper describing the experiments and their results was extremely difficult to parse.",
"First of all, the line between the authors* contribution and prior art is blurry, because they seem to be introducing new metrics for measuring hypernymy as part of the experiments.",
"There are also too many moving parts: datasets, evaluation metrics, detection metrics, embedding methods, hyperparameters, etc.",
"These need to be organized, controlled for, and clearly explained.",
"I have two major concerns regarding the experiments, beyond intelligibility.",
"First, the authors used a specific corpus (with specific preprocessing) to train their vectors, but compared their results to those reported in other papers, which are not based on the same data.",
"This invalidates these comparisons.",
"Instead, the other methods should be replicated and rerun on exactly the same corpus.",
"I would also recommend using a much larger corpus, since 50M tokens is considered quite small when training word embeddings.",
"My second concern is that summation (dS) is doing all the heavy lifting.",
"In Table 2, we can see that the difference is only 0.1 between dS and W * dS (where W is also not trained using DIVE).",
"Since dS is basically a proxy for difference in word frequency, could it be that the proposed method is just computing which word is more general?",
"This looks awfully familiar to Levy et al*s prototypical hypernym result.",
"Miscellaneous Comments: - There*s a very recent paper on using vector norms for detecting hypernymy that might be worth contrasting with: https://arxiv.org/pdf/1710.06371.pdf - Micro-averaging these datasets is problematic, because some of the datasets based on WordNet are much larger than the hand-annotated ones, and will likely drown them out.",
"Because these datasets are so different, I think it is critical to look at the details and not only at the averages.",
"- PMI filtering needs to be controlled/ablated in the experiments.",
"- While explaining equation 6, the authors say that the gradients for x and y are similar; this is not true, because k is a function of x (or y), and if one appears more often in general (not necessarily with c), then the gradients will be different as well."
] | {
"criticism": 7,
"example": 2,
"importance_and_relevance": 3,
"materials_and_methods": 18,
"praise": 4,
"presentation_and_reporting": 9,
"results_and_discussion": 5,
"suggestion_and_solution": 5,
"total": 23
} | 0.304348 | 0.086957 | 0.130435 | 0.782609 | 0.173913 | 0.391304 | 0.217391 | 0.217391 | 23 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.304348 | 0.726105 | 1.578243 |
Understanding Short-Horizon Bias in Stochastic Meta-Optimization | Careful tuning of the learning rate, or even schedules thereof, can be crucial to effective neural net training. There has been much recent interest in gradient-based meta-optimization, where one tunes hyperparameters, or even learns an optimizer, in order to minimize the expected loss when the training procedure is unrolled. But because the training procedure must be unrolled thousands of times, the meta-objective must be defined with an orders-of-magnitude shorter time horizon than is typical for neural net training. We show that such short-horizon meta-objectives cause a serious bias towards small step sizes, an effect we term short-horizon bias. We introduce a toy problem, a noisy quadratic cost function, on which we analyze short-horizon bias by deriving and comparing the optimal schedules for short and long time horizons. We then run meta-optimization experiments (both offline and online) on standard benchmark datasets, showing that meta-optimization chooses too small a learning rate by multiple orders of magnitude, even when run with a moderately long time horizon (100 steps) typical of work in the area. We believe short-horizon bias is a fundamental problem that needs to be addressed if meta-optimization is to scale to practical neural net training regimes. | This paper proposes a simple problem to demonstrate the short-horizon bias of the learning rate meta-optimization. - The idealized case of quadratic function the analytical solution offers a good way to understand how T-step look ahead can benefit the meta-algorithm. - The second part of the paper seems to be a bit disconnected to the quadratic function analysis. It would be helpful to understand if there is gap between gradient based meta-optimization and the best effort(given by the analytical solution) - Unfortunately, no guideline or solution is offered in the paper. In summary, the idealized model gives a good demonstration of the problem itself. I think it might be of interest to some audiences in ICLR. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nUnderstanding Short-Horizon Bias in Stochastic Meta-Optimization\nCareful tuning of the learning rate, or even schedules thereof, can be crucial to effective neural net training. There has been much recent interest in gradient-based meta-optimization, where one tunes hyperparameters, or even learns an optimizer, in order to minimize the expected loss when the training procedure is unrolled. But because the training procedure must be unrolled thousands of times, the meta-objective must be defined with an orders-of-magnitude shorter time horizon than is typical for neural net training. We show that such short-horizon meta-objectives cause a serious bias towards small step sizes, an effect we term short-horizon bias. We introduce a toy problem, a noisy quadratic cost function, on which we analyze short-horizon bias by deriving and comparing the optimal schedules for short and long time horizons. We then run meta-optimization experiments (both offline and online) on standard benchmark datasets, showing that meta-optimization chooses too small a learning rate by multiple orders of magnitude, even when run with a moderately long time horizon (100 steps) typical of work in the area. We believe short-horizon bias is a fundamental problem that needs to be addressed if meta-optimization is to scale to practical neural net training regimes.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes a simple problem to demonstrate the short-horizon bias of the learning rate meta-optimization. - The idealized case of quadratic function the analytical solution offers a good way to understand how T-step look ahead can benefit the meta-algorithm. - The second part of the paper seems to be a bit disconnected to the quadratic function analysis. It would be helpful to understand if there is gap between gradient based meta-optimization and the best effort(given by the analytical solution) - Unfortunately, no guideline or solution is offered in the paper. In summary, the idealized model gives a good demonstration of the problem itself. I think it might be of interest to some audiences in ICLR.",
"role": "assistant"
}
]
| [
" This paper proposes a simple problem to demonstrate the short-horizon bias of the learning rate meta-optimization.",
"- The idealized case of quadratic function the analytical solution offers a good way to understand how T-step look ahead can benefit the meta-algorithm.",
"- The second part of the paper seems to be a bit disconnected to the quadratic function analysis.",
"It would be helpful to understand if there is gap between gradient based meta-optimization and the best effort(given by the analytical solution) - Unfortunately, no guideline or solution is offered in the paper.",
"In summary, the idealized model gives a good demonstration of the problem itself.",
"I think it might be of interest to some audiences in ICLR."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 4,
"praise": 3,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 6
} | 0.166667 | 0 | 0.333333 | 0.666667 | 0.5 | 0.166667 | 0.166667 | 0.166667 | 6 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.166667 | 1.393328 | 0.773339 |
Understanding Short-Horizon Bias in Stochastic Meta-Optimization | Careful tuning of the learning rate, or even schedules thereof, can be crucial to effective neural net training. There has been much recent interest in gradient-based meta-optimization, where one tunes hyperparameters, or even learns an optimizer, in order to minimize the expected loss when the training procedure is unrolled. But because the training procedure must be unrolled thousands of times, the meta-objective must be defined with an orders-of-magnitude shorter time horizon than is typical for neural net training. We show that such short-horizon meta-objectives cause a serious bias towards small step sizes, an effect we term short-horizon bias. We introduce a toy problem, a noisy quadratic cost function, on which we analyze short-horizon bias by deriving and comparing the optimal schedules for short and long time horizons. We then run meta-optimization experiments (both offline and online) on standard benchmark datasets, showing that meta-optimization chooses too small a learning rate by multiple orders of magnitude, even when run with a moderately long time horizon (100 steps) typical of work in the area. We believe short-horizon bias is a fundamental problem that needs to be addressed if meta-optimization is to scale to practical neural net training regimes. | The paper discusses the problems of meta optimization with small look-ahead: do small runs bias the results of tuning? The result is yes and the authors show how differently the tuning can be compared to tuning the full run. The Greedy schedules are far inferior to hand-tuned schedules as they focus on optimizing the large eigenvalues while the small eigenvalues can not be "seen" with a small lookahead. The authors show that this effect is caused by the noise in the obective function. pro: - Thorough discussion of the issue with theoretical understanding on small benchmark functions as well as theoretical work - Easy to read and follow cons: -Small issues in presentation: * Figure 2 "optimal learning rate" -> "optimal greedy learning rate", also reference to Theorem 2 for increased clarity. * The optimized learning rate in 2.3 is not described. This reduces reproducibility. * Figure 4 misses the red trajectories, also it would be easier to have colors on the same (log?)-scale. The text unfortunately does not explain why the loss function looks so vastly different with different look-ahead. I would assume from the description that the colors are based on the final loss values obtaine dby choosing a fixed pair of decay exponent and effective LR. Typos and notation: page 7 last paragraph: "We train the all" -> We train all notation page 5: i find .nabla_{.theta_i} confusing when .theta_i is a scalar, i would propose .frac{.partial}{.partial .theta_i} page 2: "But this would come at the expense of long-term optimization process": at this point of the paper it is not clear how or why this should happen. Maybe add a sentence regarding the large/Small eigenvalues? | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. 
Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nUnderstanding Short-Horizon Bias in Stochastic Meta-Optimization\nCareful tuning of the learning rate, or even schedules thereof, can be crucial to effective neural net training. There has been much recent interest in gradient-based meta-optimization, where one tunes hyperparameters, or even learns an optimizer, in order to minimize the expected loss when the training procedure is unrolled. But because the training procedure must be unrolled thousands of times, the meta-objective must be defined with an orders-of-magnitude shorter time horizon than is typical for neural net training. We show that such short-horizon meta-objectives cause a serious bias towards small step sizes, an effect we term short-horizon bias. We introduce a toy problem, a noisy quadratic cost function, on which we analyze short-horizon bias by deriving and comparing the optimal schedules for short and long time horizons. We then run meta-optimization experiments (both offline and online) on standard benchmark datasets, showing that meta-optimization chooses too small a learning rate by multiple orders of magnitude, even when run with a moderately long time horizon (100 steps) typical of work in the area. We believe short-horizon bias is a fundamental problem that needs to be addressed if meta-optimization is to scale to practical neural net training regimes.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper discusses the problems of meta optimization with small look-ahead: do small runs bias the results of tuning? The result is yes and the authors show how differently the tuning can be compared to tuning the full run. The Greedy schedules are far inferior to hand-tuned schedules as they focus on optimizing the large eigenvalues while the small eigenvalues can not be \"seen\" with a small lookahead. The authors show that this effect is caused by the noise in the obective function. pro: - Thorough discussion of the issue with theoretical understanding on small benchmark functions as well as theoretical work - Easy to read and follow cons: -Small issues in presentation: * Figure 2 \"optimal learning rate\" -> \"optimal greedy learning rate\", also reference to Theorem 2 for increased clarity. * The optimized learning rate in 2.3 is not described. This reduces reproducibility. * Figure 4 misses the red trajectories, also it would be easier to have colors on the same (log?)-scale. The text unfortunately does not explain why the loss function looks so vastly different with different look-ahead. I would assume from the description that the colors are based on the final loss values obtaine dby choosing a fixed pair of decay exponent and effective LR. Typos and notation: page 7 last paragraph: \"We train the all\" -> We train all notation page 5: i find .nabla_{.theta_i} confusing when .theta_i is a scalar, i would propose .frac{.partial}{.partial .theta_i} page 2: \"But this would come at the expense of long-term optimization process\": at this point of the paper it is not clear how or why this should happen. Maybe add a sentence regarding the large/Small eigenvalues?",
"role": "assistant"
}
]
| [
" The paper discusses the problems of meta optimization with small look-ahead: do small runs bias the results of tuning?",
"The result is yes and the authors show how differently the tuning can be compared to tuning the full run.",
"The Greedy schedules are far inferior to hand-tuned schedules as they focus on optimizing the large eigenvalues while the small eigenvalues can not be \"seen\" with a small lookahead.",
"The authors show that this effect is caused by the noise in the obective function.",
"pro: - Thorough discussion of the issue with theoretical understanding on small benchmark functions as well as theoretical work - Easy to read and follow cons: -Small issues in presentation: * Figure 2 \"optimal learning rate\" -> \"optimal greedy learning rate\", also reference to Theorem 2 for increased clarity.",
"* The optimized learning rate in 2.3 is not described.",
"This reduces reproducibility.",
"* Figure 4 misses the red trajectories, also it would be easier to have colors on the same (log?)-scale.",
"The text unfortunately does not explain why the loss function looks so vastly different with different look-ahead.",
"I would assume from the description that the colors are based on the final loss values obtaine dby choosing a fixed pair of decay exponent and effective LR.",
"Typos and notation: page 7 last paragraph: \"We train the all\" -> We train all notation page 5: i find .nabla_{.theta_i} confusing when .theta_i is a scalar, i would propose .frac{.partial}{.partial .theta_i} page 2: \"But this would come at the expense of long-term optimization process\": at this point of the paper it is not clear how or why this should happen.",
"Maybe add a sentence regarding the large/Small eigenvalues?"
] | {
"criticism": 4,
"example": 3,
"importance_and_relevance": 0,
"materials_and_methods": 5,
"praise": 1,
"presentation_and_reporting": 6,
"results_and_discussion": 5,
"suggestion_and_solution": 3,
"total": 12
} | 0.333333 | 0.25 | 0 | 0.416667 | 0.083333 | 0.5 | 0.416667 | 0.25 | 12 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 2.25 | 2.234218 | 0.015782 |
Representing dynamically: An active process for describing sequential data | We propose an unsupervised method for building dynamic representations of sequential data, particularly of observed interactions. The method simultaneously acquires representations of input data and its dynamics. It is based on a hierarchical generative model composed of two levels. In the first level, a model learns representations to generate observed data. In the second level, representational states encode the dynamics of the lower one. The model is designed as a Bayesian network with switching variables represented in the higher level, and which generates transition models. The method actively explores the latent space guided by its knowledge and the uncertainty about it. That is achieved by updating the latent variables from prediction error signals backpropagated to the latent space. So, no encoder or inference models are used since the generators also serve as their inverse transformations. The method is evaluated in two scenarios, with static images and with videos. The results show that the adaptation over time leads to better performance than with similar architectures without temporal dependencies, e.g., variational autoencoders. With videos, it is shown that the system extracts the dynamics of the data in states that highly correlate with the ground truth of the actions observed. | This paper, to me, is a clear rejection from these basic observations: A *model* is not a computation graph and should never be presented that way. Calling a computation graph a Bayesian network without even writing down how *inference* can ever result in such a computation graph is a basic error. The authors claim this: "So, no encoder or inference models are used since the generators also serve as their inverse transformations." Well, then this is not a Bayesian network. The authors spend a lot of time analyzing constant inputs theoretically, and I*m not sure why this is even relevant. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRepresenting dynamically: An active process for describing sequential data\nWe propose an unsupervised method for building dynamic representations of sequential data, particularly of observed interactions. The method simultaneously acquires representations of input data and its dynamics. It is based on a hierarchical generative model composed of two levels. In the first level, a model learns representations to generate observed data. In the second level, representational states encode the dynamics of the lower one. The model is designed as a Bayesian network with switching variables represented in the higher level, and which generates transition models. The method actively explores the latent space guided by its knowledge and the uncertainty about it. That is achieved by updating the latent variables from prediction error signals backpropagated to the latent space. So, no encoder or inference models are used since the generators also serve as their inverse transformations. The method is evaluated in two scenarios, with static images and with videos. The results show that the adaptation over time leads to better performance than with similar architectures without temporal dependencies, e.g., variational autoencoders. With videos, it is shown that the system extracts the dynamics of the data in states that highly correlate with the ground truth of the actions observed.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper, to me, is a clear rejection from these basic observations: A *model* is not a computation graph and should never be presented that way. Calling a computation graph a Bayesian network without even writing down how *inference* can ever result in such a computation graph is a basic error. The authors claim this: \"So, no encoder or inference models are used since the generators also serve as their inverse transformations.\" Well, then this is not a Bayesian network. The authors spend a lot of time analyzing constant inputs theoretically, and I*m not sure why this is even relevant.",
"role": "assistant"
}
]
| [
" This paper, to me, is a clear rejection from these basic observations: A *model* is not a computation graph and should never be presented that way.",
"Calling a computation graph a Bayesian network without even writing down how *inference* can ever result in such a computation graph is a basic error.",
"The authors claim this: \"So, no encoder or inference models are used since the generators also serve as their inverse transformations.\"",
"Well, then this is not a Bayesian network.",
"The authors spend a lot of time analyzing constant inputs theoretically, and I*m not sure why this is even relevant."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 0,
"materials_and_methods": 4,
"praise": 0,
"presentation_and_reporting": 1,
"results_and_discussion": 2,
"suggestion_and_solution": 1,
"total": 5
} | 0.6 | 0 | 0 | 0.8 | 0 | 0.2 | 0.4 | 0.2 | 5 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 2.2 | 1.189924 | 1.010076 |
Representing dynamically: An active process for describing sequential data | We propose an unsupervised method for building dynamic representations of sequential data, particularly of observed interactions. The method simultaneously acquires representations of input data and its dynamics. It is based on a hierarchical generative model composed of two levels. In the first level, a model learns representations to generate observed data. In the second level, representational states encode the dynamics of the lower one. The model is designed as a Bayesian network with switching variables represented in the higher level, and which generates transition models. The method actively explores the latent space guided by its knowledge and the uncertainty about it. That is achieved by updating the latent variables from prediction error signals backpropagated to the latent space. So, no encoder or inference models are used since the generators also serve as their inverse transformations. The method is evaluated in two scenarios, with static images and with videos. The results show that the adaptation over time leads to better performance than with similar architectures without temporal dependencies, e.g., variational autoencoders. With videos, it is shown that the system extracts the dynamics of the data in states that highly correlate with the ground truth of the actions observed. | The authors propose an architecture and generative model for static images and video sequences, with the purpose of generating an image that looks as similar as possible to the one that is supplied. This is useful for for example frame prediction in video and detection of changes in video as a consequence to changes in the dynamics of objects in the scene. The architecture minimizes the error between the generated image(s) and the supplied image(s) by refining the generated image over time when the same image is shown and by adapting when the image is changed. The model consists of three neural networks (F_Zµ, F_Zsigma, f_X|Z) and three multivariate Gaussian distributions P(S_t), P(Z_t) and P(X_t,Z_t) with diagonal covariances. The NNs do not change over time but they relate the three Gaussian distributions in different ways and these distributions are changing over time in order to minimize the error of the generated image(s). The paper took a while to understand due to its structure and how it is written. A short overview of the different components of Figure 1 giving the general idea and explaining * what nodes are stochastic variables and NNs * what is trained offline/online It would also help the structure if the links/arrows/nodes had numbers corresponding to the relevant equations defining the relations/computations. Some of these relations are defined with explicit equation numbers, others are baked into the text which makes it difficult to jump around in the paper when reading it and trying to understand the architecture. There are also numerous language errors in the paper and many of them are grammatical. For example: Page 5, second paragraph: "osculations" -> "oscillations" Page 5, fourth paragraph: "Defining .. is defined as.." The results seem impressive and the problem under consideration is important and have several applications. There is however not much in terms of discussion nor analysis of the two experiments. I find the contribution fairly significant but I lack some clarity in the presentation as well as in the experiments section. I do not find the paper clearly written. 
The presentation can be improved in several chapters, such as the introduction and the method section. The paper seem to be technically correct. I did not spot any errors. General comments: - Why is 2 a suitable size of S? - Why use two by two encoding instead of stride for the VAE baseline? How does this affect the experiments? - How is S_0 and the Z_0 prior set (initialized) in the experiments? - It would improve the readability of the paper, not least for a broader audience, if more details are added on how the VAE baseline architecture differ from the proposed architecture. - In the first experiment: -- How many iterations does it take for your method to beat VAE? -- What is the difference between the VAE basline and your approach that make VAE perform better than your approach initially (or even after a few iterations)? -- What affect does the momentum formulation have on the convergence rate (number of iterations necessary to reach VAE and your methods result at t=10)? - In the second experiment and Figure 5 in particular it was observed that some actions are clearly detected while others are not. It is mentioned that those that are not detected by the approach are more similar. In what sense are the actions more similar, which are the most prominent (from a humans perspective) such actions, what is making the model not detecting them and what can be done (within your approach) in order to improve or adjust the detection fidelity? Please add more time labels under the time axis in Figure 4 and Figure 5. Also please annotate the figures at the time points where the action transitions are according to the ground truth. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRepresenting dynamically: An active process for describing sequential data\nWe propose an unsupervised method for building dynamic representations of sequential data, particularly of observed interactions. The method simultaneously acquires representations of input data and its dynamics. It is based on a hierarchical generative model composed of two levels. In the first level, a model learns representations to generate observed data. In the second level, representational states encode the dynamics of the lower one. The model is designed as a Bayesian network with switching variables represented in the higher level, and which generates transition models. The method actively explores the latent space guided by its knowledge and the uncertainty about it. That is achieved by updating the latent variables from prediction error signals backpropagated to the latent space. So, no encoder or inference models are used since the generators also serve as their inverse transformations. The method is evaluated in two scenarios, with static images and with videos. The results show that the adaptation over time leads to better performance than with similar architectures without temporal dependencies, e.g., variational autoencoders. With videos, it is shown that the system extracts the dynamics of the data in states that highly correlate with the ground truth of the actions observed.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors propose an architecture and generative model for static images and video sequences, with the purpose of generating an image that looks as similar as possible to the one that is supplied. This is useful for for example frame prediction in video and detection of changes in video as a consequence to changes in the dynamics of objects in the scene. The architecture minimizes the error between the generated image(s) and the supplied image(s) by refining the generated image over time when the same image is shown and by adapting when the image is changed. The model consists of three neural networks (F_Zµ, F_Zsigma, f_X|Z) and three multivariate Gaussian distributions P(S_t), P(Z_t) and P(X_t,Z_t) with diagonal covariances. The NNs do not change over time but they relate the three Gaussian distributions in different ways and these distributions are changing over time in order to minimize the error of the generated image(s). The paper took a while to understand due to its structure and how it is written. A short overview of the different components of Figure 1 giving the general idea and explaining * what nodes are stochastic variables and NNs * what is trained offline/online It would also help the structure if the links/arrows/nodes had numbers corresponding to the relevant equations defining the relations/computations. Some of these relations are defined with explicit equation numbers, others are baked into the text which makes it difficult to jump around in the paper when reading it and trying to understand the architecture. There are also numerous language errors in the paper and many of them are grammatical. For example: Page 5, second paragraph: \"osculations\" -> \"oscillations\" Page 5, fourth paragraph: \"Defining .. is defined as..\" The results seem impressive and the problem under consideration is important and have several applications. There is however not much in terms of discussion nor analysis of the two experiments. I find the contribution fairly significant but I lack some clarity in the presentation as well as in the experiments section. I do not find the paper clearly written. 
The presentation can be improved in several chapters, such as the introduction and the method section. The paper seem to be technically correct. I did not spot any errors. General comments: - Why is 2 a suitable size of S? - Why use two by two encoding instead of stride for the VAE baseline? How does this affect the experiments? - How is S_0 and the Z_0 prior set (initialized) in the experiments? - It would improve the readability of the paper, not least for a broader audience, if more details are added on how the VAE baseline architecture differ from the proposed architecture. - In the first experiment: -- How many iterations does it take for your method to beat VAE? -- What is the difference between the VAE basline and your approach that make VAE perform better than your approach initially (or even after a few iterations)? -- What affect does the momentum formulation have on the convergence rate (number of iterations necessary to reach VAE and your methods result at t=10)? - In the second experiment and Figure 5 in particular it was observed that some actions are clearly detected while others are not. It is mentioned that those that are not detected by the approach are more similar. In what sense are the actions more similar, which are the most prominent (from a humans perspective) such actions, what is making the model not detecting them and what can be done (within your approach) in order to improve or adjust the detection fidelity? Please add more time labels under the time axis in Figure 4 and Figure 5. Also please annotate the figures at the time points where the action transitions are according to the ground truth.",
"role": "assistant"
}
]
| [
" The authors propose an architecture and generative model for static images and video sequences, with the purpose of generating an image that looks as similar as possible to the one that is supplied.",
"This is useful for for example frame prediction in video and detection of changes in video as a consequence to changes in the dynamics of objects in the scene.",
"The architecture minimizes the error between the generated image(s) and the supplied image(s) by refining the generated image over time when the same image is shown and by adapting when the image is changed.",
"The model consists of three neural networks (F_Zµ, F_Zsigma, f_X|Z) and three multivariate Gaussian distributions P(S_t), P(Z_t) and P(X_t,Z_t) with diagonal covariances.",
"The NNs do not change over time but they relate the three Gaussian distributions in different ways and these distributions are changing over time in order to minimize the error of the generated image(s).",
"The paper took a while to understand due to its structure and how it is written.",
"A short overview of the different components of Figure 1 giving the general idea and explaining * what nodes are stochastic variables and NNs * what is trained offline/online It would also help the structure if the links/arrows/nodes had numbers corresponding to the relevant equations defining the relations/computations.",
"Some of these relations are defined with explicit equation numbers, others are baked into the text which makes it difficult to jump around in the paper when reading it and trying to understand the architecture.",
"There are also numerous language errors in the paper and many of them are grammatical.",
"For example: Page 5, second paragraph: \"osculations\" -> \"oscillations\" Page 5, fourth paragraph: \"Defining .. is defined as..\" The results seem impressive and the problem under consideration is important and have several applications.",
"There is however not much in terms of discussion nor analysis of the two experiments.",
"I find the contribution fairly significant but I lack some clarity in the presentation as well as in the experiments section.",
"I do not find the paper clearly written.",
"The presentation can be improved in several chapters, such as the introduction and the method section.",
"The paper seem to be technically correct.",
"I did not spot any errors.",
"General comments: - Why is 2 a suitable size of S?",
"- Why use two by two encoding instead of stride for the VAE baseline?",
"How does this affect the experiments?",
"- How is S_0 and the Z_0 prior set (initialized) in the experiments?",
"- It would improve the readability of the paper, not least for a broader audience, if more details are added on how the VAE baseline architecture differ from the proposed architecture.",
"- In the first experiment: -- How many iterations does it take for your method to beat VAE?",
"-- What is the difference between the VAE basline and your approach that make VAE perform better than your approach initially (or even after a few iterations)?",
"-- What affect does the momentum formulation have on the convergence rate (number of iterations necessary to reach VAE and your methods result at t=10)?",
"- In the second experiment and Figure 5 in particular it was observed that some actions are clearly detected while others are not.",
"It is mentioned that those that are not detected by the approach are more similar.",
"In what sense are the actions more similar, which are the most prominent (from a humans perspective) such actions, what is making the model not detecting them and what can be done (within your approach) in order to improve or adjust the detection fidelity?",
"Please add more time labels under the time axis in Figure 4 and Figure 5.",
"Also please annotate the figures at the time points where the action transitions are according to the ground truth."
] | {
"criticism": 6,
"example": 4,
"importance_and_relevance": 2,
"materials_and_methods": 19,
"praise": 3,
"presentation_and_reporting": 14,
"results_and_discussion": 8,
"suggestion_and_solution": 6,
"total": 29
} | 0.206897 | 0.137931 | 0.068966 | 0.655172 | 0.103448 | 0.482759 | 0.275862 | 0.206897 | 29 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.137931 | -1.902371 | 4.040302 |
An efficient framework for learning sentence representations | In this work we propose a simple and efficient framework for learning sentence representations from unlabelled data. Drawing inspiration from the distributional hypothesis and recent work on learning sentence representations, we reformulate the problem of predicting the context in which a sentence appears as a classification problem. Given a sentence and the context in which it appears, a classifier distinguishes context sentences from other contrastive sentences based on their vector representations. This allows us to efficiently learn different types of encoding functions, and we show that the model learns high-quality sentence representations. We demonstrate that our sentence representations outperform state-of-the-art unsupervised and supervised representation learning methods on several downstream NLP tasks that involve understanding sentence semantics while achieving an order of magnitude speedup in training time. | This paper proposes a framework for unsupervised learning of sentence representations by maximizing a model of the probability of true context sentences relative to random candidate sentences. Unique aspects of this skip-gram style model include separate target- and context-sentence encoders, as well as a dot-product similarity measure between representations. A battery of experiments indicate that the learned representations have comparable or better performance compared to other, more computationally-intensive models. While the main constituent ideas of this paper are not entirely novel, I think the specific combination of tools has not been explored previously. As such, the novelty of this paper rests in the specific modeling choices and the significance hinges on the good empirical results. For this reason, I believe it is important that additional details regarding the specific architecture and training details be included in the paper. For example, how many layers is the GRU? What type of parameter initialization is used? Releasing source code would help answer these and other questions, but including more details in the paper itself would also be welcome. Regarding the empirical results, the method does appear to achieve good performance, especially given the compute time. However, the balance between performance and computational complexity is not investigated, and I think such an analysis would add significant value to the paper. For example, I see at least three ways in which performance could be improved at the expense of additional computation: 1) increasing the candidate pool size 2) increasing the corpus size and 3) increasing the embedding size / increasing the encoder capacity. Does the good performance/efficiency reported in the paper depend on achieving a sweet spot among those three hyperparameters? Overall, the novelty of this paper is fairly low and there is still substantial room for improvement in some of the analysis. On the other hand, I think this paper proposes an intuitive model and demonstrates good performance. I am on the fence, but ultimately I vote to accept this paper for publication. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. 
Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAn efficient framework for learning sentence representations\nIn this work we propose a simple and efficient framework for learning sentence representations from unlabelled data. Drawing inspiration from the distributional hypothesis and recent work on learning sentence representations, we reformulate the problem of predicting the context in which a sentence appears as a classification problem. Given a sentence and the context in which it appears, a classifier distinguishes context sentences from other contrastive sentences based on their vector representations. This allows us to efficiently learn different types of encoding functions, and we show that the model learns high-quality sentence representations. We demonstrate that our sentence representations outperform state-of-the-art unsupervised and supervised representation learning methods on several downstream NLP tasks that involve understanding sentence semantics while achieving an order of magnitude speedup in training time.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes a framework for unsupervised learning of sentence representations by maximizing a model of the probability of true context sentences relative to random candidate sentences. Unique aspects of this skip-gram style model include separate target- and context-sentence encoders, as well as a dot-product similarity measure between representations. A battery of experiments indicate that the learned representations have comparable or better performance compared to other, more computationally-intensive models. While the main constituent ideas of this paper are not entirely novel, I think the specific combination of tools has not been explored previously. As such, the novelty of this paper rests in the specific modeling choices and the significance hinges on the good empirical results. For this reason, I believe it is important that additional details regarding the specific architecture and training details be included in the paper. For example, how many layers is the GRU? What type of parameter initialization is used? Releasing source code would help answer these and other questions, but including more details in the paper itself would also be welcome. Regarding the empirical results, the method does appear to achieve good performance, especially given the compute time. However, the balance between performance and computational complexity is not investigated, and I think such an analysis would add significant value to the paper. For example, I see at least three ways in which performance could be improved at the expense of additional computation: 1) increasing the candidate pool size 2) increasing the corpus size and 3) increasing the embedding size / increasing the encoder capacity. Does the good performance/efficiency reported in the paper depend on achieving a sweet spot among those three hyperparameters? Overall, the novelty of this paper is fairly low and there is still substantial room for improvement in some of the analysis. On the other hand, I think this paper proposes an intuitive model and demonstrates good performance. I am on the fence, but ultimately I vote to accept this paper for publication.",
"role": "assistant"
}
]
| [
" This paper proposes a framework for unsupervised learning of sentence representations by maximizing a model of the probability of true context sentences relative to random candidate sentences.",
"Unique aspects of this skip-gram style model include separate target- and context-sentence encoders, as well as a dot-product similarity measure between representations.",
"A battery of experiments indicate that the learned representations have comparable or better performance compared to other, more computationally-intensive models.",
"While the main constituent ideas of this paper are not entirely novel, I think the specific combination of tools has not been explored previously.",
"As such, the novelty of this paper rests in the specific modeling choices and the significance hinges on the good empirical results.",
"For this reason, I believe it is important that additional details regarding the specific architecture and training details be included in the paper.",
"For example, how many layers is the GRU?",
"What type of parameter initialization is used?",
"Releasing source code would help answer these and other questions, but including more details in the paper itself would also be welcome.",
"Regarding the empirical results, the method does appear to achieve good performance, especially given the compute time.",
"However, the balance between performance and computational complexity is not investigated, and I think such an analysis would add significant value to the paper.",
"For example, I see at least three ways in which performance could be improved at the expense of additional computation: 1) increasing the candidate pool size 2) increasing the corpus size and 3) increasing the embedding size / increasing the encoder capacity.",
"Does the good performance/efficiency reported in the paper depend on achieving a sweet spot among those three hyperparameters?",
"Overall, the novelty of this paper is fairly low and there is still substantial room for improvement in some of the analysis.",
"On the other hand, I think this paper proposes an intuitive model and demonstrates good performance.",
"I am on the fence, but ultimately I vote to accept this paper for publication."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 14,
"praise": 5,
"presentation_and_reporting": 3,
"results_and_discussion": 3,
"suggestion_and_solution": 4,
"total": 16
} | 0.1875 | 0 | 0.1875 | 0.875 | 0.3125 | 0.1875 | 0.1875 | 0.25 | 16 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.1875 | 2.045458 | 0.142042 |
Towards better understanding of gradient-based attribution methods for Deep Neural Networks | Understanding the flow of information in Deep Neural Networks (DNNs) is a challenging problem that has gain increasing attention over the last few years. While several methods have been proposed to explain network predictions, there have been only a few attempts to compare them from a theoretical perspective. What is more, no exhaustive empirical comparison has been performed in the past. In this work we analyze four gradient-based attribution methods and formally prove conditions of equivalence and approximation between them. By reformulating two of these methods, we construct a unified framework which enables a direct comparison, as well as an easier implementation. Finally, we propose a novel evaluation metric, called Sensitivity-n and test the gradient-based attribution methods alongside with a simple perturbation-based attribution method on several datasets in the domains of image and text classification, using various network architectures. | This paper discusses several gradient based attribution methods, which have been popular for the fast computation of saliency maps for interpreting deep neural networks. The paper provides several advances: - .epsilon-LRP and DeepLIFT are formulated in a way that can be calculated using the same back-propagation as training. - This gives a more unified way of understanding, and implementing the methods. - The paper points out situations when the methods are equivalent - The paper analyses the methods* sensitivity to identifying single and joint regions of sensitivity - The paper proposes a new objective function to measure joint sensitivity Overall, I believe this paper to be a useful contribution to the literature. It both solidifies understanding of existing methods and provides new insight into quantitate ways of analysing methods. Especially the latter will be appreciated. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTowards better understanding of gradient-based attribution methods for Deep Neural Networks\nUnderstanding the flow of information in Deep Neural Networks (DNNs) is a challenging problem that has gain increasing attention over the last few years. While several methods have been proposed to explain network predictions, there have been only a few attempts to compare them from a theoretical perspective. What is more, no exhaustive empirical comparison has been performed in the past. In this work we analyze four gradient-based attribution methods and formally prove conditions of equivalence and approximation between them. By reformulating two of these methods, we construct a unified framework which enables a direct comparison, as well as an easier implementation. Finally, we propose a novel evaluation metric, called Sensitivity-n and test the gradient-based attribution methods alongside with a simple perturbation-based attribution method on several datasets in the domains of image and text classification, using various network architectures.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper discusses several gradient based attribution methods, which have been popular for the fast computation of saliency maps for interpreting deep neural networks. The paper provides several advances: - .epsilon-LRP and DeepLIFT are formulated in a way that can be calculated using the same back-propagation as training. - This gives a more unified way of understanding, and implementing the methods. - The paper points out situations when the methods are equivalent - The paper analyses the methods* sensitivity to identifying single and joint regions of sensitivity - The paper proposes a new objective function to measure joint sensitivity Overall, I believe this paper to be a useful contribution to the literature. It both solidifies understanding of existing methods and provides new insight into quantitate ways of analysing methods. Especially the latter will be appreciated.",
"role": "assistant"
}
]
| [
" This paper discusses several gradient based attribution methods, which have been popular for the fast computation of saliency maps for interpreting deep neural networks.",
"The paper provides several advances: - .epsilon-LRP and DeepLIFT are formulated in a way that can be calculated using the same back-propagation as training.",
"- This gives a more unified way of understanding, and implementing the methods.",
"- The paper points out situations when the methods are equivalent - The paper analyses the methods* sensitivity to identifying single and joint regions of sensitivity - The paper proposes a new objective function to measure joint sensitivity Overall, I believe this paper to be a useful contribution to the literature.",
"It both solidifies understanding of existing methods and provides new insight into quantitate ways of analysing methods.",
"Especially the latter will be appreciated."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 4,
"materials_and_methods": 5,
"praise": 3,
"presentation_and_reporting": 0,
"results_and_discussion": 1,
"suggestion_and_solution": 0,
"total": 6
} | 0 | 0 | 0.666667 | 0.833333 | 0.5 | 0 | 0.166667 | 0 | 6 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 2.166667 | 1.393328 | 0.773339 |
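Each record above ends with a criteria_count dictionary followed by a row of per-category ratios, 0/1 flags, an aggregate score, a length-adjusted score, and a penalty term. Those trailing numbers are consistent with a simple derivation: each ratio is the category count divided by total, each flag marks a non-zero count, the aggregate score is the sum of the eight ratios, and the adjusted score subtracts the final column (which is larger for shorter reviews, so it is treated here as a given length penalty rather than derived). The sketch below reproduces the first record's figures under that assumption; only the dictionary keys are taken from the records, and the function and output names are hypothetical.

```python
from typing import Dict

# Keys copied verbatim from the criteria_count dictionaries above; every other
# name in this sketch (function name, "reward", "length_penalty") is a placeholder.
CATEGORIES = [
    "criticism",
    "example",
    "importance_and_relevance",
    "materials_and_methods",
    "praise",
    "presentation_and_reporting",
    "results_and_discussion",
    "suggestion_and_solution",
]

def derive_columns(criteria_count: Dict[str, int], length_penalty: float) -> Dict[str, float]:
    """Recompute the trailing numeric columns of a record from its sentence counts."""
    total = criteria_count["total"]
    out: Dict[str, float] = {}
    for cat in CATEGORIES:
        count = criteria_count[cat]
        out[cat] = count / total              # per-category ratio, e.g. 4 / 6 = 0.666667
        out[f"has_{cat}"] = float(count > 0)  # 0/1 flag
    reward = sum(out[cat] for cat in CATEGORIES)  # sum of the eight ratios
    out["reward"] = reward
    out["reward_length_adjusted"] = reward - length_penalty
    return out

# The criteria_count of the first record above reproduces its trailing columns:
first_record = {
    "criticism": 0, "example": 0, "importance_and_relevance": 4,
    "materials_and_methods": 5, "praise": 3, "presentation_and_reporting": 0,
    "results_and_discussion": 1, "suggestion_and_solution": 0, "total": 6,
}
cols = derive_columns(first_record, length_penalty=0.773339)
print(round(cols["importance_and_relevance"], 6),
      round(cols["reward"], 6),
      round(cols["reward_length_adjusted"], 6))
# -> 0.666667 2.166667 1.393328, matching the record's trailing columns
```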
Breaking the Softmax Bottleneck: A High-Rank RNN Language Model | We formulate language modeling as a matrix factorization problem, and show that the expressiveness of Softmax-based models (including the majority of neural language models) is limited by a Softmax bottleneck. Given that natural language is highly context-dependent, this further implies that in practice Softmax with distributed word embeddings does not have enough capacity to model natural language. We propose a simple and effective method to address this issue, and improve the state-of-the-art perplexities on Penn Treebank and WikiText-2 to 47.69 and 40.68 respectively. The proposed method also excels on the large-scale 1B Word dataset, outperforming the baseline by over 5.6 points in perplexity. | The authors argue in this paper that due to the limited rank of the context-to-vocabulary logit matrix in the currently used version of the softmax output layer, it is not able to capture the full complexity of language. As a result, they propose to use a mixture of softmax output layers instead where the mixing probabilities are context-dependent, which allows to obtain a full rank logit matrix in complexity linear in the number of mixture components (here 15). This leads to improvements in the word-level perplexities of the PTB and wikitext2 data sets, and Switchboard BLEU scores. The question of the expressiveness of the softmax layer, as well as its suitability for word-level prediction, is indeed an important one which has received too little attention. This makes a lot of the questions asked in this paper extremely relevant to the field. However, it is unclear that the rank of the logit matrix is the right quantity to consider. For example, it is easy to describe a rank D NxM matrix where up to 2^D lines have max values at different indices. Further, the first two "observations" in Section 2.2 would be more accurately described as "intuitions" of the authors. As they write themselves "there is no evidence showing that semantic meanings are fully linearly correlated." Why then try to link "meanings" to basis vectors for the rows of A? To be clear, the proposed model is undoubtedly more expressive than a regular softmax, and although it does come at a substantial computational cost (a back-of-the envelope calculation tells us that computing 15 components of 280d MoS takes the same number of operations as one with dimension 1084 = sqrt (280*280*15)), it apparently manages not to drastically increase overfitting, which is significant. Unfortunately, this is only tested on relatively small data sets, up to 2M tokens and a vocabulary of size 30K for language modeling. They do constitute a good starting place to test a model, but given the importance of regularization on those specific tasks, it is difficult to predict how the MoS would behave if more training data were available, and if one could e.g. simply try a 1084 dimension embedding for the softmax without having to worry about overfitting. Another important missing experiment would consist in varying the number of mixture components (this could very well be done on WikiText2). This could help validate the hypothesis: how does the estimated rank vary with the number of components? How about the performance and pairwise KL divergence? This paper offers a promising direction for language modeling research, but would require more justification, or at least a more developed experimental section. 
Pros: - Important starting question - Thought-provoking approach - Experimental gains on small data sets Cons: - The link between the intuition and reality of the gains is not obvious - Experiments limited to small data sets, some obvious questions remain | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nBreaking the Softmax Bottleneck: A High-Rank RNN Language Model\nWe formulate language modeling as a matrix factorization problem, and show that the expressiveness of Softmax-based models (including the majority of neural language models) is limited by a Softmax bottleneck. Given that natural language is highly context-dependent, this further implies that in practice Softmax with distributed word embeddings does not have enough capacity to model natural language. We propose a simple and effective method to address this issue, and improve the state-of-the-art perplexities on Penn Treebank and WikiText-2 to 47.69 and 40.68 respectively. The proposed method also excels on the large-scale 1B Word dataset, outperforming the baseline by over 5.6 points in perplexity.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors argue in this paper that due to the limited rank of the context-to-vocabulary logit matrix in the currently used version of the softmax output layer, it is not able to capture the full complexity of language. As a result, they propose to use a mixture of softmax output layers instead where the mixing probabilities are context-dependent, which allows to obtain a full rank logit matrix in complexity linear in the number of mixture components (here 15). This leads to improvements in the word-level perplexities of the PTB and wikitext2 data sets, and Switchboard BLEU scores. The question of the expressiveness of the softmax layer, as well as its suitability for word-level prediction, is indeed an important one which has received too little attention. This makes a lot of the questions asked in this paper extremely relevant to the field. However, it is unclear that the rank of the logit matrix is the right quantity to consider. For example, it is easy to describe a rank D NxM matrix where up to 2^D lines have max values at different indices. Further, the first two \"observations\" in Section 2.2 would be more accurately described as \"intuitions\" of the authors. As they write themselves \"there is no evidence showing that semantic meanings are fully linearly correlated.\" Why then try to link \"meanings\" to basis vectors for the rows of A? To be clear, the proposed model is undoubtedly more expressive than a regular softmax, and although it does come at a substantial computational cost (a back-of-the envelope calculation tells us that computing 15 components of 280d MoS takes the same number of operations as one with dimension 1084 = sqrt (280*280*15)), it apparently manages not to drastically increase overfitting, which is significant. Unfortunately, this is only tested on relatively small data sets, up to 2M tokens and a vocabulary of size 30K for language modeling. They do constitute a good starting place to test a model, but given the importance of regularization on those specific tasks, it is difficult to predict how the MoS would behave if more training data were available, and if one could e.g. 
simply try a 1084 dimension embedding for the softmax without having to worry about overfitting. Another important missing experiment would consist in varying the number of mixture components (this could very well be done on WikiText2). This could help validate the hypothesis: how does the estimated rank vary with the number of components? How about the performance and pairwise KL divergence? This paper offers a promising direction for language modeling research, but would require more justification, or at least a more developed experimental section. Pros: - Important starting question - Thought-provoking approach - Experimental gains on small data sets Cons: - The link between the intuition and reality of the gains is not obvious - Experiments limited to small data sets, some obvious questions remain",
"role": "assistant"
}
]
| [
" The authors argue in this paper that due to the limited rank of the context-to-vocabulary logit matrix in the currently used version of the softmax output layer, it is not able to capture the full complexity of language.",
"As a result, they propose to use a mixture of softmax output layers instead where the mixing probabilities are context-dependent, which allows to obtain a full rank logit matrix in complexity linear in the number of mixture components (here 15).",
"This leads to improvements in the word-level perplexities of the PTB and wikitext2 data sets, and Switchboard BLEU scores.",
"The question of the expressiveness of the softmax layer, as well as its suitability for word-level prediction, is indeed an important one which has received too little attention.",
"This makes a lot of the questions asked in this paper extremely relevant to the field.",
"However, it is unclear that the rank of the logit matrix is the right quantity to consider.",
"For example, it is easy to describe a rank D NxM matrix where up to 2^D lines have max values at different indices.",
"Further, the first two \"observations\" in Section 2.2 would be more accurately described as \"intuitions\" of the authors.",
"As they write themselves \"there is no evidence showing that semantic meanings are fully linearly correlated.\"",
"Why then try to link \"meanings\" to basis vectors for the rows of A?",
"To be clear, the proposed model is undoubtedly more expressive than a regular softmax, and although it does come at a substantial computational cost (a back-of-the envelope calculation tells us that computing 15 components of 280d MoS takes the same number of operations as one with dimension 1084 = sqrt (280*280*15)), it apparently manages not to drastically increase overfitting, which is significant.",
"Unfortunately, this is only tested on relatively small data sets, up to 2M tokens and a vocabulary of size 30K for language modeling.",
"They do constitute a good starting place to test a model, but given the importance of regularization on those specific tasks, it is difficult to predict how the MoS would behave if more training data were available, and if one could e.g.",
"simply try a 1084 dimension embedding for the softmax without having to worry about overfitting.",
"Another important missing experiment would consist in varying the number of mixture components (this could very well be done on WikiText2).",
"This could help validate the hypothesis: how does the estimated rank vary with the number of components?",
"How about the performance and pairwise KL divergence?",
"This paper offers a promising direction for language modeling research, but would require more justification, or at least a more developed experimental section.",
"Pros: - Important starting question - Thought-provoking approach - Experimental gains on small data sets Cons: - The link between the intuition and reality of the gains is not obvious - Experiments limited to small data sets, some obvious questions remain"
] | {
"criticism": 5,
"example": 4,
"importance_and_relevance": 2,
"materials_and_methods": 11,
"praise": 2,
"presentation_and_reporting": 10,
"results_and_discussion": 4,
"suggestion_and_solution": 7,
"total": 19
} | 0.263158 | 0.210526 | 0.105263 | 0.578947 | 0.105263 | 0.526316 | 0.210526 | 0.368421 | 19 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.368421 | 1.800254 | 0.568167 |
Distributed non-parametric deep and wide networks | In recent work, it was shown that combining multi-kernel based support vector machines (SVMs) can lead to near state-of-the-art performance on an action recognition dataset (HMDB-51 dataset). In the present work, we show that combining distributed Gaussian Processes with multi-stream deep convolutional neural networks (CNN) alleviate the need to augment a neural network with hand-crafted features. In contrast to prior work, we treat each deep neural convolutional network as an expert wherein the individual predictions (and their respective uncertainties) are combined into a Product of Experts (PoE) framework. | - The paper is fairly written and it is clear what is being done - There is not much novelty in the paper; it combines known techniques and is a systems paper, so I would judge the contributions mainly in terms of the empirical results and messsage conveyed (see third point) - The paper builds on a previous paper (ICCV Workshops, https://arxiv.org/pdf/1707.06923.pdf), however, there is non-trivial overlap between the two papers, e.g. Fig. 1 seems to be almost the same figure from that paper, Sec 2.1 from the previous paper is largely copied - The message from the empirical validation is also not novel, in the ICCVW paper it was shown that the combination of different modalities etc. using a multiple kernel learning framework improved results (73.3 on HMDB51), while in the current paper the same message comes across with another kind of (known) method for combining different classifiers and modality (without iDT their best results are 73.6 for CNN+GP-PoE) | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDistributed non-parametric deep and wide networks\nIn recent work, it was shown that combining multi-kernel based support vector machines (SVMs) can lead to near state-of-the-art performance on an action recognition dataset (HMDB-51 dataset). In the present work, we show that combining distributed Gaussian Processes with multi-stream deep convolutional neural networks (CNN) alleviate the need to augment a neural network with hand-crafted features. In contrast to prior work, we treat each deep neural convolutional network as an expert wherein the individual predictions (and their respective uncertainties) are combined into a Product of Experts (PoE) framework.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n - The paper is fairly written and it is clear what is being done - There is not much novelty in the paper; it combines known techniques and is a systems paper, so I would judge the contributions mainly in terms of the empirical results and messsage conveyed (see third point) - The paper builds on a previous paper (ICCV Workshops, https://arxiv.org/pdf/1707.06923.pdf), however, there is non-trivial overlap between the two papers, e.g. Fig. 1 seems to be almost the same figure from that paper, Sec 2.1 from the previous paper is largely copied - The message from the empirical validation is also not novel, in the ICCVW paper it was shown that the combination of different modalities etc. using a multiple kernel learning framework improved results (73.3 on HMDB51), while in the current paper the same message comes across with another kind of (known) method for combining different classifiers and modality (without iDT their best results are 73.6 for CNN+GP-PoE)",
"role": "assistant"
}
]
| [
" - The paper is fairly written and it is clear what is being done - There is not much novelty in the paper; it combines known techniques and is a systems paper, so I would judge the contributions mainly in terms of the empirical results and messsage conveyed (see third point) - The paper builds on a previous paper (ICCV Workshops, https://arxiv.org/pdf/1707.06923.pdf), however, there is non-trivial overlap between the two papers, e.g.",
"Fig.",
"1 seems to be almost the same figure from that paper, Sec 2.1 from the previous paper is largely copied - The message from the empirical validation is also not novel, in the ICCVW paper it was shown that the combination of different modalities etc.",
"using a multiple kernel learning framework improved results (73.3 on HMDB51), while in the current paper the same message comes across with another kind of (known) method for combining different classifiers and modality (without iDT their best results are 73.6 for CNN+GP-PoE)"
] | {
"criticism": 2,
"example": 2,
"importance_and_relevance": 0,
"materials_and_methods": 3,
"praise": 0,
"presentation_and_reporting": 1,
"results_and_discussion": 2,
"suggestion_and_solution": 0,
"total": 4
} | 0.5 | 0.5 | 0 | 0.75 | 0 | 0.25 | 0.5 | 0 | 4 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 2.5 | 1.221623 | 1.278377 |
Word2net: Deep Representations of Language | Word embeddings extract semantic features of words from large datasets of text. Most embedding methods rely on a log-bilinear model to predict the occurrence of a word in a context of other words. Here we propose word2net, a method that replaces their linear parametrization with neural networks. For each term in the vocabulary, word2net posits a neural network that takes the context as input and outputs a probability of occurrence. Further, word2net can use the hierarchical organization of its word networks to incorporate additional meta-data, such as syntactic features, into the embedding model. For example, we show how to share parameters across word networks to develop an embedding model that includes part-of-speech information. We study word2net with two datasets, a collection of Wikipedia articles and a corpus of U.S. Senate speeches. Quantitatively, we found that word2net outperforms popular embedding methods on predicting held- out words and that sharing parameters based on part of speech further boosts performance. Qualitatively, word2net learns interpretable semantic representations and, compared to vector-based methods, better incorporates syntactic information. | This paper presents another variant on neural language models used to learn word embeddings. In keeping with the formulation of Mikolov et al, the model learned is a set of independent binary classifiers, one per word. As opposed to other work, each classifier is not based on the dot product between an embedding vector and a context vector but instead is a per-word neural network which takes the context as input and produces a score for each term. An interesting consequence of using networks instead of vectors to parametrize the embeddings is that it*s easy to see many ways to let the model use side information such as part-of-speech tags. The paper explores one such way, by sharing parameters across networks of all words which have the same POS tag (effectively having different parameterizations for words which occur with multiple POS tags). The idea is interesting but the evaluation leaves doubts. Here are my main problems: 1. The quantitative likelihood-based evaluation can easily be gamed by making all classifiers output numbers which are close to 1. This is because the model is not normalized, and no attempt at normalization is claimed to be made during the likelihood evaluation. This means it*s likely hyperparameter tuning (of, say, how many negative examples to use per positive example) is likely to bias this evaluation to look more positive than it should. 2. The qualitative similarity-based evaluation notes, correctly, that the standard metric of dot product / cosine between word embeddings does not work in the case of networks, and instead measures similarity by looking at the similarity of the predictions of the networks. Then all networks are ranked by similarity to a query network to make the now-standard similar word lists. While this approach is interesting, the baseline models were evaluated using the plain dot product. It*s unclear whether this new evaluation methodology would have also produced nicer word lists for the baseline methods. In the light that the evaluation has these two issues I do not recommend accepting this paper. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. 
I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nWord2net: Deep Representations of Language\nWord embeddings extract semantic features of words from large datasets of text. Most embedding methods rely on a log-bilinear model to predict the occurrence of a word in a context of other words. Here we propose word2net, a method that replaces their linear parametrization with neural networks. For each term in the vocabulary, word2net posits a neural network that takes the context as input and outputs a probability of occurrence. Further, word2net can use the hierarchical organization of its word networks to incorporate additional meta-data, such as syntactic features, into the embedding model. For example, we show how to share parameters across word networks to develop an embedding model that includes part-of-speech information. We study word2net with two datasets, a collection of Wikipedia articles and a corpus of U.S. Senate speeches. Quantitatively, we found that word2net outperforms popular embedding methods on predicting held- out words and that sharing parameters based on part of speech further boosts performance. Qualitatively, word2net learns interpretable semantic representations and, compared to vector-based methods, better incorporates syntactic information.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper presents another variant on neural language models used to learn word embeddings. In keeping with the formulation of Mikolov et al, the model learned is a set of independent binary classifiers, one per word. As opposed to other work, each classifier is not based on the dot product between an embedding vector and a context vector but instead is a per-word neural network which takes the context as input and produces a score for each term. An interesting consequence of using networks instead of vectors to parametrize the embeddings is that it*s easy to see many ways to let the model use side information such as part-of-speech tags. The paper explores one such way, by sharing parameters across networks of all words which have the same POS tag (effectively having different parameterizations for words which occur with multiple POS tags). The idea is interesting but the evaluation leaves doubts. Here are my main problems: 1. The quantitative likelihood-based evaluation can easily be gamed by making all classifiers output numbers which are close to 1. This is because the model is not normalized, and no attempt at normalization is claimed to be made during the likelihood evaluation. This means it*s likely hyperparameter tuning (of, say, how many negative examples to use per positive example) is likely to bias this evaluation to look more positive than it should. 2. The qualitative similarity-based evaluation notes, correctly, that the standard metric of dot product / cosine between word embeddings does not work in the case of networks, and instead measures similarity by looking at the similarity of the predictions of the networks. Then all networks are ranked by similarity to a query network to make the now-standard similar word lists. While this approach is interesting, the baseline models were evaluated using the plain dot product. It*s unclear whether this new evaluation methodology would have also produced nicer word lists for the baseline methods. In the light that the evaluation has these two issues I do not recommend accepting this paper.",
"role": "assistant"
}
]
| [
" This paper presents another variant on neural language models used to learn word embeddings.",
"In keeping with the formulation of Mikolov et al, the model learned is a set of independent binary classifiers, one per word.",
"As opposed to other work, each classifier is not based on the dot product between an embedding vector and a context vector but instead is a per-word neural network which takes the context as input and produces a score for each term.",
"An interesting consequence of using networks instead of vectors to parametrize the embeddings is that it*s easy to see many ways to let the model use side information such as part-of-speech tags.",
"The paper explores one such way, by sharing parameters across networks of all words which have the same POS tag (effectively having different parameterizations for words which occur with multiple POS tags).",
"The idea is interesting but the evaluation leaves doubts.",
"Here are my main problems: 1.",
"The quantitative likelihood-based evaluation can easily be gamed by making all classifiers output numbers which are close to 1.",
"This is because the model is not normalized, and no attempt at normalization is claimed to be made during the likelihood evaluation.",
"This means it*s likely hyperparameter tuning (of, say, how many negative examples to use per positive example) is likely to bias this evaluation to look more positive than it should.",
"2.",
"The qualitative similarity-based evaluation notes, correctly, that the standard metric of dot product / cosine between word embeddings does not work in the case of networks, and instead measures similarity by looking at the similarity of the predictions of the networks.",
"Then all networks are ranked by similarity to a query network to make the now-standard similar word lists.",
"While this approach is interesting, the baseline models were evaluated using the plain dot product.",
"It*s unclear whether this new evaluation methodology would have also produced nicer word lists for the baseline methods.",
"In the light that the evaluation has these two issues I do not recommend accepting this paper."
] | {
"criticism": 4,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 14,
"praise": 3,
"presentation_and_reporting": 7,
"results_and_discussion": 5,
"suggestion_and_solution": 1,
"total": 16
} | 0.25 | 0 | 0.125 | 0.875 | 0.1875 | 0.4375 | 0.3125 | 0.0625 | 16 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.25 | 2.107958 | 0.142042 |
Lung Tumor Location and Identification with AlexNet and a Custom CNN | Lung cancer is the leading cause of cancer deaths in the world and early detection is a crucial part of increasing patient survival. Deep learning techniques provide us with a method of automated analysis of patient scans. In this work, we compare AlexNet, a multi-layered and highly flexible architecture, with a custom CNN to determine if lung nodules with patient scans are benign or cancerous. We have found our CNN architecture to be highly accurate (99.79%) and fast while maintaining low False Positive and False Negative rates (< 0.01% and 0.15% respectively). This is important as high false positive rates are a serious issue with lung cancer diagnosis. We have found that AlexNet is not well suited to the problem of nodule identification, though it is a good baseline comparison because of its flexibility. | The authors compare a standard DL machine (AlexNet) with a custom CNN-based solution in the well known tasks of classifying lung tumours into benign or cancerous in the Luna CT scan dataset, concluding that the proposed novel solution performs better. The paper is interesting, but it has a number of issues that prevents it from being accepted for the ICLR conference. First, the scope of the paper, in its present form, is very limited: the idea of comparing the novel solution just with AlexNet is not adding much to the present landscape of methods to tackle this problem. Moreover, although the task is very well known and in the last few year gave rise to a steady flow of solutions and was also the topic of a famous Kaggle competition, no discussion about that can be found in the manuscript. The novel solution is very briefly sketched, and some of the tricks in its architecture are not properly justified: moreover, the performance improvement w.r.t . to AlexNet is hardly supporting the claim. Experimental setup consists of just a single training/test split, thus no confidence intervals on the results can be defined to show the stability of the solution. The whole sections 2.3 and 2.4 include only standard material unnecessary to mention given the target venue, and the references are limited and incomplete. This given, I rate this manuscript as not suitable for ICLR 2018. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. 
Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLung Tumor Location and Identification with AlexNet and a Custom CNN\nLung cancer is the leading cause of cancer deaths in the world and early detection is a crucial part of increasing patient survival. Deep learning techniques provide us with a method of automated analysis of patient scans. In this work, we compare AlexNet, a multi-layered and highly flexible architecture, with a custom CNN to determine if lung nodules with patient scans are benign or cancerous. We have found our CNN architecture to be highly accurate (99.79%) and fast while maintaining low False Positive and False Negative rates (< 0.01% and 0.15% respectively). This is important as high false positive rates are a serious issue with lung cancer diagnosis. We have found that AlexNet is not well suited to the problem of nodule identification, though it is a good baseline comparison because of its flexibility.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors compare a standard DL machine (AlexNet) with a custom CNN-based solution in the well known tasks of classifying lung tumours into benign or cancerous in the Luna CT scan dataset, concluding that the proposed novel solution performs better. The paper is interesting, but it has a number of issues that prevents it from being accepted for the ICLR conference. First, the scope of the paper, in its present form, is very limited: the idea of comparing the novel solution just with AlexNet is not adding much to the present landscape of methods to tackle this problem. Moreover, although the task is very well known and in the last few year gave rise to a steady flow of solutions and was also the topic of a famous Kaggle competition, no discussion about that can be found in the manuscript. The novel solution is very briefly sketched, and some of the tricks in its architecture are not properly justified: moreover, the performance improvement w.r.t . to AlexNet is hardly supporting the claim. Experimental setup consists of just a single training/test split, thus no confidence intervals on the results can be defined to show the stability of the solution. The whole sections 2.3 and 2.4 include only standard material unnecessary to mention given the target venue, and the references are limited and incomplete. This given, I rate this manuscript as not suitable for ICLR 2018.",
"role": "assistant"
}
]
| [
" The authors compare a standard DL machine (AlexNet) with a custom CNN-based solution in the well known tasks of classifying lung tumours into benign or cancerous in the Luna CT scan dataset, concluding that the proposed novel solution performs better.",
"The paper is interesting, but it has a number of issues that prevents it from being accepted for the ICLR conference.",
"First, the scope of the paper, in its present form, is very limited: the idea of comparing the novel solution just with AlexNet is not adding much to the present landscape of methods to tackle this problem.",
"Moreover, although the task is very well known and in the last few year gave rise to a steady flow of solutions and was also the topic of a famous Kaggle competition, no discussion about that can be found in the manuscript.",
"The novel solution is very briefly sketched, and some of the tricks in its architecture are not properly justified: moreover, the performance improvement w.r.t .",
"to AlexNet is hardly supporting the claim.",
"Experimental setup consists of just a single training/test split, thus no confidence intervals on the results can be defined to show the stability of the solution.",
"The whole sections 2.3 and 2.4 include only standard material unnecessary to mention given the target venue, and the references are limited and incomplete.",
"This given, I rate this manuscript as not suitable for ICLR 2018."
] | {
"criticism": 7,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 4,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 2,
"suggestion_and_solution": 1,
"total": 9
} | 0.777778 | 0 | 0.333333 | 0.444444 | 0.222222 | 0.111111 | 0.222222 | 0.111111 | 9 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.222222 | 1.969703 | 0.252519 |
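For reference, the conversations field in each record above follows one fixed two-turn pattern: a user turn that prefixes the title and abstract with the same peer-review instruction, and an assistant turn that wraps the thinking trace in <think> tags before the review text. A minimal sketch of that assumed layout follows (the field may be built differently upstream; this only mirrors what is visible in the dump):

```python
from typing import Dict, List

# Instruction text copied from the user turns above; everything else is illustrative.
PROMPT_PREFIX = (
    "You are a member of the scientific community tasked with peer review. "
    "Review the following paper title and abstract.\n### Title + Abstract\n"
)

def to_conversation(title: str, abstract: str, thinking: str, review: str) -> List[Dict[str, str]]:
    """Rebuild the two-turn structure seen in the records (assumed field layout)."""
    user_turn = {
        "role": "user",
        "content": PROMPT_PREFIX + title + "\n" + abstract,
    }
    assistant_turn = {
        "role": "assistant",
        # The records separate the closing </think> tag from the review with blank lines.
        "content": "<think>" + thinking + "</think>\n\n\n\n " + review,
    }
    return [user_turn, assistant_turn]

# Example with placeholder strings (real records carry full abstracts and reviews):
turns = to_conversation("Some Title", "Some abstract.", "Okay, to do a decent peer review...", "The paper ...")
print(turns[0]["role"], turns[1]["role"])  # user assistant
```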
Compact Encoding of Words for Efficient Character-level Convolutional Neural Networks Text Classification | This paper puts forward a new text to tensor representation that relies on information compression techniques to assign shorter codes to the most frequently used characters. This representation is language-independent with no need of pretraining and produces an encoding with no information loss. It provides an adequate description of the morphology of text, as it is able to represent prefixes, declensions, and inflections with similar vectors and are able to represent even unseen words on the training dataset. Similarly, as it is compact yet sparse, is ideal for speed up training times using tensor processing libraries. As part of this paper, we show that this technique is especially effective when coupled with convolutional neural networks (CNNs) for text classification at character-level. We apply two variants of CNN coupled with it. Experimental results show that it drastically reduces the number of parameters to be optimized, resulting in competitive classification accuracy values in only a fraction of the time spent by one-hot encoding representations, thus enabling training in commodity hardware. | This paper proposes a new character encoding scheme for use with character-convolutional language models. This is a poor quality paper, is unclear in the results (what metric is even reported in Table 6), and has little significance (though this may highlight the opportunity to revisit the encoding scheme for characters). | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nCompact Encoding of Words for Efficient Character-level Convolutional Neural Networks Text Classification\nThis paper puts forward a new text to tensor representation that relies on information compression techniques to assign shorter codes to the most frequently used characters. This representation is language-independent with no need of pretraining and produces an encoding with no information loss. It provides an adequate description of the morphology of text, as it is able to represent prefixes, declensions, and inflections with similar vectors and are able to represent even unseen words on the training dataset. Similarly, as it is compact yet sparse, is ideal for speed up training times using tensor processing libraries. As part of this paper, we show that this technique is especially effective when coupled with convolutional neural networks (CNNs) for text classification at character-level. We apply two variants of CNN coupled with it. Experimental results show that it drastically reduces the number of parameters to be optimized, resulting in competitive classification accuracy values in only a fraction of the time spent by one-hot encoding representations, thus enabling training in commodity hardware.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes a new character encoding scheme for use with character-convolutional language models. This is a poor quality paper, is unclear in the results (what metric is even reported in Table 6), and has little significance (though this may highlight the opportunity to revisit the encoding scheme for characters).",
"role": "assistant"
}
]
| [
" This paper proposes a new character encoding scheme for use with character-convolutional language models.",
"This is a poor quality paper, is unclear in the results (what metric is even reported in Table 6), and has little significance (though this may highlight the opportunity to revisit the encoding scheme for characters)."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 2,
"praise": 0,
"presentation_and_reporting": 2,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 2
} | 0.5 | 0 | 0.5 | 1 | 0 | 1 | 0.5 | 0.5 | 2 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 4 | 2.090326 | 1.909674 |
Revisiting Knowledge Base Embedding as Tensor Decomposition | We study the problem of knowledge base (KB) embedding, which is usually addressed through two frameworks---neural KB embedding and tensor decomposition. In this work, we theoretically analyze the neural embedding framework and subsequently connect it with tensor based embedding. Specifically, we show that in neural KB embedding the two commonly adopted optimization solutions---margin-based and negative sampling losses---are closely related to each other. We also reach the closed-form tensor that is implicitly approximated by popular neural KB approaches, revealing the underlying connection between neural and tensor based KB embedding models. Grounded in the theoretical results, we further present a tensor decomposition based framework KBTD to directly approximate the derived closed form tensor. Under this framework, the neural KB embedding models, such as NTN, TransE, Bilinear, and DISTMULT, are unified into a general tensor optimization architecture. Finally, we conduct experiments on the link prediction task in WordNet and Freebase, empirically demonstrating the effectiveness of the KBTD framework. | The paper proposes a new method to train knowledge base embeddings using a least-squares loss. For this purpose, the paper introduces a reweighting scheme of the entries in the original adjacency tensor. The reweighting is derived from an analysis of the cross-entropy loss. In addition, the paper discusses the connections of the margin and cross-entropy loss and evaluates the proposed method on WN18 and FB15k. The paper tackles an interesting problem, as learning from knowledge bases via embedding methods has become increasingly important for tasks such as question answering. Providing additional insight into current methods can be an important contribution to advance the state-of-the-art. However, I*m concerned about several aspects in the current form of the paper. For instance, the derivation in Section 4 is unclear to me, as eq.4 suddenly introduces a weighted sum over expectations using the degrees of nodes. The derivation also seems to rely on a very specific negative sampling assumption (uniform sampling without checking whether the corrupted triple is a true negative). This sampling method isn*t used consistently across models and also brings its own problems, e.g., see the LCWA discussion in [4] In addition, the semantics that are introduced by the weighting scheme are not clear to me either. Using the proposed method, the probability of edges between high-degree nodes are down-weighted, since the ground-truth labels are divided by the node degrees. Since these weighted labels are then fitted using a least-squares loss, this implies that links between high-degree nodes should be less likely, which seems the opposite of what the scores should look like. With regard to the significance of the contributions: Using a least-squares loss in combination with tensor methods is attractive because it enables ALS algorithms with closed-form updates that can be computed very fast. However, the proposed method still relies on SGD optimization. In this context, it is not clear to me why a tensor framework/least-squares loss would be preferable. Further comments: - The paper seems to equate "tensor method" with using a least squares loss. However, this doesn*t have to be the case. For instance see [1,2] which propose Logistic and Poisson tensor factorizations, respectively. 
- The distinction between tensor factorization and neural methods is unclear. Tensor factorization can be interpreted just as a particular scoring function. For instance, see [5] for a detailed discussion. - The margin based ranking loss has been proposed earlier than in (Collobert et al, 2011). For instance see [3] - p1: corrupted triples are not described entirely correct, typically only one of s or o is corrputed. - Closed-form tensor in Table 1: This should be least-squares loss of f(s,p,o) and log(...)? - p6: Adding the constant to the tensor as proposed in (Levy & Goldberg, 2014) can done while gathering the minibatch and is therefore equivalent to the proposed approach. [1] Nickel et al: Logistic Tensor Factorization for Multi-Relational Data, 2013. [2] Chi et al: "On tensors, sparsity, and nonnegative factorizations", 2012 [3] Collobert et al: A unified architecture for natural language processing, 2008 [4] Dong et al: Knowledge Vault: A Web-Scale Approach to Probabilistic Knowledge Fusion, 2014 [5] Nickel et al: A Review of Relational Machine Learning for Knowledge Graphs, 2016. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRevisiting Knowledge Base Embedding as Tensor Decomposition\nWe study the problem of knowledge base (KB) embedding, which is usually addressed through two frameworks---neural KB embedding and tensor decomposition. In this work, we theoretically analyze the neural embedding framework and subsequently connect it with tensor based embedding. Specifically, we show that in neural KB embedding the two commonly adopted optimization solutions---margin-based and negative sampling losses---are closely related to each other. We also reach the closed-form tensor that is implicitly approximated by popular neural KB approaches, revealing the underlying connection between neural and tensor based KB embedding models. Grounded in the theoretical results, we further present a tensor decomposition based framework KBTD to directly approximate the derived closed form tensor. Under this framework, the neural KB embedding models, such as NTN, TransE, Bilinear, and DISTMULT, are unified into a general tensor optimization architecture. Finally, we conduct experiments on the link prediction task in WordNet and Freebase, empirically demonstrating the effectiveness of the KBTD framework.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper proposes a new method to train knowledge base embeddings using a least-squares loss. For this purpose, the paper introduces a reweighting scheme of the entries in the original adjacency tensor. The reweighting is derived from an analysis of the cross-entropy loss. In addition, the paper discusses the connections of the margin and cross-entropy loss and evaluates the proposed method on WN18 and FB15k. The paper tackles an interesting problem, as learning from knowledge bases via embedding methods has become increasingly important for tasks such as question answering. Providing additional insight into current methods can be an important contribution to advance the state-of-the-art. However, I*m concerned about several aspects in the current form of the paper. For instance, the derivation in Section 4 is unclear to me, as eq.4 suddenly introduces a weighted sum over expectations using the degrees of nodes. The derivation also seems to rely on a very specific negative sampling assumption (uniform sampling without checking whether the corrupted triple is a true negative). This sampling method isn*t used consistently across models and also brings its own problems, e.g., see the LCWA discussion in [4] In addition, the semantics that are introduced by the weighting scheme are not clear to me either. Using the proposed method, the probability of edges between high-degree nodes are down-weighted, since the ground-truth labels are divided by the node degrees. Since these weighted labels are then fitted using a least-squares loss, this implies that links between high-degree nodes should be less likely, which seems the opposite of what the scores should look like. With regard to the significance of the contributions: Using a least-squares loss in combination with tensor methods is attractive because it enables ALS algorithms with closed-form updates that can be computed very fast. However, the proposed method still relies on SGD optimization. In this context, it is not clear to me why a tensor framework/least-squares loss would be preferable. 
Further comments: - The paper seems to equate \"tensor method\" with using a least squares loss. However, this doesn*t have to be the case. For instance see [1,2] which propose Logistic and Poisson tensor factorizations, respectively. - The distinction between tensor factorization and neural methods is unclear. Tensor factorization can be interpreted just as a particular scoring function. For instance, see [5] for a detailed discussion. - The margin based ranking loss has been proposed earlier than in (Collobert et al, 2011). For instance see [3] - p1: corrupted triples are not described entirely correct, typically only one of s or o is corrputed. - Closed-form tensor in Table 1: This should be least-squares loss of f(s,p,o) and log(...)? - p6: Adding the constant to the tensor as proposed in (Levy & Goldberg, 2014) can done while gathering the minibatch and is therefore equivalent to the proposed approach. [1] Nickel et al: Logistic Tensor Factorization for Multi-Relational Data, 2013. [2] Chi et al: \"On tensors, sparsity, and nonnegative factorizations\", 2012 [3] Collobert et al: A unified architecture for natural language processing, 2008 [4] Dong et al: Knowledge Vault: A Web-Scale Approach to Probabilistic Knowledge Fusion, 2014 [5] Nickel et al: A Review of Relational Machine Learning for Knowledge Graphs, 2016.",
"role": "assistant"
}
]
| [
" The paper proposes a new method to train knowledge base embeddings using a least-squares loss.",
"For this purpose, the paper introduces a reweighting scheme of the entries in the original adjacency tensor.",
"The reweighting is derived from an analysis of the cross-entropy loss.",
"In addition, the paper discusses the connections of the margin and cross-entropy loss and evaluates the proposed method on WN18 and FB15k.",
"The paper tackles an interesting problem, as learning from knowledge bases via embedding methods has become increasingly important for tasks such as question answering.",
"Providing additional insight into current methods can be an important contribution to advance the state-of-the-art.",
"However, I*m concerned about several aspects in the current form of the paper.",
"For instance, the derivation in Section 4 is unclear to me, as eq.4 suddenly introduces a weighted sum over expectations using the degrees of nodes.",
"The derivation also seems to rely on a very specific negative sampling assumption (uniform sampling without checking whether the corrupted triple is a true negative).",
"This sampling method isn*t used consistently across models and also brings its own problems, e.g., see the LCWA discussion in [4] In addition, the semantics that are introduced by the weighting scheme are not clear to me either.",
"Using the proposed method, the probability of edges between high-degree nodes are down-weighted, since the ground-truth labels are divided by the node degrees.",
"Since these weighted labels are then fitted using a least-squares loss, this implies that links between high-degree nodes should be less likely, which seems the opposite of what the scores should look like.",
"With regard to the significance of the contributions: Using a least-squares loss in combination with tensor methods is attractive because it enables ALS algorithms with closed-form updates that can be computed very fast.",
"However, the proposed method still relies on SGD optimization.",
"In this context, it is not clear to me why a tensor framework/least-squares loss would be preferable.",
"Further comments: - The paper seems to equate \"tensor method\" with using a least squares loss.",
"However, this doesn*t have to be the case.",
"For instance see [1,2] which propose Logistic and Poisson tensor factorizations, respectively.",
"- The distinction between tensor factorization and neural methods is unclear.",
"Tensor factorization can be interpreted just as a particular scoring function.",
"For instance, see [5] for a detailed discussion.",
"- The margin based ranking loss has been proposed earlier than in (Collobert et al, 2011).",
"For instance see [3] - p1: corrupted triples are not described entirely correct, typically only one of s or o is corrputed.",
"- Closed-form tensor in Table 1: This should be least-squares loss of f(s,p,o) and log(...)?",
"- p6: Adding the constant to the tensor as proposed in (Levy & Goldberg, 2014) can done while gathering the minibatch and is therefore equivalent to the proposed approach.",
"[1] Nickel et al: Logistic Tensor Factorization for Multi-Relational Data, 2013.",
"[2] Chi et al: \"On tensors, sparsity, and nonnegative factorizations\", 2012 [3] Collobert et al: A unified architecture for natural language processing, 2008 [4] Dong et al: Knowledge Vault: A Web-Scale Approach to Probabilistic Knowledge Fusion, 2014 [5] Nickel et al: A Review of Relational Machine Learning for Knowledge Graphs, 2016."
] | {
"criticism": 7,
"example": 6,
"importance_and_relevance": 3,
"materials_and_methods": 19,
"praise": 3,
"presentation_and_reporting": 6,
"results_and_discussion": 9,
"suggestion_and_solution": 5,
"total": 27
} | 0.259259 | 0.222222 | 0.111111 | 0.703704 | 0.111111 | 0.222222 | 0.333333 | 0.185185 | 27 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.148148 | -0.945208 | 3.093356 |
Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor | Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy - that is, succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as either off-policy Q-learning, or on-policy policy gradient methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds. | Quality and clarity: It seems that the authors can do a better job to improve the readability of the paper and its conciseness. The current structure of paper seems a bit suboptimal to me. The first 6.5 page of the paper is used to explain the idea of RL with entropy reward and how it can be extended to the case of parametrized value function and policy and then the whole experimental results is packed in only 1 page. I think the paper could be organized in a more balanced way by providing a more detailed description and analysis of the numerical results, especially given the fact that in my opinion this is the main strength of the paper. Finally some of the claims made in this paper is not really justified. For instance "Prior deep RL methods based on this framework have been formulated as either off-policy Q-learning, or on-policy policy gradient methods" not true, e.g., look at Deepmind AI recent work: https://arxiv.org/abs/1704.04651. Originality and novelty: I think much of the ideas considered in this paper is already explored in previous work as it is acknowledged in the paper. However some of the techniques such as the way the policy is represented and the way the policy gradient formulation is approximated seems to be novel in the context of Deep RL though again these ideas have been explored in the literature of control and RL extensively. Significance: I think the improvement on baseline in control suites is very impressive the problem is that the details of the implementation of algorithm e.g. architecture of neural network size of memory replay, the schedule the target network is implemented is not sufficiently explained in the paper so it is difficult to evaluate these results. Also the paper only compares with the original version of the baseline algorithms. These algorithms are improved since then and the new more efficient algorithms such as distributional policy gradients and NAF have been developed. So it would help to have a better understanding of this result if the paper compares with the state-of-the-art baselines. 
Minor: for some reason different algorithms have ran for different number of steps which is a bit confusing. would be great if this is fixed in the future version. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSoft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor\nModel-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy - that is, succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as either off-policy Q-learning, or on-policy policy gradient methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Quality and clarity: It seems that the authors can do a better job to improve the readability of the paper and its conciseness. The current structure of paper seems a bit suboptimal to me. The first 6.5 page of the paper is used to explain the idea of RL with entropy reward and how it can be extended to the case of parametrized value function and policy and then the whole experimental results is packed in only 1 page. I think the paper could be organized in a more balanced way by providing a more detailed description and analysis of the numerical results, especially given the fact that in my opinion this is the main strength of the paper. Finally some of the claims made in this paper is not really justified. For instance \"Prior deep RL methods based on this framework have been formulated as either off-policy Q-learning, or on-policy policy gradient methods\" not true, e.g., look at Deepmind AI recent work: https://arxiv.org/abs/1704.04651. Originality and novelty: I think much of the ideas considered in this paper is already explored in previous work as it is acknowledged in the paper. However some of the techniques such as the way the policy is represented and the way the policy gradient formulation is approximated seems to be novel in the context of Deep RL though again these ideas have been explored in the literature of control and RL extensively. Significance: I think the improvement on baseline in control suites is very impressive the problem is that the details of the implementation of algorithm e.g. architecture of neural network size of memory replay, the schedule the target network is implemented is not sufficiently explained in the paper so it is difficult to evaluate these results. Also the paper only compares with the original version of the baseline algorithms. These algorithms are improved since then and the new more efficient algorithms such as distributional policy gradients and NAF have been developed. So it would help to have a better understanding of this result if the paper compares with the state-of-the-art baselines. 
Minor: for some reason different algorithms have ran for different number of steps which is a bit confusing. would be great if this is fixed in the future version.",
"role": "assistant"
}
]
| [
" Quality and clarity: It seems that the authors can do a better job to improve the readability of the paper and its conciseness.",
"The current structure of paper seems a bit suboptimal to me.",
"The first 6.5 page of the paper is used to explain the idea of RL with entropy reward and how it can be extended to the case of parametrized value function and policy and then the whole experimental results is packed in only 1 page.",
"I think the paper could be organized in a more balanced way by providing a more detailed description and analysis of the numerical results, especially given the fact that in my opinion this is the main strength of the paper.",
"Finally some of the claims made in this paper is not really justified.",
"For instance \"Prior deep RL methods based on this framework have been formulated as either off-policy Q-learning, or on-policy policy gradient methods\" not true, e.g., look at Deepmind AI recent work: https://arxiv.org/abs/1704.04651.",
"Originality and novelty: I think much of the ideas considered in this paper is already explored in previous work as it is acknowledged in the paper.",
"However some of the techniques such as the way the policy is represented and the way the policy gradient formulation is approximated seems to be novel in the context of Deep RL though again these ideas have been explored in the literature of control and RL extensively.",
"Significance: I think the improvement on baseline in control suites is very impressive the problem is that the details of the implementation of algorithm e.g.",
"architecture of neural network size of memory replay, the schedule the target network is implemented is not sufficiently explained in the paper so it is difficult to evaluate these results.",
"Also the paper only compares with the original version of the baseline algorithms.",
"These algorithms are improved since then and the new more efficient algorithms such as distributional policy gradients and NAF have been developed.",
"So it would help to have a better understanding of this result if the paper compares with the state-of-the-art baselines.",
"Minor: for some reason different algorithms have ran for different number of steps which is a bit confusing.",
"would be great if this is fixed in the future version."
] | {
"criticism": 4,
"example": 1,
"importance_and_relevance": 2,
"materials_and_methods": 9,
"praise": 3,
"presentation_and_reporting": 4,
"results_and_discussion": 5,
"suggestion_and_solution": 4,
"total": 15
} | 0.266667 | 0.066667 | 0.133333 | 0.6 | 0.2 | 0.266667 | 0.333333 | 0.266667 | 15 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.133333 | 2.070204 | 0.06313 |
Realtime query completion via deep language models | Search engine users nowadays heavily depend on query completion and correction to shape their queries. Typically, the completion is done by database lookup which does not understand the context and cannot generalize to prefixes not in the database. In the paper, we propose to use unsupervised deep language models to complete and correct the queries given an arbitrary prefix. We show how to address two main challenges that renders this method practical for large-scale deployment: 1) we propose a method for integrating error correction into the language model completion via a edit-distance potential and a variant of beam search that can exploit these potential functions; and 2) we show how to efficiently perform CPU-based computation to complete the queries, with error correction, in real time (generating top 10 completions within 16 ms). Experiments show that the method substantially increases hit rate over standard approaches, and is capable of handling tail queries. | This paper presents methods for query completion that includes prefix correction, and some engineering details to meet particular latency requirements on a CPU. Regarding the latter methods: what is described in the paper sounds like competent engineering details that those performing such a task for launch in a real service would figure out how to accomplish, and the specific reported details may or may not represent the *right* way to go about this versus other choices that might be made. The final threshold for *successful* speedups feels somewhat arbitrary -- why 16ms in particular? In any case, these methods are useful to document, but derive their value mainly from the fact that they allow the use of the completion/correction methods that are the primary contribution of the paper. While the idea of integrating the spelling error probability into the search for completions is a sound one, the specific details of the model being pursued feel very ad hoc, which diminishes the ultimate impact of these results. Specifically, estimating the log probability to be proportional to the number of edits in the Levenshtein distance is really not the right thing to do at all. Under such an approach, the unedited string receives probability one, which doesn*t leave much additional probability mass for the other candidates -- not to mention that the number of possible misspellings would require some aggressive normalization. Even under the assumption that a normalized edit probability is not particularly critical (an issue that was not raised at all in the paper, let alone assessed), the fact is that the assumptions of independent errors and a single substitution cost are grossly invalid in natural language. For example, the probability p_1 of *pkoe* versus p_2 of *zoze* as likely versions of *poke* (as, say, the prefix of pokemon, as in your example) should be such that p_1 >>> p_2, not equal as they are in your model. Probabilistic models of string distance have been common since Ristad and Yianlios in the late 90s, and there are proper probabilistic models that would work with your same dynamic programming algorithm, as well as improved models with some modest state splitting. And even with very simple assumptions some unsupervised training could be used to yield at least a properly normalized model. It may very well end up that your very simple model does as well as a well estimated model, but that is something to establish in your paper, not assume. 
That such shortcomings are not noted in the paper is troublesome, particularly for a conference like ICLR that is focused on learned models, which this is not. As the primary contribution of the paper is this method for combining correction with completion, this shortcoming in the paper is pretty serious. Some other comments: Your presentation of completion cost versus edit cost separation in section 3.3 is not particularly clear, partly since the methods are discussed prior to this point as extension of (possibly corrected) prefixes. In fact, it seems that your completion model also includes extension of words with end point prior to the end of the prefix -- which doesn*t match your prior notation, or, frankly, the way in which the experimental results are described. The notation that you use is a bit sloppy and not everything is introduced in a clear way. For example, the s_0:m notation is introduced before indicating that s_i would be the symbol in the i_th position (which you use in section 3.3). Also, you claim that s_0 is the empty string, but isn*t it more correct to model this symbol as the beginning of string symbol? If not, what is the difference between s_0:m and s_1:m? If s_0 is start of string, the s_0:m is of length m+1 not length m. You spend too much time on common, well-known information, such as the LSTM equations. (you don*t need them, but also why number if you never refer to them later?) Also the dynamic programming for Levenshtein is foundational, not required to present that algorithm in detail, unless there is something specific that you need to point out there (which your section 3.3 modification really doesn*t require to make that point). Is there a specific use scenario for the prefix splitting, other than for the evaluation of unseen prefixes? This doesn*t strike me as the most effective way to try to assess the seen/unseen distinction, since, as I understand the procedure, you will end up with very common prefixes alongside less common prefixes in your validation set, which doesn*t really correspond to true *unseen* scenarios. I think another way of teasing apart such results would be recommended. You never explicitly mention what your training loss is in section 5.1. Overall, while this is an interesting and important problem, and the engineering details are interesting and reasonably well-motivated, the main contribution of the paper is based on a pretty flawed approach to modeling correction probability, which would limit the ultimate applicability of the methods. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRealtime query completion via deep language models\nSearch engine users nowadays heavily depend on query completion and correction to shape their queries. Typically, the completion is done by database lookup which does not understand the context and cannot generalize to prefixes not in the database. In the paper, we propose to use unsupervised deep language models to complete and correct the queries given an arbitrary prefix. We show how to address two main challenges that renders this method practical for large-scale deployment: 1) we propose a method for integrating error correction into the language model completion via a edit-distance potential and a variant of beam search that can exploit these potential functions; and 2) we show how to efficiently perform CPU-based computation to complete the queries, with error correction, in real time (generating top 10 completions within 16 ms). Experiments show that the method substantially increases hit rate over standard approaches, and is capable of handling tail queries.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper presents methods for query completion that includes prefix correction, and some engineering details to meet particular latency requirements on a CPU. Regarding the latter methods: what is described in the paper sounds like competent engineering details that those performing such a task for launch in a real service would figure out how to accomplish, and the specific reported details may or may not represent the *right* way to go about this versus other choices that might be made. The final threshold for *successful* speedups feels somewhat arbitrary -- why 16ms in particular? In any case, these methods are useful to document, but derive their value mainly from the fact that they allow the use of the completion/correction methods that are the primary contribution of the paper. While the idea of integrating the spelling error probability into the search for completions is a sound one, the specific details of the model being pursued feel very ad hoc, which diminishes the ultimate impact of these results. Specifically, estimating the log probability to be proportional to the number of edits in the Levenshtein distance is really not the right thing to do at all. Under such an approach, the unedited string receives probability one, which doesn*t leave much additional probability mass for the other candidates -- not to mention that the number of possible misspellings would require some aggressive normalization. Even under the assumption that a normalized edit probability is not particularly critical (an issue that was not raised at all in the paper, let alone assessed), the fact is that the assumptions of independent errors and a single substitution cost are grossly invalid in natural language. For example, the probability p_1 of *pkoe* versus p_2 of *zoze* as likely versions of *poke* (as, say, the prefix of pokemon, as in your example) should be such that p_1 >>> p_2, not equal as they are in your model. 
Probabilistic models of string distance have been common since Ristad and Yianlios in the late 90s, and there are proper probabilistic models that would work with your same dynamic programming algorithm, as well as improved models with some modest state splitting. And even with very simple assumptions some unsupervised training could be used to yield at least a properly normalized model. It may very well end up that your very simple model does as well as a well estimated model, but that is something to establish in your paper, not assume. That such shortcomings are not noted in the paper is troublesome, particularly for a conference like ICLR that is focused on learned models, which this is not. As the primary contribution of the paper is this method for combining correction with completion, this shortcoming in the paper is pretty serious. Some other comments: Your presentation of completion cost versus edit cost separation in section 3.3 is not particularly clear, partly since the methods are discussed prior to this point as extension of (possibly corrected) prefixes. In fact, it seems that your completion model also includes extension of words with end point prior to the end of the prefix -- which doesn*t match your prior notation, or, frankly, the way in which the experimental results are described. The notation that you use is a bit sloppy and not everything is introduced in a clear way. For example, the s_0:m notation is introduced before indicating that s_i would be the symbol in the i_th position (which you use in section 3.3). Also, you claim that s_0 is the empty string, but isn*t it more correct to model this symbol as the beginning of string symbol? If not, what is the difference between s_0:m and s_1:m? If s_0 is start of string, the s_0:m is of length m+1 not length m. You spend too much time on common, well-known information, such as the LSTM equations. (you don*t need them, but also why number if you never refer to them later?) Also the dynamic programming for Levenshtein is foundational, not required to present that algorithm in detail, unless there is something specific that you need to point out there (which your section 3.3 modification really doesn*t require to make that point). Is there a specific use scenario for the prefix splitting, other than for the evaluation of unseen prefixes? This doesn*t strike me as the most effective way to try to assess the seen/unseen distinction, since, as I understand the procedure, you will end up with very common prefixes alongside less common prefixes in your validation set, which doesn*t really correspond to true *unseen* scenarios. I think another way of teasing apart such results would be recommended. You never explicitly mention what your training loss is in section 5.1. Overall, while this is an interesting and important problem, and the engineering details are interesting and reasonably well-motivated, the main contribution of the paper is based on a pretty flawed approach to modeling correction probability, which would limit the ultimate applicability of the methods.",
"role": "assistant"
}
]
| [
" This paper presents methods for query completion that includes prefix correction, and some engineering details to meet particular latency requirements on a CPU.",
"Regarding the latter methods: what is described in the paper sounds like competent engineering details that those performing such a task for launch in a real service would figure out how to accomplish, and the specific reported details may or may not represent the *right* way to go about this versus other choices that might be made.",
"The final threshold for *successful* speedups feels somewhat arbitrary -- why 16ms in particular?",
"In any case, these methods are useful to document, but derive their value mainly from the fact that they allow the use of the completion/correction methods that are the primary contribution of the paper.",
"While the idea of integrating the spelling error probability into the search for completions is a sound one, the specific details of the model being pursued feel very ad hoc, which diminishes the ultimate impact of these results.",
"Specifically, estimating the log probability to be proportional to the number of edits in the Levenshtein distance is really not the right thing to do at all.",
"Under such an approach, the unedited string receives probability one, which doesn*t leave much additional probability mass for the other candidates -- not to mention that the number of possible misspellings would require some aggressive normalization.",
"Even under the assumption that a normalized edit probability is not particularly critical (an issue that was not raised at all in the paper, let alone assessed), the fact is that the assumptions of independent errors and a single substitution cost are grossly invalid in natural language.",
"For example, the probability p_1 of *pkoe* versus p_2 of *zoze* as likely versions of *poke* (as, say, the prefix of pokemon, as in your example) should be such that p_1 >>> p_2, not equal as they are in your model.",
"Probabilistic models of string distance have been common since Ristad and Yianlios in the late 90s, and there are proper probabilistic models that would work with your same dynamic programming algorithm, as well as improved models with some modest state splitting.",
"And even with very simple assumptions some unsupervised training could be used to yield at least a properly normalized model.",
"It may very well end up that your very simple model does as well as a well estimated model, but that is something to establish in your paper, not assume.",
"That such shortcomings are not noted in the paper is troublesome, particularly for a conference like ICLR that is focused on learned models, which this is not.",
"As the primary contribution of the paper is this method for combining correction with completion, this shortcoming in the paper is pretty serious.",
"Some other comments: Your presentation of completion cost versus edit cost separation in section 3.3 is not particularly clear, partly since the methods are discussed prior to this point as extension of (possibly corrected) prefixes.",
"In fact, it seems that your completion model also includes extension of words with end point prior to the end of the prefix -- which doesn*t match your prior notation, or, frankly, the way in which the experimental results are described.",
"The notation that you use is a bit sloppy and not everything is introduced in a clear way.",
"For example, the s_0:m notation is introduced before indicating that s_i would be the symbol in the i_th position (which you use in section 3.3).",
"Also, you claim that s_0 is the empty string, but isn*t it more correct to model this symbol as the beginning of string symbol?",
"If not, what is the difference between s_0:m and s_1:m?",
"If s_0 is start of string, the s_0:m is of length m+1 not length m. You spend too much time on common, well-known information, such as the LSTM equations.",
"(you don*t need them, but also why number if you never refer to them later?)",
"Also the dynamic programming for Levenshtein is foundational, not required to present that algorithm in detail, unless there is something specific that you need to point out there (which your section 3.3 modification really doesn*t require to make that point).",
"Is there a specific use scenario for the prefix splitting, other than for the evaluation of unseen prefixes?",
"This doesn*t strike me as the most effective way to try to assess the seen/unseen distinction, since, as I understand the procedure, you will end up with very common prefixes alongside less common prefixes in your validation set, which doesn*t really correspond to true *unseen* scenarios.",
"I think another way of teasing apart such results would be recommended.",
"You never explicitly mention what your training loss is in section 5.1.",
"Overall, while this is an interesting and important problem, and the engineering details are interesting and reasonably well-motivated, the main contribution of the paper is based on a pretty flawed approach to modeling correction probability, which would limit the ultimate applicability of the methods."
] | {
"criticism": 13,
"example": 2,
"importance_and_relevance": 4,
"materials_and_methods": 20,
"praise": 2,
"presentation_and_reporting": 14,
"results_and_discussion": 10,
"suggestion_and_solution": 4,
"total": 28
} | 0.464286 | 0.071429 | 0.142857 | 0.714286 | 0.071429 | 0.5 | 0.357143 | 0.142857 | 28 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.464286 | -1.086761 | 3.551047 |
Discovery of Predictive Representations With a Network of General Value Functions | The ability of an agent to {em discover} its own learning objectives has long been considered a key ingredient for artificial general intelligence. Breakthroughs in autonomous decision making and reinforcement learning have primarily been in domains where the agent*s goal is outlined and clear: such as playing a game to win, or driving safely. Several studies have demonstrated that learning extramural sub-tasks and auxiliary predictions can improve (1) single human-specified task learning, (2) transfer of learning, (3) and the agent*s learned representation of the world. In all these examples, the agent was instructed what to learn about. We investigate a framework for discovery: curating a large collection of predictions, which are used to construct the agent*s representation of the world. Specifically, our system maintains a large collection of predictions, continually pruning and replacing predictions. We highlight the importance of considering stability rather than convergence for such a system, and develop an adaptive, regularized algorithm towards that aim. We provide several experiments in computational micro-worlds demonstrating that this simple approach can be effective for discovering useful predictions autonomously. | I have to say that I do not have all the background of this paper, and the paper is not written very clearly. I think the major contribution of the paper is represented in a very vague way. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDiscovery of Predictive Representations With a Network of General Value Functions\nThe ability of an agent to {em discover} its own learning objectives has long been considered a key ingredient for artificial general intelligence. Breakthroughs in autonomous decision making and reinforcement learning have primarily been in domains where the agent*s goal is outlined and clear: such as playing a game to win, or driving safely. Several studies have demonstrated that learning extramural sub-tasks and auxiliary predictions can improve (1) single human-specified task learning, (2) transfer of learning, (3) and the agent*s learned representation of the world. In all these examples, the agent was instructed what to learn about. We investigate a framework for discovery: curating a large collection of predictions, which are used to construct the agent*s representation of the world. Specifically, our system maintains a large collection of predictions, continually pruning and replacing predictions. We highlight the importance of considering stability rather than convergence for such a system, and develop an adaptive, regularized algorithm towards that aim. We provide several experiments in computational micro-worlds demonstrating that this simple approach can be effective for discovering useful predictions autonomously.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n I have to say that I do not have all the background of this paper, and the paper is not written very clearly. I think the major contribution of the paper is represented in a very vague way.",
"role": "assistant"
}
]
| [
" I have to say that I do not have all the background of this paper, and the paper is not written very clearly.",
"I think the major contribution of the paper is represented in a very vague way."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 0,
"praise": 0,
"presentation_and_reporting": 2,
"results_and_discussion": 0,
"suggestion_and_solution": 0,
"total": 2
} | 1 | 0 | 0.5 | 0 | 0 | 1 | 0 | 0 | 2 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2.5 | 0.590326 | 1.909674 |
On the Use of Word Embeddings Alone to Represent Natural Language Sequences | To construct representations for natural language sequences, information from two main sources needs to be captured: (i) semantic meaning of individual words, and (ii) their compositionality. These two types of information are usually represented in the form of word embeddings and compositional functions, respectively. For the latter, Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) have been considered. There has not been a rigorous evaluation regarding the relative importance of each component to different text-representation-based tasks; i.e., how important is the modeling capacity of word embeddings alone, relative to the added value of a compositional function? In this paper, we conduct an extensive comparative study between Simple Word Embeddings-based Models (SWEMs), with no compositional parameters, relative to employing word embeddings within RNN/CNN-based models. Surprisingly, SWEMs exhibit comparable or even superior performance in the majority of cases considered. Moreover, in a new SWEM setup, we propose to employ a max-pooling operation over the learned word-embedding matrix of a given sentence. This approach is demonstrated to extract complementary features relative to the averaging operation standard to SWEMs, while endowing our model with better interpretability. To further validate our observations, we examine the information utilized by different models to make predictions, revealing interesting properties of word embeddings. | This paper extensively compares simple word embedding based models (SWEMs) to RNN/CNN based-models on a suite of NLP tasks. Experiments on document classification, sentence classification, and natural language sequence matching show that SWEMs perform competitively or even better in the majority of cases. The authors also propose to use max pooling to complement average pooling for combining information from word embeddings in a SWEM model to improve interpretability. While there is not much contribution in terms of technical novelty, I think this is an interesting paper that sheds new lights on limitations of existing methods for learning sentence and document representations. The paper is well written and the experiments are quite convincing. - An interesting finding is that word embeddings are better for longer documents, whereas RNN/CNN models are better for shorter text. Do the authors have any sense on whether this is because of the difficulty in training an RNN/CNN model for long documents or whether compositions are not necessary since there are multiple predictive independent cues in a long text? - It would be useful to include a linear classification model that takes the word embeddings as an input in the comparison (SWEM-learned). - How crucial is it to retrain the word embeddings on the task of interest (from GloVe initialization) to obtain good performance? | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? 
Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOn the Use of Word Embeddings Alone to Represent Natural Language Sequences\nTo construct representations for natural language sequences, information from two main sources needs to be captured: (i) semantic meaning of individual words, and (ii) their compositionality. These two types of information are usually represented in the form of word embeddings and compositional functions, respectively. For the latter, Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) have been considered. There has not been a rigorous evaluation regarding the relative importance of each component to different text-representation-based tasks; i.e., how important is the modeling capacity of word embeddings alone, relative to the added value of a compositional function? In this paper, we conduct an extensive comparative study between Simple Word Embeddings-based Models (SWEMs), with no compositional parameters, relative to employing word embeddings within RNN/CNN-based models. Surprisingly, SWEMs exhibit comparable or even superior performance in the majority of cases considered. Moreover, in a new SWEM setup, we propose to employ a max-pooling operation over the learned word-embedding matrix of a given sentence. This approach is demonstrated to extract complementary features relative to the averaging operation standard to SWEMs, while endowing our model with better interpretability. To further validate our observations, we examine the information utilized by different models to make predictions, revealing interesting properties of word embeddings.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper extensively compares simple word embedding based models (SWEMs) to RNN/CNN based-models on a suite of NLP tasks. Experiments on document classification, sentence classification, and natural language sequence matching show that SWEMs perform competitively or even better in the majority of cases. The authors also propose to use max pooling to complement average pooling for combining information from word embeddings in a SWEM model to improve interpretability. While there is not much contribution in terms of technical novelty, I think this is an interesting paper that sheds new lights on limitations of existing methods for learning sentence and document representations. The paper is well written and the experiments are quite convincing. - An interesting finding is that word embeddings are better for longer documents, whereas RNN/CNN models are better for shorter text. Do the authors have any sense on whether this is because of the difficulty in training an RNN/CNN model for long documents or whether compositions are not necessary since there are multiple predictive independent cues in a long text? - It would be useful to include a linear classification model that takes the word embeddings as an input in the comparison (SWEM-learned). - How crucial is it to retrain the word embeddings on the task of interest (from GloVe initialization) to obtain good performance?",
"role": "assistant"
}
]
| [
" This paper extensively compares simple word embedding based models (SWEMs) to RNN/CNN based-models on a suite of NLP tasks.",
"Experiments on document classification, sentence classification, and natural language sequence matching show that SWEMs perform competitively or even better in the majority of cases.",
"The authors also propose to use max pooling to complement average pooling for combining information from word embeddings in a SWEM model to improve interpretability.",
"While there is not much contribution in terms of technical novelty, I think this is an interesting paper that sheds new lights on limitations of existing methods for learning sentence and document representations.",
"The paper is well written and the experiments are quite convincing.",
"- An interesting finding is that word embeddings are better for longer documents, whereas RNN/CNN models are better for shorter text.",
"Do the authors have any sense on whether this is because of the difficulty in training an RNN/CNN model for long documents or whether compositions are not necessary since there are multiple predictive independent cues in a long text?",
"- It would be useful to include a linear classification model that takes the word embeddings as an input in the comparison (SWEM-learned).",
"- How crucial is it to retrain the word embeddings on the task of interest (from GloVe initialization) to obtain good performance?"
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 8,
"praise": 3,
"presentation_and_reporting": 7,
"results_and_discussion": 1,
"suggestion_and_solution": 3,
"total": 9
} | 0 | 0 | 0.111111 | 0.888889 | 0.333333 | 0.777778 | 0.111111 | 0.333333 | 9 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.555556 | 2.303037 | 0.252519 |
Variance-based Gradient Compression for Efficient Distributed Deep Learning | Due to the substantial computational cost, training state-of-the-art deep neural networks for large-scale datasets often requires distributed training using multiple computation workers. However, by nature, workers need to frequently communicate gradients, causing severe bottlenecks, especially on lower bandwidth connections. A few methods have been proposed to compress gradient for efficient communication, but they either suffer a low compression ratio or significantly harm the resulting model accuracy, particularly when applied to convolutional neural networks. To address these issues, we propose a method to reduce the communication overhead of distributed deep learning. Our key observation is that gradient updates can be delayed until an unambiguous (high amplitude, low variance) gradient has been calculated. We also present an efficient algorithm to compute the variance and prove that it can be obtained with negligible additional cost. We experimentally show that our method can achieve very high compression ratio while maintaining the result model accuracy. We also analyze the efficiency using computation and communication cost models and provide the evidence that this method enables distributed deep learning for many scenarios with commodity environments. | The authors propose a new gradient compression method for efficient distributed training of neural networks. The authors propose a novel way of measuring ambiguity based on the variance of the gradients. In the experiment, the proposed method shows no or slight degradation of accuracy with big savings in communication cost. The proposed method can easily be combined with other existing method, i.e., Storm (2015), based on the absolute value of the gradient and shows further efficiency. The paper is well written: clear and easy to understand. The proposed method is simple yet powerful. Particularly, I found it interesting to re-evaluate the variance with (virtually) increasing larger batch size. The performance shown in the experiments is also impressive. I found it would have also been interesting and helpful to define and show a new metric that incorporates both accuracy and compression rate into a single metric, e.g., how much accuracy is lost (or gained) per compression rate relatively to the baseline of no compression. With this metric, the comparison would be easier and more intuitive. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. 
I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nVariance-based Gradient Compression for Efficient Distributed Deep Learning\nDue to the substantial computational cost, training state-of-the-art deep neural networks for large-scale datasets often requires distributed training using multiple computation workers. However, by nature, workers need to frequently communicate gradients, causing severe bottlenecks, especially on lower bandwidth connections. A few methods have been proposed to compress gradient for efficient communication, but they either suffer a low compression ratio or significantly harm the resulting model accuracy, particularly when applied to convolutional neural networks. To address these issues, we propose a method to reduce the communication overhead of distributed deep learning. Our key observation is that gradient updates can be delayed until an unambiguous (high amplitude, low variance) gradient has been calculated. We also present an efficient algorithm to compute the variance and prove that it can be obtained with negligible additional cost. We experimentally show that our method can achieve very high compression ratio while maintaining the result model accuracy. We also analyze the efficiency using computation and communication cost models and provide the evidence that this method enables distributed deep learning for many scenarios with commodity environments.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors propose a new gradient compression method for efficient distributed training of neural networks. The authors propose a novel way of measuring ambiguity based on the variance of the gradients. In the experiment, the proposed method shows no or slight degradation of accuracy with big savings in communication cost. The proposed method can easily be combined with other existing method, i.e., Storm (2015), based on the absolute value of the gradient and shows further efficiency. The paper is well written: clear and easy to understand. The proposed method is simple yet powerful. Particularly, I found it interesting to re-evaluate the variance with (virtually) increasing larger batch size. The performance shown in the experiments is also impressive. I found it would have also been interesting and helpful to define and show a new metric that incorporates both accuracy and compression rate into a single metric, e.g., how much accuracy is lost (or gained) per compression rate relatively to the baseline of no compression. With this metric, the comparison would be easier and more intuitive.",
"role": "assistant"
}
]
| [
" The authors propose a new gradient compression method for efficient distributed training of neural networks.",
"The authors propose a novel way of measuring ambiguity based on the variance of the gradients.",
"In the experiment, the proposed method shows no or slight degradation of accuracy with big savings in communication cost.",
"The proposed method can easily be combined with other existing method, i.e., Storm (2015), based on the absolute value of the gradient and shows further efficiency.",
"The paper is well written: clear and easy to understand.",
"The proposed method is simple yet powerful.",
"Particularly, I found it interesting to re-evaluate the variance with (virtually) increasing larger batch size.",
"The performance shown in the experiments is also impressive.",
"I found it would have also been interesting and helpful to define and show a new metric that incorporates both accuracy and compression rate into a single metric, e.g., how much accuracy is lost (or gained) per compression rate relatively to the baseline of no compression.",
"With this metric, the comparison would be easier and more intuitive."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 9,
"praise": 4,
"presentation_and_reporting": 2,
"results_and_discussion": 4,
"suggestion_and_solution": 4,
"total": 10
} | 0 | 0 | 0.1 | 0.9 | 0.4 | 0.2 | 0.4 | 0.4 | 10 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.4 | 2.257958 | 0.142042 |
To Prune, or Not to Prune: Exploring the Efficacy of Pruning for Model Compression | Model pruning seeks to induce sparsity in a deep neural network*s various connection matrices, thereby reducing the number of nonzero-valued parameters in the model. Recent reports (Han et al., 2015; Narang et al., 2017) prune deep networks at the cost of only a marginal loss in accuracy and achieve a sizable reduction in model size. This hints at the possibility that the baseline models in these experiments are perhaps severely over-parameterized at the outset and a viable alternative for model compression might be to simply reduce the number of hidden units while maintaining the model*s dense connection structure, exposing a similar trade-off in model size and accuracy. We investigate these two distinct paths for model compression within the context of energy-efficient inference in resource-constrained environments and propose a new gradual pruning technique that is simple and straightforward to apply across a variety of models/datasets with minimal tuning and can be seamlessly incorporated within the training process. We compare the accuracy of large, but pruned models (large-sparse) and their smaller, but dense (small-dense) counterparts with identical memory footprint. Across a broad range of neural network architectures (deep CNNs, stacked LSTM, and seq2seq LSTM models), we find large-sparse models to consistently outperform small-dense models and achieve up to 10x reduction in number of non-zero parameters with minimal loss in accuracy. | Summary: This paper presents a thorough examination of the effects of pruning on model performance. Importantly, they compare the performance of "large-sparse" models (large models that underwent pruning in order to reduce memory footprint of model) and "small-dense" models, showing that "large-sparse" models typically perform better than the "small-dense" models of comparable size (in terms of number of non-zero parameters, and/or memory footprint). They present results across a number of domains (computer vision, language modelling, and neural machine translation) and model types (CNNs, LSTMs). They also propose a way of performing pruning with a pre-defined sparsity schedule, simplifying the pruning process in a way which works across domains. They are able to show convincingly that pruning is an effective way of trading off accuracy for model size (more effective than simply reducing the size of model architecture), although there does come a point where too much sparsity degrades the model performance considerably; this suggests that pruning a medium size model to 80%-90% sparsity is likely better than pruning a larger model to >= 95% sparsity. Review: Quality: The quality of the work is high --- the experiments are extensive and thorough. I would have liked to see "small-dense" vs. "large-sparse" comparisons on Inception (only large-sparse results are reported). Clarity: The paper is clearly written, though there is room for improvement. For example, many of the results are presented in a redundant manner (in both tables and figures, where the table and figure are often not next to each other in the document). Also, it is not clear in several cases exactly which training/heldout/test sets are used, and on which partition of the data the accuracies/BLEU scores/perplexities presented correspond to. A small section (before "Methods") describing the datasets/features in detail would be helpful. 
Also, it would have probably been nice to explain all of the tasks and datasets early on, and then present all the results at once (NIT: include the plots in paper, and move the tables to an appendix). Originality: Although the experiments are informative, the work as a whole is not very original. The method proposed of using a sparsity schedule to perform pruning is simple and effective, but is a rather incremental contribution. The primary contribution of this paper is its experiments, which for the most part compare known methods. Significance: The paper makes a nice contribution, though it is not particularly significant or surprising. The primary observations are: (1) large-sparse is typically better than small-dense, for a fixed number of non-zero parameters and/or memory footprint. (2) There is a point at which increasing the sparsity percentage severely degrades the performance of the model, which suggests that there is a "sweet-spot" when it comes to choosing the model architecture and sparsity percentage which give the best performance (for a fixed memory footprint). Result #1 is not very surprising, given that Han et al (2016) were able to show significant compression without loss in accuracy; thus, because one would expect a smaller dense model to perform worse than the large dense model, it would also perform worse than the large sparse model. Result #2 had already been seen in Han et al (2016) (for example, in Figure 6). Pros: - Very thorough experiments across a number of domains Cons: - Methodological contributions are minor. - Results are not surprising, and are in line with previous papers. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTo Prune, or Not to Prune: Exploring the Efficacy of Pruning for Model Compression\nModel pruning seeks to induce sparsity in a deep neural network*s various connection matrices, thereby reducing the number of nonzero-valued parameters in the model. Recent reports (Han et al., 2015; Narang et al., 2017) prune deep networks at the cost of only a marginal loss in accuracy and achieve a sizable reduction in model size. This hints at the possibility that the baseline models in these experiments are perhaps severely over-parameterized at the outset and a viable alternative for model compression might be to simply reduce the number of hidden units while maintaining the model*s dense connection structure, exposing a similar trade-off in model size and accuracy. We investigate these two distinct paths for model compression within the context of energy-efficient inference in resource-constrained environments and propose a new gradual pruning technique that is simple and straightforward to apply across a variety of models/datasets with minimal tuning and can be seamlessly incorporated within the training process. We compare the accuracy of large, but pruned models (large-sparse) and their smaller, but dense (small-dense) counterparts with identical memory footprint. Across a broad range of neural network architectures (deep CNNs, stacked LSTM, and seq2seq LSTM models), we find large-sparse models to consistently outperform small-dense models and achieve up to 10x reduction in number of non-zero parameters with minimal loss in accuracy.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Summary: This paper presents a thorough examination of the effects of pruning on model performance. Importantly, they compare the performance of \"large-sparse\" models (large models that underwent pruning in order to reduce memory footprint of model) and \"small-dense\" models, showing that \"large-sparse\" models typically perform better than the \"small-dense\" models of comparable size (in terms of number of non-zero parameters, and/or memory footprint). They present results across a number of domains (computer vision, language modelling, and neural machine translation) and model types (CNNs, LSTMs). They also propose a way of performing pruning with a pre-defined sparsity schedule, simplifying the pruning process in a way which works across domains. They are able to show convincingly that pruning is an effective way of trading off accuracy for model size (more effective than simply reducing the size of model architecture), although there does come a point where too much sparsity degrades the model performance considerably; this suggests that pruning a medium size model to 80%-90% sparsity is likely better than pruning a larger model to >= 95% sparsity. Review: Quality: The quality of the work is high --- the experiments are extensive and thorough. I would have liked to see \"small-dense\" vs. \"large-sparse\" comparisons on Inception (only large-sparse results are reported). Clarity: The paper is clearly written, though there is room for improvement. For example, many of the results are presented in a redundant manner (in both tables and figures, where the table and figure are often not next to each other in the document). Also, it is not clear in several cases exactly which training/heldout/test sets are used, and on which partition of the data the accuracies/BLEU scores/perplexities presented correspond to. A small section (before \"Methods\") describing the datasets/features in detail would be helpful. Also, it would have probably been nice to explain all of the tasks and datasets early on, and then present all the results at once (NIT: include the plots in paper, and move the tables to an appendix). 
Originality: Although the experiments are informative, the work as a whole is not very original. The method proposed of using a sparsity schedule to perform pruning is simple and effective, but is a rather incremental contribution. The primary contribution of this paper is its experiments, which for the most part compare known methods. Significance: The paper makes a nice contribution, though it is not particularly significant or surprising. The primary observations are: (1) large-sparse is typically better than small-dense, for a fixed number of non-zero parameters and/or memory footprint. (2) There is a point at which increasing the sparsity percentage severely degrades the performance of the model, which suggests that there is a \"sweet-spot\" when it comes to choosing the model architecture and sparsity percentage which give the best performance (for a fixed memory footprint). Result #1 is not very surprising, given that Han et al (2016) were able to show significant compression without loss in accuracy; thus, because one would expect a smaller dense model to perform worse than the large dense model, it would also perform worse than the large sparse model. Result #2 had already been seen in Han et al (2016) (for example, in Figure 6). Pros: - Very thorough experiments across a number of domains Cons: - Methodological contributions are minor. - Results are not surprising, and are in line with previous papers.",
"role": "assistant"
}
]
| [
" Summary: This paper presents a thorough examination of the effects of pruning on model performance.",
"Importantly, they compare the performance of \"large-sparse\" models (large models that underwent pruning in order to reduce memory footprint of model) and \"small-dense\" models, showing that \"large-sparse\" models typically perform better than the \"small-dense\" models of comparable size (in terms of number of non-zero parameters, and/or memory footprint).",
"They present results across a number of domains (computer vision, language modelling, and neural machine translation) and model types (CNNs, LSTMs).",
"They also propose a way of performing pruning with a pre-defined sparsity schedule, simplifying the pruning process in a way which works across domains.",
"They are able to show convincingly that pruning is an effective way of trading off accuracy for model size (more effective than simply reducing the size of model architecture), although there does come a point where too much sparsity degrades the model performance considerably; this suggests that pruning a medium size model to 80%-90% sparsity is likely better than pruning a larger model to >= 95% sparsity.",
"Review: Quality: The quality of the work is high --- the experiments are extensive and thorough.",
"I would have liked to see \"small-dense\" vs. \"large-sparse\" comparisons on Inception (only large-sparse results are reported).",
"Clarity: The paper is clearly written, though there is room for improvement.",
"For example, many of the results are presented in a redundant manner (in both tables and figures, where the table and figure are often not next to each other in the document).",
"Also, it is not clear in several cases exactly which training/heldout/test sets are used, and on which partition of the data the accuracies/BLEU scores/perplexities presented correspond to.",
"A small section (before \"Methods\") describing the datasets/features in detail would be helpful.",
"Also, it would have probably been nice to explain all of the tasks and datasets early on, and then present all the results at once (NIT: include the plots in paper, and move the tables to an appendix).",
"Originality: Although the experiments are informative, the work as a whole is not very original.",
"The method proposed of using a sparsity schedule to perform pruning is simple and effective, but is a rather incremental contribution.",
"The primary contribution of this paper is its experiments, which for the most part compare known methods.",
"Significance: The paper makes a nice contribution, though it is not particularly significant or surprising.",
"The primary observations are: (1) large-sparse is typically better than small-dense, for a fixed number of non-zero parameters and/or memory footprint.",
"(2) There is a point at which increasing the sparsity percentage severely degrades the performance of the model, which suggests that there is a \"sweet-spot\" when it comes to choosing the model architecture and sparsity percentage which give the best performance (for a fixed memory footprint).",
"Result #1 is not very surprising, given that Han et al (2016) were able to show significant compression without loss in accuracy; thus, because one would expect a smaller dense model to perform worse than the large dense model, it would also perform worse than the large sparse model.",
"Result #2 had already been seen in Han et al (2016) (for example, in Figure 6).",
"Pros: - Very thorough experiments across a number of domains Cons: - Methodological contributions are minor.",
"- Results are not surprising, and are in line with previous papers."
] | {
"criticism": 5,
"example": 3,
"importance_and_relevance": 2,
"materials_and_methods": 17,
"praise": 7,
"presentation_and_reporting": 5,
"results_and_discussion": 7,
"suggestion_and_solution": 3,
"total": 22
} | 0.227273 | 0.136364 | 0.090909 | 0.772727 | 0.318182 | 0.227273 | 0.318182 | 0.136364 | 22 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.227273 | 0.948896 | 1.278377 |
Critical Percolation as a Framework to Analyze the Training of Deep Networks | In this paper we approach two relevant deep learning topics: i) tackling of graph structured input data and ii) a better understanding and analysis of deep networks and related learning algorithms. With this in mind we focus on the topological classification of reachability in a particular subset of planar graphs (Mazes). Doing so, we are able to model the topology of data while staying in Euclidean space, thus allowing its processing with standard CNN architectures. We suggest a suitable architecture for this problem and show that it can express a perfect solution to the classification task. The shape of the cost function around this solution is also derived and, remarkably, does not depend on the size of the maze in the large maze limit. Responsible for this behavior are rare events in the dataset which strongly regulate the shape of the cost function near this global minimum. We further identify an obstacle to learning in the form of poorly performing local minima in which the network chooses to ignore some of the inputs. We further support our claims with training experiments and numerical analysis of the cost function on networks with up to layers. | This paper thoroughly analyzes an algorithmic task (determining if two points in a maze are connected, which requires BFS to solve) by constructing an explicit ConvNet solution and analytically deriving properties of the loss surface around this analytical solution. They show that their analytical solution implements a form of BFS algorithm, characterize the probability of introducing "bugs" in the algorithm as the weights move away from the optimal solution, and how this influences the error surface for different depths. This analysis is conducted by drawing on results from the field of critical percolation in physics. Overall, I think this is a good paper and its core contribution is definitely valuable: it provides a novel analysis of an algorithmic task which sheds light on how and when the network fails to learn the algorithm, and in particular the role which initialization plays. The analysis is very thorough and the methods described may find use in analyzing other tasks. In particular, this could be a first step towards better understanding the optimization landscape of memory-augmented neural networks (Memory Networks, Neural Turing Machines, etc) which try to learn reasoning tasks or algorithms. It is well-known that these are sensitive to initialization and often require running the optimizer with multiple random seeds and picking the best one. This work actually explains the role of initialization for learning BFS and how certain types of initialization lead to poor solutions. I am curious if a similar analysis could be applied to methods evaluated on the bAbI question-answering tasks (which can be represented as graphs, like the maze task) and possibly yield better initialization or optimization schemes that would remove the need for multiple random seeds. With that being said, there is some work that needs to be done to make the paper clearer. In particular, many parts are quite technical and may not be accessible to a broader machine learning audience. It would be good if the authors spent more time developing intuition (through visualization for example) and move some of the more technical proofs to the appendix. 
Specifically: - I think Figure 3 in the appendix should be moved to the main text, to help understand the behavior of the analytical solution. - Top of page 5, when you describe the checkerboard BFS: please include a visualization somewhere, it could be in the Appendix. - Section 6: there is lots of math here, but the main results don*t obviously stand out. I would suggest highlighting equations 2 and 4 in some way (for example, proposition/lemma + proof), so that the casual reader can quickly see what the main results are. Interested readers can then work through the math if they want to. Also, some plots/visualizations of the loss surface given in Equations 4 and 5 would be very helpful. Also, although I found their work to be interesting after finishing the paper, I was initially confused by how the authors frame their work and where the paper was heading. They claim their contribution is in the analysis of loss surfaces (true) and neural nets applied to graph-structured inputs. This second part was confusing - although the maze can be viewed as a graph, many other works apply ConvNets to maze environments [1, 2, 3], and their work has little relation to other work on graph CNNs. Here the assumptions of locality and stationarity underlying CNNs are sensible and I don*t think the first paragraph in Section 3 justifying the use of the CNN on the maze environment is necessary. However, I think it would make much more sense to mention how their work relates to other neural network architectures which learn algorithms (such as the Neural Turing Machine and variants) or reasoning tasks more generally (for example, memory-augmented networks applied to the bAbI tasks). There are lots of small typos, please fix them. Here are a few: - "For L=16, batch size of 20, ...": not a complete sentence. - Right before 6.1.1: "when the these such" -> "when such" - Top of page 8: "it also have a" -> "it also has a", "when encountering larger dataset" -> "...datasets" - First sentence of 6.2: "we turn to the discuss a second" -> "we turn to the discussion of a second" - etc. Quality: High Clarity: medium-low Originality: high Significance: medium-high References: [1] https://arxiv.org/pdf/1602.02867.pdf [2] https://arxiv.org/pdf/1612.08810.pdf [3] https://arxiv.org/pdf/1707.03497.pdf | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nCritical Percolation as a Framework to Analyze the Training of Deep Networks\nIn this paper we approach two relevant deep learning topics: i) tackling of graph structured input data and ii) a better understanding and analysis of deep networks and related learning algorithms. With this in mind we focus on the topological classification of reachability in a particular subset of planar graphs (Mazes). Doing so, we are able to model the topology of data while staying in Euclidean space, thus allowing its processing with standard CNN architectures. We suggest a suitable architecture for this problem and show that it can express a perfect solution to the classification task. The shape of the cost function around this solution is also derived and, remarkably, does not depend on the size of the maze in the large maze limit. Responsible for this behavior are rare events in the dataset which strongly regulate the shape of the cost function near this global minimum. We further identify an obstacle to learning in the form of poorly performing local minima in which the network chooses to ignore some of the inputs. We further support our claims with training experiments and numerical analysis of the cost function on networks with up to layers.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper thoroughly analyzes an algorithmic task (determining if two points in a maze are connected, which requires BFS to solve) by constructing an explicit ConvNet solution and analytically deriving properties of the loss surface around this analytical solution. They show that their analytical solution implements a form of BFS algorithm, characterize the probability of introducing \"bugs\" in the algorithm as the weights move away from the optimal solution, and how this influences the error surface for different depths. This analysis is conducted by drawing on results from the field of critical percolation in physics. Overall, I think this is a good paper and its core contribution is definitely valuable: it provides a novel analysis of an algorithmic task which sheds light on how and when the network fails to learn the algorithm, and in particular the role which initialization plays. The analysis is very thorough and the methods described may find use in analyzing other tasks. In particular, this could be a first step towards better understanding the optimization landscape of memory-augmented neural networks (Memory Networks, Neural Turing Machines, etc) which try to learn reasoning tasks or algorithms. It is well-known that these are sensitive to initialization and often require running the optimizer with multiple random seeds and picking the best one. This work actually explains the role of initialization for learning BFS and how certain types of initialization lead to poor solutions. I am curious if a similar analysis could be applied to methods evaluated on the bAbI question-answering tasks (which can be represented as graphs, like the maze task) and possibly yield better initialization or optimization schemes that would remove the need for multiple random seeds. With that being said, there is some work that needs to be done to make the paper clearer. In particular, many parts are quite technical and may not be accessible to a broader machine learning audience. 
It would be good if the authors spent more time developing intuition (through visualization for example) and move some of the more technical proofs to the appendix. Specifically: - I think Figure 3 in the appendix should be moved to the main text, to help understand the behavior of the analytical solution. - Top of page 5, when you describe the checkerboard BFS: please include a visualization somewhere, it could be in the Appendix. - Section 6: there is lots of math here, but the main results don*t obviously stand out. I would suggest highlighting equations 2 and 4 in some way (for example, proposition/lemma + proof), so that the casual reader can quickly see what the main results are. Interested readers can then work through the math if they want to. Also, some plots/visualizations of the loss surface given in Equations 4 and 5 would be very helpful. Also, although I found their work to be interesting after finishing the paper, I was initially confused by how the authors frame their work and where the paper was heading. They claim their contribution is in the analysis of loss surfaces (true) and neural nets applied to graph-structured inputs. This second part was confusing - although the maze can be viewed as a graph, many other works apply ConvNets to maze environments [1, 2, 3], and their work has little relation to other work on graph CNNs. Here the assumptions of locality and stationarity underlying CNNs are sensible and I don*t think the first paragraph in Section 3 justifying the use of the CNN on the maze environment is necessary. However, I think it would make much more sense to mention how their work relates to other neural network architectures which learn algorithms (such as the Neural Turing Machine and variants) or reasoning tasks more generally (for example, memory-augmented networks applied to the bAbI tasks). There are lots of small typos, please fix them. Here are a few: - \"For L=16, batch size of 20, ...\": not a complete sentence. - Right before 6.1.1: \"when the these such\" -> \"when such\" - Top of page 8: \"it also have a\" -> \"it also has a\", \"when encountering larger dataset\" -> \"...datasets\" - First sentence of 6.2: \"we turn to the discuss a second\" -> \"we turn to the discussion of a second\" - etc. Quality: High Clarity: medium-low Originality: high Significance: medium-high References: [1] https://arxiv.org/pdf/1602.02867.pdf [2] https://arxiv.org/pdf/1612.08810.pdf [3] https://arxiv.org/pdf/1707.03497.pdf",
"role": "assistant"
}
]
| [
" This paper thoroughly analyzes an algorithmic task (determining if two points in a maze are connected, which requires BFS to solve) by constructing an explicit ConvNet solution and analytically deriving properties of the loss surface around this analytical solution.",
"They show that their analytical solution implements a form of BFS algorithm, characterize the probability of introducing \"bugs\" in the algorithm as the weights move away from the optimal solution, and how this influences the error surface for different depths.",
"This analysis is conducted by drawing on results from the field of critical percolation in physics.",
"Overall, I think this is a good paper and its core contribution is definitely valuable: it provides a novel analysis of an algorithmic task which sheds light on how and when the network fails to learn the algorithm, and in particular the role which initialization plays.",
"The analysis is very thorough and the methods described may find use in analyzing other tasks.",
"In particular, this could be a first step towards better understanding the optimization landscape of memory-augmented neural networks (Memory Networks, Neural Turing Machines, etc) which try to learn reasoning tasks or algorithms.",
"It is well-known that these are sensitive to initialization and often require running the optimizer with multiple random seeds and picking the best one.",
"This work actually explains the role of initialization for learning BFS and how certain types of initialization lead to poor solutions.",
"I am curious if a similar analysis could be applied to methods evaluated on the bAbI question-answering tasks (which can be represented as graphs, like the maze task) and possibly yield better initialization or optimization schemes that would remove the need for multiple random seeds.",
"With that being said, there is some work that needs to be done to make the paper clearer.",
"In particular, many parts are quite technical and may not be accessible to a broader machine learning audience.",
"It would be good if the authors spent more time developing intuition (through visualization for example) and move some of the more technical proofs to the appendix.",
"Specifically: - I think Figure 3 in the appendix should be moved to the main text, to help understand the behavior of the analytical solution.",
"- Top of page 5, when you describe the checkerboard BFS: please include a visualization somewhere, it could be in the Appendix.",
"- Section 6: there is lots of math here, but the main results don*t obviously stand out.",
"I would suggest highlighting equations 2 and 4 in some way (for example, proposition/lemma + proof), so that the casual reader can quickly see what the main results are.",
"Interested readers can then work through the math if they want to.",
"Also, some plots/visualizations of the loss surface given in Equations 4 and 5 would be very helpful.",
"Also, although I found their work to be interesting after finishing the paper, I was initially confused by how the authors frame their work and where the paper was heading.",
"They claim their contribution is in the analysis of loss surfaces (true) and neural nets applied to graph-structured inputs.",
"This second part was confusing - although the maze can be viewed as a graph, many other works apply ConvNets to maze environments [1, 2, 3], and their work has little relation to other work on graph CNNs.",
"Here the assumptions of locality and stationarity underlying CNNs are sensible and I don*t think the first paragraph in Section 3 justifying the use of the CNN on the maze environment is necessary.",
"However, I think it would make much more sense to mention how their work relates to other neural network architectures which learn algorithms (such as the Neural Turing Machine and variants) or reasoning tasks more generally (for example, memory-augmented networks applied to the bAbI tasks).",
"There are lots of small typos, please fix them.",
"Here are a few: - \"For L=16, batch size of 20, ...\": not a complete sentence.",
"- Right before 6.1.1: \"when the these such\" -> \"when such\" - Top of page 8: \"it also have a\" -> \"it also has a\", \"when encountering larger dataset\" -> \"...datasets\" - First sentence of 6.2: \"we turn to the discuss a second\" -> \"we turn to the discussion of a second\" - etc.",
"Quality: High Clarity: medium-low Originality: high Significance: medium-high References: [1] https://arxiv.org/pdf/1602.02867.pdf [2] https://arxiv.org/pdf/1612.08810.pdf [3] https://arxiv.org/pdf/1707.03497.pdf"
] | {
"criticism": 5,
"example": 4,
"importance_and_relevance": 7,
"materials_and_methods": 11,
"praise": 3,
"presentation_and_reporting": 10,
"results_and_discussion": 7,
"suggestion_and_solution": 11,
"total": 27
} | 0.185185 | 0.148148 | 0.259259 | 0.407407 | 0.111111 | 0.37037 | 0.259259 | 0.407407 | 27 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.148148 | -0.945208 | 3.093356 |
Information Theoretic Co-Training | This paper introduces an information theoretic co-training objective for unsupervised learning. We consider the problem of predicting the future. Rather than predict future sensations (image pixels or sound waves) we predict ``hypotheses** to be confirmed by future sensations. More formally, we assume a population distribution on pairs where we can think of as a past sensation and as a future sensation. We train both a predictor model and a confirmation model where we view as hypotheses (when predicted) or facts (when confirmed). For a population distribution on pairs we focus on the problem of measuring the mutual information between and . By the data processing inequality this mutual information is at least as large as the mutual information between and under the distribution on triples defined by the confirmation model . The information theoretic training objective for and can be viewed as a form of co-training where we want the prediction from to match the confirmation from . We give experiments on applications to learning phonetics on the TIMIT dataset. | This paper presents a novel algorithm for unsupervised learning based on information theoretic co-training. The objective is to find parameters that maximize the mutual information between x and y through hidden variable z without computing the probabilities of x and y directly. This means that this technique allows to find a structure from data in an unsupervised way without modeling x and y explicitly. The proposed method is applied to unsupervised learning of phonemes on TIMIT corpus. The experimental results show that the learnt representations correspond to the labeled phonemes 35.1% on all frames. The idea looks original and the formulation is presented precisely. Comprehensive comparisons to related models are given. The experimental results are not very strong, but the authors claim that they are initial experiments and can be improved with just a little more experimentation. The experiments could be improved to show the effectiveness of the proposed approach better. The experimental results show that it can learn something but it is difficult to know how good the number is. Probably, it is useful to compare the method with some simple baseline even if it is not the objective of the paper to achieve high accuracy. Also, I am not sure if the phoneme discovery is the best task for the experiments. If the purpose is to discover phoneme as accurate as possible, probably, one might take another approach. Probably, it is better to emphasize that this is just to show the ability of the approach. It will be useful to show a list of possible applications. If more experiments can be added with such applications, it will be great. I would recommend to have a figure to explain the procedure of the algorithm, i.e., how to update phi and psi. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. 
Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nInformation Theoretic Co-Training\nThis paper introduces an information theoretic co-training objective for unsupervised learning. We consider the problem of predicting the future. Rather than predict future sensations (image pixels or sound waves) we predict ``hypotheses** to be confirmed by future sensations. More formally, we assume a population distribution on pairs where we can think of as a past sensation and as a future sensation. We train both a predictor model and a confirmation model where we view as hypotheses (when predicted) or facts (when confirmed). For a population distribution on pairs we focus on the problem of measuring the mutual information between and . By the data processing inequality this mutual information is at least as large as the mutual information between and under the distribution on triples defined by the confirmation model . The information theoretic training objective for and can be viewed as a form of co-training where we want the prediction from to match the confirmation from . We give experiments on applications to learning phonetics on the TIMIT dataset.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper presents a novel algorithm for unsupervised learning based on information theoretic co-training. The objective is to find parameters that maximize the mutual information between x and y through hidden variable z without computing the probabilities of x and y directly. This means that this technique allows to find a structure from data in an unsupervised way without modeling x and y explicitly. The proposed method is applied to unsupervised learning of phonemes on TIMIT corpus. The experimental results show that the learnt representations correspond to the labeled phonemes 35.1% on all frames. The idea looks original and the formulation is presented precisely. Comprehensive comparisons to related models are given. The experimental results are not very strong, but the authors claim that they are initial experiments and can be improved with just a little more experimentation. The experiments could be improved to show the effectiveness of the proposed approach better. The experimental results show that it can learn something but it is difficult to know how good the number is. Probably, it is useful to compare the method with some simple baseline even if it is not the objective of the paper to achieve high accuracy. Also, I am not sure if the phoneme discovery is the best task for the experiments. If the purpose is to discover phoneme as accurate as possible, probably, one might take another approach. Probably, it is better to emphasize that this is just to show the ability of the approach. It will be useful to show a list of possible applications. If more experiments can be added with such applications, it will be great. I would recommend to have a figure to explain the procedure of the algorithm, i.e., how to update phi and psi.",
"role": "assistant"
}
]
| [
" This paper presents a novel algorithm for unsupervised learning based on information theoretic co-training.",
"The objective is to find parameters that maximize the mutual information between x and y through hidden variable z without computing the probabilities of x and y directly.",
"This means that this technique allows to find a structure from data in an unsupervised way without modeling x and y explicitly.",
"The proposed method is applied to unsupervised learning of phonemes on TIMIT corpus.",
"The experimental results show that the learnt representations correspond to the labeled phonemes 35.1% on all frames.",
"The idea looks original and the formulation is presented precisely.",
"Comprehensive comparisons to related models are given.",
"The experimental results are not very strong, but the authors claim that they are initial experiments and can be improved with just a little more experimentation.",
"The experiments could be improved to show the effectiveness of the proposed approach better.",
"The experimental results show that it can learn something but it is difficult to know how good the number is.",
"Probably, it is useful to compare the method with some simple baseline even if it is not the objective of the paper to achieve high accuracy.",
"Also, I am not sure if the phoneme discovery is the best task for the experiments.",
"If the purpose is to discover phoneme as accurate as possible, probably, one might take another approach.",
"Probably, it is better to emphasize that this is just to show the ability of the approach.",
"It will be useful to show a list of possible applications.",
"If more experiments can be added with such applications, it will be great.",
"I would recommend to have a figure to explain the procedure of the algorithm, i.e., how to update phi and psi."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 13,
"praise": 2,
"presentation_and_reporting": 7,
"results_and_discussion": 5,
"suggestion_and_solution": 8,
"total": 17
} | 0.117647 | 0 | 0.058824 | 0.764706 | 0.117647 | 0.411765 | 0.294118 | 0.470588 | 17 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.235294 | 1.982775 | 0.252519 |
Withdraw | withdraw | The paper conducts empirical comparisons between well-known architectures for sequence labeling. Specifically, it compares LSTM-CNN-CRF (Ma and Hovy, 2016) and LSTM-CRF (Lample et al., 2016), combined with character-based architectures based on CNN or LSTM, and also different regularization techniques including a new word-level dropout scheme. Quality: Low, minor/incremental contribution due to lack of novelty and lack of significantly useful findings. Clarity: Readable with some typos (e.g., "let the model being aware..."). Experimental results are thoroughly tabulated. Originality: Very low. The finding that LSTMs have advantages over CNNs in modeling characters is not new (see the last paragraph of Section 4.1 of Lample et al.), though it*s good to have empirical verification. One of the only novel components is the word-wise dropout (Table 5), but the improvement is minor and the technique is too thin to be considered substantial novelty. Significance: The reported findings will be useful for practitioners who are specifically interested in incrementally improving sequence labeling performance. Otherwise, there isn*t much depth. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nWithdraw\nwithdraw",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper conducts empirical comparisons between well-known architectures for sequence labeling. Specifically, it compares LSTM-CNN-CRF (Ma and Hovy, 2016) and LSTM-CRF (Lample et al., 2016), combined with character-based architectures based on CNN or LSTM, and also different regularization techniques including a new word-level dropout scheme. Quality: Low, minor/incremental contribution due to lack of novelty and lack of significantly useful findings. Clarity: Readable with some typos (e.g., \"let the model being aware...\"). Experimental results are thoroughly tabulated. Originality: Very low. The finding that LSTMs have advantages over CNNs in modeling characters is not new (see the last paragraph of Section 4.1 of Lample et al.), though it*s good to have empirical verification. One of the only novel components is the word-wise dropout (Table 5), but the improvement is minor and the technique is too thin to be considered substantial novelty. Significance: The reported findings will be useful for practitioners who are specifically interested in incrementally improving sequence labeling performance. Otherwise, there isn*t much depth.",
"role": "assistant"
}
]
| [
" The paper conducts empirical comparisons between well-known architectures for sequence labeling.",
"Specifically, it compares LSTM-CNN-CRF (Ma and Hovy, 2016) and LSTM-CRF (Lample et al., 2016), combined with character-based architectures based on CNN or LSTM, and also different regularization techniques including a new word-level dropout scheme.",
"Quality: Low, minor/incremental contribution due to lack of novelty and lack of significantly useful findings.",
"Clarity: Readable with some typos (e.g., \"let the model being aware...\").",
"Experimental results are thoroughly tabulated.",
"Originality: Very low.",
"The finding that LSTMs have advantages over CNNs in modeling characters is not new (see the last paragraph of Section 4.1 of Lample et al.",
"), though it*s good to have empirical verification.",
"One of the only novel components is the word-wise dropout (Table 5), but the improvement is minor and the technique is too thin to be considered substantial novelty.",
"Significance: The reported findings will be useful for practitioners who are specifically interested in incrementally improving sequence labeling performance.",
"Otherwise, there isn*t much depth."
] | {
"criticism": 5,
"example": 2,
"importance_and_relevance": 3,
"materials_and_methods": 7,
"praise": 2,
"presentation_and_reporting": 5,
"results_and_discussion": 3,
"suggestion_and_solution": 2,
"total": 11
} | 0.454545 | 0.181818 | 0.272727 | 0.636364 | 0.181818 | 0.454545 | 0.272727 | 0.181818 | 11 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.636364 | 2.573234 | 0.06313 |
Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering | Very recently, it comes to be a popular approach for answering open-domain questions by first searching question-related passages, then applying reading comprehension models to extract answers. Existing works usually extract answers from single passages independently, thus not fully make use of the multiple searched passages, especially for the some questions requiring several evidences, which can appear in different passages, to be answered. The above observations raise the problem of evidence aggregation from multiple passages. In this paper, we deal with this problem as answer re-ranking. Specifically, based on the answer candidates generated from the existing state-of-the-art QA model, we propose two different re-ranking methods, strength-based and coverage-based re-rankers, which make use of the aggregated evidences from different passages to help entail the ground-truth answer for the question. Our model achieved state-of-the-arts on three public open-domain QA datasets, Quasar-T, SearchQA and the open-domain version of TriviaQA, with about 8\% improvement on the former two datasets. | The paper is clear, although there are many English mistakes (that should be corrected). The proposed method aggregates answers from multiple passages in the context of QA. The new method is motivated well and departs from prior work. Experiments on three datasets show the proposed method to be notably better than several baselines (although two of the baselines, GA and BiDAF, appear tremendously weak). The analysis of the results is interesting and largely convincing, although a more dedicated error analysis or discussion of the limitation of the proposed approach would be welcome. Minor point: in the description of Quasar-T, the IR model is described as lucene index. An index is not an IR model. Lucene is an IR system that implements various IR models. The terminology should be corrected here. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEvidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering\nVery recently, it comes to be a popular approach for answering open-domain questions by first searching question-related passages, then applying reading comprehension models to extract answers. Existing works usually extract answers from single passages independently, thus not fully make use of the multiple searched passages, especially for the some questions requiring several evidences, which can appear in different passages, to be answered. The above observations raise the problem of evidence aggregation from multiple passages. In this paper, we deal with this problem as answer re-ranking. Specifically, based on the answer candidates generated from the existing state-of-the-art QA model, we propose two different re-ranking methods, strength-based and coverage-based re-rankers, which make use of the aggregated evidences from different passages to help entail the ground-truth answer for the question. Our model achieved state-of-the-arts on three public open-domain QA datasets, Quasar-T, SearchQA and the open-domain version of TriviaQA, with about 8\\% improvement on the former two datasets.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper is clear, although there are many English mistakes (that should be corrected). The proposed method aggregates answers from multiple passages in the context of QA. The new method is motivated well and departs from prior work. Experiments on three datasets show the proposed method to be notably better than several baselines (although two of the baselines, GA and BiDAF, appear tremendously weak). The analysis of the results is interesting and largely convincing, although a more dedicated error analysis or discussion of the limitation of the proposed approach would be welcome. Minor point: in the description of Quasar-T, the IR model is described as lucene index. An index is not an IR model. Lucene is an IR system that implements various IR models. The terminology should be corrected here.",
"role": "assistant"
}
]
| [
" The paper is clear, although there are many English mistakes (that should be corrected).",
"The proposed method aggregates answers from multiple passages in the context of QA.",
"The new method is motivated well and departs from prior work.",
"Experiments on three datasets show the proposed method to be notably better than several baselines (although two of the baselines, GA and BiDAF, appear tremendously weak).",
"The analysis of the results is interesting and largely convincing, although a more dedicated error analysis or discussion of the limitation of the proposed approach would be welcome.",
"Minor point: in the description of Quasar-T, the IR model is described as lucene index.",
"An index is not an IR model.",
"Lucene is an IR system that implements various IR models.",
"The terminology should be corrected here."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 7,
"praise": 3,
"presentation_and_reporting": 2,
"results_and_discussion": 1,
"suggestion_and_solution": 3,
"total": 9
} | 0.333333 | 0 | 0.222222 | 0.777778 | 0.333333 | 0.222222 | 0.111111 | 0.333333 | 9 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.333333 | 2.080814 | 0.252519 |
Action-dependent Control Variates for Policy Optimization via Stein Identity | Policy gradient methods have achieved remarkable successes in solving challenging reinforcement learning problems. However, it still often suffers from the large variance issue on policy gradient estimation, which leads to poor sample efficiency during training. In this work, we propose a control variate method to effectively reduce variance for policy gradient methods. Motivated by the Stein’s identity, our method extends the previous control variate methods used in REINFORCE and advantage actor-critic by introducing more flexible and general action-dependent baseline functions. Empirical studies show that our method essentially improves the sample efficiency of the state-of-the-art policy gradient approaches. | The paper proposes action-dependent baselines for reducing variance in policy gradient, through the derivation based on Stein’s identity and control functionals. The method relates closely to prior work on action-dependent baselines, but explores in particular on-policy fitting and a few other design choices that empirically improve the performance. A criticism of the paper is that it does not require Stein’s identity/control functionals literature to derive Eq. 8, since it can be derived similarly to linear control variate and it has also previously been discussed in IPG [Gu et. al., 2017] as reparameterizable control variate. The derivation through Stein’s identity does not seem to provide additional insights/algorithm designs beyond direct derivation through reparameterization trick. The empirical results appear promising, and in particular in comparison with Q-Prop, which fits Q-function using off-policy TD learning. However, the discussion on the causes of the difference should be elaborated much more, as it appears there are substantial differences besides on-policy/off-policy fitting of the Q, such as: -FitLinear fits linear Q (through parameterization based on linearization of Q) using on-policy learning, rather than fitting nonlinear Q and then at application time linearize around the mean action. A closer comparison would be to use same locally linear Q function for off-policy learning in Q-Prop. -The use of on-policy fitted value baseline within Q-function parameterization during on-policy fitting is nice. Similar comparison should be done with off-policy fitting in Q-Prop. I wonder if on-policy fitting of Q can be elaborated more. Specifically, on-policy fitting of V seems to require a few design details to have best performance [GAE, Schulman et. al., 2016]: fitting on previous batch instead of current batch to avoid overfitting (this is expected for your method as well, since by fitting to current batch the control variate then depends nontrivially on samples that are being applied), and possible use of trust-region regularization to prevent V from changing too much across iterations. The paper presents promising results with direct on-policy fitting of action-dependent baseline, which is promising since it does not require long training iterations as in off-policy fitting in Q-Prop. As discussed above, it is encouraged to elaborate other potential causes that led to performance differences. The experimental results are presented well for a range of Mujoco tasks. 
Pros: -Simple, effective method that appears readily available to be incorporated to any on-policy PG methods without significantly increase in computational time -Good empirical evaluation Cons: -The name Stein control variate seems misleading since the algorithm/method does not rely on derivation through Stein’s identity etc. and does not inherit novel insights due to this derivation. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAction-dependent Control Variates for Policy Optimization via Stein Identity\nPolicy gradient methods have achieved remarkable successes in solving challenging reinforcement learning problems. However, it still often suffers from the large variance issue on policy gradient estimation, which leads to poor sample efficiency during training. In this work, we propose a control variate method to effectively reduce variance for policy gradient methods. Motivated by the Stein’s identity, our method extends the previous control variate methods used in REINFORCE and advantage actor-critic by introducing more flexible and general action-dependent baseline functions. Empirical studies show that our method essentially improves the sample efficiency of the state-of-the-art policy gradient approaches.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper proposes action-dependent baselines for reducing variance in policy gradient, through the derivation based on Stein’s identity and control functionals. The method relates closely to prior work on action-dependent baselines, but explores in particular on-policy fitting and a few other design choices that empirically improve the performance. A criticism of the paper is that it does not require Stein’s identity/control functionals literature to derive Eq. 8, since it can be derived similarly to linear control variate and it has also previously been discussed in IPG [Gu et. al., 2017] as reparameterizable control variate. The derivation through Stein’s identity does not seem to provide additional insights/algorithm designs beyond direct derivation through reparameterization trick. The empirical results appear promising, and in particular in comparison with Q-Prop, which fits Q-function using off-policy TD learning. However, the discussion on the causes of the difference should be elaborated much more, as it appears there are substantial differences besides on-policy/off-policy fitting of the Q, such as: -FitLinear fits linear Q (through parameterization based on linearization of Q) using on-policy learning, rather than fitting nonlinear Q and then at application time linearize around the mean action. A closer comparison would be to use same locally linear Q function for off-policy learning in Q-Prop. -The use of on-policy fitted value baseline within Q-function parameterization during on-policy fitting is nice. Similar comparison should be done with off-policy fitting in Q-Prop. I wonder if on-policy fitting of Q can be elaborated more. Specifically, on-policy fitting of V seems to require a few design details to have best performance [GAE, Schulman et. al., 2016]: fitting on previous batch instead of current batch to avoid overfitting (this is expected for your method as well, since by fitting to current batch the control variate then depends nontrivially on samples that are being applied), and possible use of trust-region regularization to prevent V from changing too much across iterations. 
The paper presents promising results with direct on-policy fitting of action-dependent baseline, which is promising since it does not require long training iterations as in off-policy fitting in Q-Prop. As discussed above, it is encouraged to elaborate other potential causes that led to performance differences. The experimental results are presented well for a range of Mujoco tasks. Pros: -Simple, effective method that appears readily available to be incorporated to any on-policy PG methods without significantly increase in computational time -Good empirical evaluation Cons: -The name Stein control variate seems misleading since the algorithm/method does not rely on derivation through Stein’s identity etc. and does not inherit novel insights due to this derivation.",
"role": "assistant"
}
]
| [
" The paper proposes action-dependent baselines for reducing variance in policy gradient, through the derivation based on Stein’s identity and control functionals.",
"The method relates closely to prior work on action-dependent baselines, but explores in particular on-policy fitting and a few other design choices that empirically improve the performance.",
"A criticism of the paper is that it does not require Stein’s identity/control functionals literature to derive Eq.",
"8, since it can be derived similarly to linear control variate and it has also previously been discussed in IPG [Gu et.",
"al., 2017] as reparameterizable control variate.",
"The derivation through Stein’s identity does not seem to provide additional insights/algorithm designs beyond direct derivation through reparameterization trick.",
"The empirical results appear promising, and in particular in comparison with Q-Prop, which fits Q-function using off-policy TD learning.",
"However, the discussion on the causes of the difference should be elaborated much more, as it appears there are substantial differences besides on-policy/off-policy fitting of the Q, such as: -FitLinear fits linear Q (through parameterization based on linearization of Q) using on-policy learning, rather than fitting nonlinear Q and then at application time linearize around the mean action.",
"A closer comparison would be to use same locally linear Q function for off-policy learning in Q-Prop.",
"-The use of on-policy fitted value baseline within Q-function parameterization during on-policy fitting is nice.",
"Similar comparison should be done with off-policy fitting in Q-Prop.",
"I wonder if on-policy fitting of Q can be elaborated more.",
"Specifically, on-policy fitting of V seems to require a few design details to have best performance [GAE, Schulman et.",
"al., 2016]: fitting on previous batch instead of current batch to avoid overfitting (this is expected for your method as well, since by fitting to current batch the control variate then depends nontrivially on samples that are being applied), and possible use of trust-region regularization to prevent V from changing too much across iterations.",
"The paper presents promising results with direct on-policy fitting of action-dependent baseline, which is promising since it does not require long training iterations as in off-policy fitting in Q-Prop.",
"As discussed above, it is encouraged to elaborate other potential causes that led to performance differences.",
"The experimental results are presented well for a range of Mujoco tasks.",
"Pros: -Simple, effective method that appears readily available to be incorporated to any on-policy PG methods without significantly increase in computational time -Good empirical evaluation Cons: -The name Stein control variate seems misleading since the algorithm/method does not rely on derivation through Stein’s identity etc.",
"and does not inherit novel insights due to this derivation."
] | {
"criticism": 4,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 17,
"praise": 4,
"presentation_and_reporting": 3,
"results_and_discussion": 8,
"suggestion_and_solution": 7,
"total": 19
} | 0.210526 | 0 | 0.052632 | 0.894737 | 0.210526 | 0.157895 | 0.421053 | 0.368421 | 19 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.315789 | 1.747622 | 0.568167 |
NerveNet: Learning Structured Policy with Graph Neural Networks | We address the problem of learning structured policies for continuous control. In traditional reinforcement learning, policies of agents are learned by MLPs which take the concatenation of all observations from the environment as input for predicting actions. In this work, we propose NerveNet to explicitly model the structure of an agent, which naturally takes the form of a graph. Specifically, serving as the agent*s policy network, NerveNet first propagates information over the structure of the agent and then predict actions for different parts of the agent. In the experiments, we first show that our NerveNet is comparable to state-of-the-art methods on standard MuJoCo environments. We further propose our customized reinforcement learning environments for benchmarking two types of structure transfer learning tasks, i.e., size and disability transfer. We demonstrate that policies learned by NerveNet are significantly better than policies learned by other models and are able to transfer even in a zero-shot setting. | The authors present an interesting application of Graph Neural Networks to learning policies for controlling "centipede" robots of different lengths. They leverage the non-parametric nature of graph neural networks to show that their approach is capable of transferring policies to different robots more quickly than other approaches. The significance of this work is in its application of GNNs to a potentially practical problem in the robotics domain. The paper suffers from some clarity/presentation issues that will need to be improved. Ultimately, the contribution of this paper is rather specific, yet the authors show the clear advantage of their technique for improved performance and transfer learning on some agent types within this domain. Some comments: - Significant: A brief statement of the paper*s "contributions" is also needed; it is unclear at first glance what portions of the work are the authors* own contributions versus prior work, particularly in the section describing the GNN theory. - Abstract: I take issue with the phrase "are significantly better than policies learned by other models", since this is not universally true. While there is a clear benefit to their technique for the centipede and snake models, the performance on the other agents is mostly comparable, rather than "significantly better"; this should be reflected in the abstract. - Figure 1 is instructive, but another figure is needed to better illustrate the algorithm (including how the state of the world is mapped to the graph state h, how these "message" are passed between nodes, and how the final graph states are used to develop a policy). This would greatly help clarity, particularly for those who have not seen GNNs before, and would make the paper more self-contained and easier to follow. The figure could also include some annotated examples of the input spaces of the different joints, etc. Relatedly, Sec. 2.2.2 is rather difficult to follow because of the lack of a figure or concrete example (an example might help the reader understand the procedure without having to develop an intuition for GNNs). - There is almost certainly a typo in Eq. (4), since it does not contain the aggregated message .bar{m}_u^t. Smaller issues / typos: - Abstract: please spell out spell out multi-layer perceptrons (MLP). - Sec 2.2: "servers" should be "serves" - "performance By" on page 4 is missing a "." 
Pros: - The paper presents an interesting application of GNNs to the space of reinforcement learning and clearly show the benefits of their approach for the specific task of transfer learning. - To the best of my knowledge, the paper presents an original result and presents a good-faith effort to compare to existing, alternative systems (showing that they outperform on the tasks of interest). Cons: - The contributions of the paper should be more clearly stated (see comment above). - The section describing their approach is not "self contained" and is difficult for an unlearned reader to follow. - The problem the authors have chosen to tackle is perhaps a bit "specific", since the performance of their approach is only really shown to exceed the performance on agents, like centipedes or snakes, which have this "modular" quality. I certainly hope the authors improve the quality of the theory section; the poor presentation here brings down the rest of the paper, which is otherwise an easy read. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNerveNet: Learning Structured Policy with Graph Neural Networks\nWe address the problem of learning structured policies for continuous control. In traditional reinforcement learning, policies of agents are learned by MLPs which take the concatenation of all observations from the environment as input for predicting actions. In this work, we propose NerveNet to explicitly model the structure of an agent, which naturally takes the form of a graph. Specifically, serving as the agent*s policy network, NerveNet first propagates information over the structure of the agent and then predict actions for different parts of the agent. In the experiments, we first show that our NerveNet is comparable to state-of-the-art methods on standard MuJoCo environments. We further propose our customized reinforcement learning environments for benchmarking two types of structure transfer learning tasks, i.e., size and disability transfer. We demonstrate that policies learned by NerveNet are significantly better than policies learned by other models and are able to transfer even in a zero-shot setting.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors present an interesting application of Graph Neural Networks to learning policies for controlling \"centipede\" robots of different lengths. They leverage the non-parametric nature of graph neural networks to show that their approach is capable of transferring policies to different robots more quickly than other approaches. The significance of this work is in its application of GNNs to a potentially practical problem in the robotics domain. The paper suffers from some clarity/presentation issues that will need to be improved. Ultimately, the contribution of this paper is rather specific, yet the authors show the clear advantage of their technique for improved performance and transfer learning on some agent types within this domain. Some comments: - Significant: A brief statement of the paper*s \"contributions\" is also needed; it is unclear at first glance what portions of the work are the authors* own contributions versus prior work, particularly in the section describing the GNN theory. - Abstract: I take issue with the phrase \"are significantly better than policies learned by other models\", since this is not universally true. While there is a clear benefit to their technique for the centipede and snake models, the performance on the other agents is mostly comparable, rather than \"significantly better\"; this should be reflected in the abstract. - Figure 1 is instructive, but another figure is needed to better illustrate the algorithm (including how the state of the world is mapped to the graph state h, how these \"message\" are passed between nodes, and how the final graph states are used to develop a policy). This would greatly help clarity, particularly for those who have not seen GNNs before, and would make the paper more self-contained and easier to follow. The figure could also include some annotated examples of the input spaces of the different joints, etc. Relatedly, Sec. 2.2.2 is rather difficult to follow because of the lack of a figure or concrete example (an example might help the reader understand the procedure without having to develop an intuition for GNNs). 
- There is almost certainly a typo in Eq. (4), since it does not contain the aggregated message .bar{m}_u^t. Smaller issues / typos: - Abstract: please spell out spell out multi-layer perceptrons (MLP). - Sec 2.2: \"servers\" should be \"serves\" - \"performance By\" on page 4 is missing a \".\" Pros: - The paper presents an interesting application of GNNs to the space of reinforcement learning and clearly show the benefits of their approach for the specific task of transfer learning. - To the best of my knowledge, the paper presents an original result and presents a good-faith effort to compare to existing, alternative systems (showing that they outperform on the tasks of interest). Cons: - The contributions of the paper should be more clearly stated (see comment above). - The section describing their approach is not \"self contained\" and is difficult for an unlearned reader to follow. - The problem the authors have chosen to tackle is perhaps a bit \"specific\", since the performance of their approach is only really shown to exceed the performance on agents, like centipedes or snakes, which have this \"modular\" quality. I certainly hope the authors improve the quality of the theory section; the poor presentation here brings down the rest of the paper, which is otherwise an easy read.",
"role": "assistant"
}
]
| [
" The authors present an interesting application of Graph Neural Networks to learning policies for controlling \"centipede\" robots of different lengths.",
"They leverage the non-parametric nature of graph neural networks to show that their approach is capable of transferring policies to different robots more quickly than other approaches.",
"The significance of this work is in its application of GNNs to a potentially practical problem in the robotics domain.",
"The paper suffers from some clarity/presentation issues that will need to be improved.",
"Ultimately, the contribution of this paper is rather specific, yet the authors show the clear advantage of their technique for improved performance and transfer learning on some agent types within this domain.",
"Some comments: - Significant: A brief statement of the paper*s \"contributions\" is also needed; it is unclear at first glance what portions of the work are the authors* own contributions versus prior work, particularly in the section describing the GNN theory.",
"- Abstract: I take issue with the phrase \"are significantly better than policies learned by other models\", since this is not universally true.",
"While there is a clear benefit to their technique for the centipede and snake models, the performance on the other agents is mostly comparable, rather than \"significantly better\"; this should be reflected in the abstract.",
"- Figure 1 is instructive, but another figure is needed to better illustrate the algorithm (including how the state of the world is mapped to the graph state h, how these \"message\" are passed between nodes, and how the final graph states are used to develop a policy).",
"This would greatly help clarity, particularly for those who have not seen GNNs before, and would make the paper more self-contained and easier to follow.",
"The figure could also include some annotated examples of the input spaces of the different joints, etc.",
"Relatedly, Sec.",
"2.2.2 is rather difficult to follow because of the lack of a figure or concrete example (an example might help the reader understand the procedure without having to develop an intuition for GNNs).",
"- There is almost certainly a typo in Eq.",
"(4), since it does not contain the aggregated message .bar{m}_u^t.",
"Smaller issues / typos: - Abstract: please spell out spell out multi-layer perceptrons (MLP).",
"- Sec 2.2: \"servers\" should be \"serves\" - \"performance By\" on page 4 is missing a \".\"",
"Pros: - The paper presents an interesting application of GNNs to the space of reinforcement learning and clearly show the benefits of their approach for the specific task of transfer learning.",
"- To the best of my knowledge, the paper presents an original result and presents a good-faith effort to compare to existing, alternative systems (showing that they outperform on the tasks of interest).",
"Cons: - The contributions of the paper should be more clearly stated (see comment above).",
"- The section describing their approach is not \"self contained\" and is difficult for an unlearned reader to follow.",
"- The problem the authors have chosen to tackle is perhaps a bit \"specific\", since the performance of their approach is only really shown to exceed the performance on agents, like centipedes or snakes, which have this \"modular\" quality.",
"I certainly hope the authors improve the quality of the theory section; the poor presentation here brings down the rest of the paper, which is otherwise an easy read."
] | {
"criticism": 7,
"example": 4,
"importance_and_relevance": 5,
"materials_and_methods": 8,
"praise": 5,
"presentation_and_reporting": 14,
"results_and_discussion": 1,
"suggestion_and_solution": 11,
"total": 23
} | 0.304348 | 0.173913 | 0.217391 | 0.347826 | 0.217391 | 0.608696 | 0.043478 | 0.478261 | 23 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.391304 | 0.813061 | 1.578243 |
Distributional Adversarial Networks | In most current formulations of adversarial training, the discriminators can be expressed as single-input operators, that is, the mapping they define is separable over observations. In this work, we argue that this property might help explain the infamous mode collapse phenomenon in adversarially-trained generative models. Inspired by discrepancy measures and two-sample tests between probability distributions, we propose distributional adversaries that operate on samples, i.e., on sets of multiple points drawn from a distribution, rather than on single observations. We show how they can be easily implemented on top of existing models. Various experimental results show that generators trained in combination with our distributional adversaries are much more stable and are remarkably less prone to mode collapse than traditional models trained with observation-wise prediction discriminators. In addition, the application of our framework to domain adaptation results in strong improvement over recent state-of-the-art. | The paper proposes to replace single-sample discriminators in adversarial training with discriminators that explicitly operate on distributions of examples, so as to incentivize the generator to cover the full distribution of the training data and not collapse to isolated modes. The idea of avoiding mode collapse by providing multiple samples to the discriminator is not new; the paper acknowledges prior work on minibatch discrimination but does not really describe the differences with previous work in any technical detail. Not being highly familiar with this literature, my reading is that the scheme in this paper grounds out into a somewhat different architecture than previous minibatch discriminators, with a nice interpretation in terms of a sample-based approximation to a neural mean embedding. However the paper does not provide any empirical evidence that their approach actually works better than previous approaches to minibatch discrimination. By comparing only to one-sample discriminators it leaves open the (a priori quite plausible) possibility that minibatch discrimination is generally a good idea but that other architectures might work equally well or better, i.e., the experiments do not demonstrate that the MMD machinery that forms the core of the paper has any real purchase. The paper also proposes a two-sample objective DAN-2S, in which the discriminator is asked to classify two sets of samples as coming from the same or different distributions. This is an interesting approach, although empirically it does not appear to have any advantage over the simpler DAN-S -- do the authors agree with this interpretation? If so it is still a worthwhile negative result, but the paper should make this conclusion explicit. Alternately if there are cases when the two-sample test is actually recommended, that should be made explicit as well. Overall this paper seems borderline -- a nice theoretical story, grounding out into a simple architecture that does seem to work in practice (the domain adaptation results are promising), but with somewhat sloppy writing and experimentation that doesn*t clearly demonstrate the value of the proposed approach. I hope the authors continue to improve the paper by comparing to other minibatch discrimination techniques. 
It would also be helpful to see value on a real-world task where mode collapse is explicitly seen as a problem (and/or to provide some intuition for why this would be the case in the Amazon reviews dataset). Specific comments: - Eqn (2.2) is described as representing the limit of a converged discriminator, but it looks like this is just the general gradient of the objective --- where does D* enter into the picture? - Fig 1: the label R is never explained; why not just use P_x? - Section 5.1 "we use the pure distributional objective for DAN (i.e., setting λ != 0 in (3.5))" should this be λ = 0? - "Results" in the domain adaptation experiments are not clearly explained -- what do the reported numbers represent? (presumably accuracy(stddev) but the figure caption should say this). It is also silly to report accuracy to 2 decimal places when they are clearly not significant at that level. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDistributional Adversarial Networks\nIn most current formulations of adversarial training, the discriminators can be expressed as single-input operators, that is, the mapping they define is separable over observations. In this work, we argue that this property might help explain the infamous mode collapse phenomenon in adversarially-trained generative models. Inspired by discrepancy measures and two-sample tests between probability distributions, we propose distributional adversaries that operate on samples, i.e., on sets of multiple points drawn from a distribution, rather than on single observations. We show how they can be easily implemented on top of existing models. Various experimental results show that generators trained in combination with our distributional adversaries are much more stable and are remarkably less prone to mode collapse than traditional models trained with observation-wise prediction discriminators. In addition, the application of our framework to domain adaptation results in strong improvement over recent state-of-the-art.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper proposes to replace single-sample discriminators in adversarial training with discriminators that explicitly operate on distributions of examples, so as to incentivize the generator to cover the full distribution of the training data and not collapse to isolated modes. The idea of avoiding mode collapse by providing multiple samples to the discriminator is not new; the paper acknowledges prior work on minibatch discrimination but does not really describe the differences with previous work in any technical detail. Not being highly familiar with this literature, my reading is that the scheme in this paper grounds out into a somewhat different architecture than previous minibatch discriminators, with a nice interpretation in terms of a sample-based approximation to a neural mean embedding. However the paper does not provide any empirical evidence that their approach actually works better than previous approaches to minibatch discrimination. By comparing only to one-sample discriminators it leaves open the (a priori quite plausible) possibility that minibatch discrimination is generally a good idea but that other architectures might work equally well or better, i.e., the experiments do not demonstrate that the MMD machinery that forms the core of the paper has any real purchase. The paper also proposes a two-sample objective DAN-2S, in which the discriminator is asked to classify two sets of samples as coming from the same or different distributions. This is an interesting approach, although empirically it does not appear to have any advantage over the simpler DAN-S -- do the authors agree with this interpretation? If so it is still a worthwhile negative result, but the paper should make this conclusion explicit. Alternately if there are cases when the two-sample test is actually recommended, that should be made explicit as well. 
Overall this paper seems borderline -- a nice theoretical story, grounding out into a simple architecture that does seem to work in practice (the domain adaptation results are promising), but with somewhat sloppy writing and experimentation that doesn*t clearly demonstrate the value of the proposed approach. I hope the authors continue to improve the paper by comparing to other minibatch discrimination techniques. It would also be helpful to see value on a real-world task where mode collapse is explicitly seen as a problem (and/or to provide some intuition for why this would be the case in the Amazon reviews dataset). Specific comments: - Eqn (2.2) is described as representing the limit of a converged discriminator, but it looks like this is just the general gradient of the objective --- where does D* enter into the picture? - Fig 1: the label R is never explained; why not just use P_x? - Section 5.1 \"we use the pure distributional objective for DAN (i.e., setting λ != 0 in (3.5))\" should this be λ = 0? - \"Results\" in the domain adaptation experiments are not clearly explained -- what do the reported numbers represent? (presumably accuracy(stddev) but the figure caption should say this). It is also silly to report accuracy to 2 decimal places when they are clearly not significant at that level.",
"role": "assistant"
}
]
| [
" The paper proposes to replace single-sample discriminators in adversarial training with discriminators that explicitly operate on distributions of examples, so as to incentivize the generator to cover the full distribution of the training data and not collapse to isolated modes.",
"The idea of avoiding mode collapse by providing multiple samples to the discriminator is not new; the paper acknowledges prior work on minibatch discrimination but does not really describe the differences with previous work in any technical detail.",
"Not being highly familiar with this literature, my reading is that the scheme in this paper grounds out into a somewhat different architecture than previous minibatch discriminators, with a nice interpretation in terms of a sample-based approximation to a neural mean embedding.",
"However the paper does not provide any empirical evidence that their approach actually works better than previous approaches to minibatch discrimination.",
"By comparing only to one-sample discriminators it leaves open the (a priori quite plausible) possibility that minibatch discrimination is generally a good idea but that other architectures might work equally well or better, i.e., the experiments do not demonstrate that the MMD machinery that forms the core of the paper has any real purchase.",
"The paper also proposes a two-sample objective DAN-2S, in which the discriminator is asked to classify two sets of samples as coming from the same or different distributions.",
"This is an interesting approach, although empirically it does not appear to have any advantage over the simpler DAN-S -- do the authors agree with this interpretation?",
"If so it is still a worthwhile negative result, but the paper should make this conclusion explicit.",
"Alternately if there are cases when the two-sample test is actually recommended, that should be made explicit as well.",
"Overall this paper seems borderline -- a nice theoretical story, grounding out into a simple architecture that does seem to work in practice (the domain adaptation results are promising), but with somewhat sloppy writing and experimentation that doesn*t clearly demonstrate the value of the proposed approach.",
"I hope the authors continue to improve the paper by comparing to other minibatch discrimination techniques.",
"It would also be helpful to see value on a real-world task where mode collapse is explicitly seen as a problem (and/or to provide some intuition for why this would be the case in the Amazon reviews dataset).",
"Specific comments: - Eqn (2.2) is described as representing the limit of a converged discriminator, but it looks like this is just the general gradient of the objective --- where does D* enter into the picture?",
"- Fig 1: the label R is never explained; why not just use P_x?",
"- Section 5.1 \"we use the pure distributional objective for DAN (i.e., setting λ != 0 in (3.5))\" should this be λ = 0?",
"- \"Results\" in the domain adaptation experiments are not clearly explained -- what do the reported numbers represent?",
"(presumably accuracy(stddev) but the figure caption should say this).",
"It is also silly to report accuracy to 2 decimal places when they are clearly not significant at that level."
] | {
"criticism": 8,
"example": 4,
"importance_and_relevance": 3,
"materials_and_methods": 13,
"praise": 5,
"presentation_and_reporting": 7,
"results_and_discussion": 5,
"suggestion_and_solution": 5,
"total": 18
} | 0.444444 | 0.222222 | 0.166667 | 0.722222 | 0.277778 | 0.388889 | 0.277778 | 0.277778 | 18 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.777778 | 2.383217 | 0.394561 |
Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting | Spatiotemporal forecasting has various applications in neuroscience, climate and transportation domain. Traffic forecasting is one canonical example of such learning task. The task is challenging due to (1) complex spatial dependency on road networks, (2) non-linear temporal dynamics with changing road conditions and (3) inherent difficulty of long-term forecasting. To address these challenges, we propose to model the traffic flow as a diffusion process on a directed graph and introduce Diffusion Convolutional Recurrent Neural Network (DCRNN), a deep learning framework for traffic forecasting that incorporates both spatial and temporal dependency in the traffic flow. Specifically, DCRNN captures the spatial dependency using bidirectional random walks on the graph, and the temporal dependency using the encoder-decoder architecture with scheduled sampling. We evaluate the framework on two real-world large-scale road network traffic datasets and observe consistent improvement of 12% - 15% over state-of-the-art baselines | Summary of the reviews: Pros: • A Novel Diffusion Convolutional Recurrent Neural Network framework for traffic forecasting • Apply bidirectional random walks with nice theoretical analysis to capture the spatial dependency • Novel applications of sequence to sequence architecture and the scheduled sampling technique into modeling the temporal dependency in the traffic domain • This work is very well written and easy to follow Cons: • Needs some minor re-organization of contents between the main sections and appendix Detailed comments: D1: Some minor re-organization of contents between the main sections and the appendix will help the reader reduce cross-section references. Some examples: 1. Lemma 2.1 can be put into appendix since it is not proposed by this work while the new theoretical analysis of Appendix 5.2 (or at least a summary) can be moved to the main sections 2. Figure 9 can be moved earlier to the main section since it well supports one of the contributions of the proposed method (using the DCRNN to capture the spatial correlation) D2: Some discussions regarding the comparison of this work to some state-of-the-arts graph embedding techniques using different deep neural network architectures would be a plus | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDiffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting\nSpatiotemporal forecasting has various applications in neuroscience, climate and transportation domain. Traffic forecasting is one canonical example of such learning task. The task is challenging due to (1) complex spatial dependency on road networks, (2) non-linear temporal dynamics with changing road conditions and (3) inherent difficulty of long-term forecasting. To address these challenges, we propose to model the traffic flow as a diffusion process on a directed graph and introduce Diffusion Convolutional Recurrent Neural Network (DCRNN), a deep learning framework for traffic forecasting that incorporates both spatial and temporal dependency in the traffic flow. Specifically, DCRNN captures the spatial dependency using bidirectional random walks on the graph, and the temporal dependency using the encoder-decoder architecture with scheduled sampling. We evaluate the framework on two real-world large-scale road network traffic datasets and observe consistent improvement of 12% - 15% over state-of-the-art baselines",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Summary of the reviews: Pros: • A Novel Diffusion Convolutional Recurrent Neural Network framework for traffic forecasting • Apply bidirectional random walks with nice theoretical analysis to capture the spatial dependency • Novel applications of sequence to sequence architecture and the scheduled sampling technique into modeling the temporal dependency in the traffic domain • This work is very well written and easy to follow Cons: • Needs some minor re-organization of contents between the main sections and appendix Detailed comments: D1: Some minor re-organization of contents between the main sections and the appendix will help the reader reduce cross-section references. Some examples: 1. Lemma 2.1 can be put into appendix since it is not proposed by this work while the new theoretical analysis of Appendix 5.2 (or at least a summary) can be moved to the main sections 2. Figure 9 can be moved earlier to the main section since it well supports one of the contributions of the proposed method (using the DCRNN to capture the spatial correlation) D2: Some discussions regarding the comparison of this work to some state-of-the-arts graph embedding techniques using different deep neural network architectures would be a plus",
"role": "assistant"
}
]
| [
" Summary of the reviews: Pros: • A Novel Diffusion Convolutional Recurrent Neural Network framework for traffic forecasting • Apply bidirectional random walks with nice theoretical analysis to capture the spatial dependency • Novel applications of sequence to sequence architecture and the scheduled sampling technique into modeling the temporal dependency in the traffic domain • This work is very well written and easy to follow Cons: • Needs some minor re-organization of contents between the main sections and appendix Detailed comments: D1: Some minor re-organization of contents between the main sections and the appendix will help the reader reduce cross-section references.",
"Some examples: 1.",
"Lemma 2.1 can be put into appendix since it is not proposed by this work while the new theoretical analysis of Appendix 5.2 (or at least a summary) can be moved to the main sections 2.",
"Figure 9 can be moved earlier to the main section since it well supports one of the contributions of the proposed method (using the DCRNN to capture the spatial correlation) D2: Some discussions regarding the comparison of this work to some state-of-the-arts graph embedding techniques using different deep neural network architectures would be a plus"
] | {
"criticism": 0,
"example": 1,
"importance_and_relevance": 0,
"materials_and_methods": 3,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 1,
"suggestion_and_solution": 3,
"total": 4
} | 0 | 0.25 | 0 | 0.75 | 0.25 | 0.5 | 0.25 | 0.75 | 4 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 2.75 | 1.471623 | 1.278377 |
Zero-shot Cross Language Text Classification | Labeled text classification datasets are typically only available in a few select languages. In order to train a model for e.g news categorization in a language without a suitable text classification dataset there are two options. The first option is to create a new labeled dataset by hand, and the second option is to transfer label information from an existing labeled dataset in a source language to the target language . In this paper we propose a method for sharing label information across languages by means of a language independent text encoder. The encoder will give almost identical representations to multilingual versions of the same text. This means that labeled data in one language can be used to train a classifier that works for the rest of the languages. The encoder is trained independently of any concrete classification task and can therefore subsequently be used for any classification task. We show that it is possible to obtain good performance even in the case where only a comparable corpus of texts is available. | This paper addresses the problem of learning a cross-language text categorizer with no labelled information in the target language. The suggested solution relies on learning cross-lingual embeddings, and training a classifier using labelled data in the source language only. The idea of using cross-lingual or multilingual representations to seamlessly handle documents across languages is not terribly novel as it has been use in multilignual categorization or semantic similarity for some time. This contribution however proposes a clean separation of the multiligual encoder and classifier, as well as a good (but long) section on related prior art. One concern is that the modelling section stays fairly high level and is hardly sufficient, for example to re-implement the models. Many design decisions (e.g. #layers, #units) are not justified. They likely result from preliminary experiments, in that case it should be said. The main concern is that the experiments could be greatly improved. Given the extensive related work section, it is odd that no alternate model is compared to. The details on the experiments are also scarce. For example, are all accuracy results computed on the same 8k test set? If so this should be clearly stated. Why are models tested on small subsets of the available data? You have 493k Italian documents, yet the largest model uses 158k... It is unclear where many such decisions come from -- e.g. Fig 4b misses results for 1000 and 1250 dimensions and Fig 4b has nothing between 68k and 137k, precisely where a crossover happens. In short, it feels like the paper would greatly improve from a clearer modeling description and more careful experimental design. Misc: - Clarify early on what "samples" are in your categorization context. - Given the data set, why use a single-label multiclass setup, rather than multilabel? - Table 1 caption claims an average of 2.3 articles per topic, yet for 200 topics you have 500k to 1.5M articles? - Clarify the use of the first 200 words in each article vs. snippets - Put overall caption in Figs 2-4 on top of (a), (b), otherwise references like Fig 4b are unclear. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. 
Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nZero-shot Cross Language Text Classification\nLabeled text classification datasets are typically only available in a few select languages. In order to train a model for e.g news categorization in a language without a suitable text classification dataset there are two options. The first option is to create a new labeled dataset by hand, and the second option is to transfer label information from an existing labeled dataset in a source language to the target language . In this paper we propose a method for sharing label information across languages by means of a language independent text encoder. The encoder will give almost identical representations to multilingual versions of the same text. This means that labeled data in one language can be used to train a classifier that works for the rest of the languages. The encoder is trained independently of any concrete classification task and can therefore subsequently be used for any classification task. We show that it is possible to obtain good performance even in the case where only a comparable corpus of texts is available.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper addresses the problem of learning a cross-language text categorizer with no labelled information in the target language. The suggested solution relies on learning cross-lingual embeddings, and training a classifier using labelled data in the source language only. The idea of using cross-lingual or multilingual representations to seamlessly handle documents across languages is not terribly novel as it has been use in multilignual categorization or semantic similarity for some time. This contribution however proposes a clean separation of the multiligual encoder and classifier, as well as a good (but long) section on related prior art. One concern is that the modelling section stays fairly high level and is hardly sufficient, for example to re-implement the models. Many design decisions (e.g. #layers, #units) are not justified. They likely result from preliminary experiments, in that case it should be said. The main concern is that the experiments could be greatly improved. Given the extensive related work section, it is odd that no alternate model is compared to. The details on the experiments are also scarce. For example, are all accuracy results computed on the same 8k test set? If so this should be clearly stated. Why are models tested on small subsets of the available data? You have 493k Italian documents, yet the largest model uses 158k... It is unclear where many such decisions come from -- e.g. Fig 4b misses results for 1000 and 1250 dimensions and Fig 4b has nothing between 68k and 137k, precisely where a crossover happens. In short, it feels like the paper would greatly improve from a clearer modeling description and more careful experimental design. Misc: - Clarify early on what \"samples\" are in your categorization context. - Given the data set, why use a single-label multiclass setup, rather than multilabel? - Table 1 caption claims an average of 2.3 articles per topic, yet for 200 topics you have 500k to 1.5M articles? - Clarify the use of the first 200 words in each article vs. snippets - Put overall caption in Figs 2-4 on top of (a), (b), otherwise references like Fig 4b are unclear.",
"role": "assistant"
}
]
| [
" This paper addresses the problem of learning a cross-language text categorizer with no labelled information in the target language.",
"The suggested solution relies on learning cross-lingual embeddings, and training a classifier using labelled data in the source language only.",
"The idea of using cross-lingual or multilingual representations to seamlessly handle documents across languages is not terribly novel as it has been use in multilignual categorization or semantic similarity for some time.",
"This contribution however proposes a clean separation of the multiligual encoder and classifier, as well as a good (but long) section on related prior art.",
"One concern is that the modelling section stays fairly high level and is hardly sufficient, for example to re-implement the models.",
"Many design decisions (e.g.",
"#layers, #units) are not justified.",
"They likely result from preliminary experiments, in that case it should be said.",
"The main concern is that the experiments could be greatly improved.",
"Given the extensive related work section, it is odd that no alternate model is compared to.",
"The details on the experiments are also scarce.",
"For example, are all accuracy results computed on the same 8k test set?",
"If so this should be clearly stated.",
"Why are models tested on small subsets of the available data?",
"You have 493k Italian documents, yet the largest model uses 158k...",
"It is unclear where many such decisions come from -- e.g.",
"Fig 4b misses results for 1000 and 1250 dimensions and Fig 4b has nothing between 68k and 137k, precisely where a crossover happens.",
"In short, it feels like the paper would greatly improve from a clearer modeling description and more careful experimental design.",
"Misc: - Clarify early on what \"samples\" are in your categorization context.",
"- Given the data set, why use a single-label multiclass setup, rather than multilabel?",
"- Table 1 caption claims an average of 2.3 articles per topic, yet for 200 topics you have 500k to 1.5M articles?",
"- Clarify the use of the first 200 words in each article vs. snippets - Put overall caption in Figs 2-4 on top of (a), (b), otherwise references like Fig 4b are unclear."
] | {
"criticism": 7,
"example": 5,
"importance_and_relevance": 2,
"materials_and_methods": 13,
"praise": 2,
"presentation_and_reporting": 11,
"results_and_discussion": 3,
"suggestion_and_solution": 6,
"total": 22
} | 0.318182 | 0.227273 | 0.090909 | 0.590909 | 0.090909 | 0.5 | 0.136364 | 0.272727 | 22 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.227273 | 0.948896 | 1.278377 |
Link Weight Prediction with Node Embeddings | Application of deep learning has been successful in various domains such as im- age recognition, speech recognition and natural language processing. However, the research on its application in graph mining is still in an early stage. Here we present the first generic deep learning approach to the graph link weight prediction problem based on node embeddings. We evaluate this approach with three differ- ent node embedding techniques experimentally and compare its performance with two state-of-the-art non deep learning baseline approaches. Our experiment re- sults suggest that this deep learning approach outperforms the baselines by up to 70% depending on the dataset and embedding technique applied. This approach shows that deep learning can be successfully applied to link weight prediction to improve prediction accuracy. | Although this paper aims at an interesting and important task, the reviewer does not feel it is ready to be published. Below are some detailed comments: Pros - Numerous public datasets are used for the experiments - Good introductions for some of the existing methods. Cons - The novelty is limited. The basic idea of the proposed method is to simply concatenate the embeddings of two nodes (via activation separately) from both side of edges, which is straightforward and produces only marginal improvement over existing methods (the comparison of Figure 1 and Figure 3 would suggest this fact). The optimization algorithm is not novel either. - Lack of detailed description and analysis for the proposed model S. In Section 5.2, only brief descriptions are given for the proposed approach. - The selected baseline methods are too weak as competitors, some important relevant methods are also missing in the comparisons. For the graph embedding learning task, one of the state-of-the-art approach is conducting Graph Convolutional Networks (GCNs), and GCNs seem to be able to tackle this problem as well. Moreover, the target task of this paper is mathematically identical to the rating prediction problem (if we treat the weight matrix of the graph as the rating matrix, and consider the nodes as users, for example), which can be loved by a classic collaborative filtering solution such as matrix factorization. The authors probably need to survey and compared against the proposed approach. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLink Weight Prediction with Node Embeddings\nApplication of deep learning has been successful in various domains such as im- age recognition, speech recognition and natural language processing. However, the research on its application in graph mining is still in an early stage. Here we present the first generic deep learning approach to the graph link weight prediction problem based on node embeddings. We evaluate this approach with three differ- ent node embedding techniques experimentally and compare its performance with two state-of-the-art non deep learning baseline approaches. Our experiment re- sults suggest that this deep learning approach outperforms the baselines by up to 70% depending on the dataset and embedding technique applied. This approach shows that deep learning can be successfully applied to link weight prediction to improve prediction accuracy.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Although this paper aims at an interesting and important task, the reviewer does not feel it is ready to be published. Below are some detailed comments: Pros - Numerous public datasets are used for the experiments - Good introductions for some of the existing methods. Cons - The novelty is limited. The basic idea of the proposed method is to simply concatenate the embeddings of two nodes (via activation separately) from both side of edges, which is straightforward and produces only marginal improvement over existing methods (the comparison of Figure 1 and Figure 3 would suggest this fact). The optimization algorithm is not novel either. - Lack of detailed description and analysis for the proposed model S. In Section 5.2, only brief descriptions are given for the proposed approach. - The selected baseline methods are too weak as competitors, some important relevant methods are also missing in the comparisons. For the graph embedding learning task, one of the state-of-the-art approach is conducting Graph Convolutional Networks (GCNs), and GCNs seem to be able to tackle this problem as well. Moreover, the target task of this paper is mathematically identical to the rating prediction problem (if we treat the weight matrix of the graph as the rating matrix, and consider the nodes as users, for example), which can be loved by a classic collaborative filtering solution such as matrix factorization. The authors probably need to survey and compared against the proposed approach.",
"role": "assistant"
}
]
| [
" Although this paper aims at an interesting and important task, the reviewer does not feel it is ready to be published.",
"Below are some detailed comments: Pros - Numerous public datasets are used for the experiments - Good introductions for some of the existing methods.",
"Cons - The novelty is limited.",
"The basic idea of the proposed method is to simply concatenate the embeddings of two nodes (via activation separately) from both side of edges, which is straightforward and produces only marginal improvement over existing methods (the comparison of Figure 1 and Figure 3 would suggest this fact).",
"The optimization algorithm is not novel either.",
"- Lack of detailed description and analysis for the proposed model S. In Section 5.2, only brief descriptions are given for the proposed approach.",
"- The selected baseline methods are too weak as competitors, some important relevant methods are also missing in the comparisons.",
"For the graph embedding learning task, one of the state-of-the-art approach is conducting Graph Convolutional Networks (GCNs), and GCNs seem to be able to tackle this problem as well.",
"Moreover, the target task of this paper is mathematically identical to the rating prediction problem (if we treat the weight matrix of the graph as the rating matrix, and consider the nodes as users, for example), which can be loved by a classic collaborative filtering solution such as matrix factorization.",
"The authors probably need to survey and compared against the proposed approach."
] | {
"criticism": 5,
"example": 1,
"importance_and_relevance": 2,
"materials_and_methods": 8,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 2,
"suggestion_and_solution": 2,
"total": 10
} | 0.5 | 0.1 | 0.2 | 0.8 | 0.2 | 0.1 | 0.2 | 0.2 | 10 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.3 | 2.157958 | 0.142042 |
N2N learning: Network to Network Compression via Policy Gradient Reinforcement Learning | While bigger and deeper neural network architectures continue to advance the state-of-the-art for many computer vision tasks, real-world adoption of these networks is impeded by hardware and speed constraints. Conventional model compression methods attempt to address this problem by modifying the architecture manually or using pre-defined heuristics. Since the space of all reduced architectures is very large, modifying the architecture of a deep neural network in this way is a difficult task. In this paper, we tackle this issue by introducing a principled method for learning reduced network architectures in a data-driven way using reinforcement learning. Our approach takes a larger *teacher* network as input and outputs a compressed *student* network derived from the *teacher* network. In the first stage of our method, a recurrent policy network aggressively removes layers from the large *teacher* model. In the second stage, another recurrent policy network carefully reduces the size of each remaining layer. The resulting network is then evaluated to obtain a reward -- a score based on the accuracy and compression of the network. Our approach uses this reward signal with policy gradients to train the policies to find a locally optimal student network. Our experiments show that we can achieve compression rates of more than 10x for models such as ResNet-34 while maintaining similar performance to the input *teacher* network. We also present a valuable transfer learning result which shows that policies which are pre-trained on smaller *teacher* networks can be used to rapidly speed up training on larger *teacher* networks. | On the positive side the paper is well written and the problem is interesting. On the negative side there is very limited innovation in the techniques proposed, that are indeed small variations of existing methods. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nN2N learning: Network to Network Compression via Policy Gradient Reinforcement Learning\nWhile bigger and deeper neural network architectures continue to advance the state-of-the-art for many computer vision tasks, real-world adoption of these networks is impeded by hardware and speed constraints. Conventional model compression methods attempt to address this problem by modifying the architecture manually or using pre-defined heuristics. Since the space of all reduced architectures is very large, modifying the architecture of a deep neural network in this way is a difficult task. In this paper, we tackle this issue by introducing a principled method for learning reduced network architectures in a data-driven way using reinforcement learning. Our approach takes a larger *teacher* network as input and outputs a compressed *student* network derived from the *teacher* network. In the first stage of our method, a recurrent policy network aggressively removes layers from the large *teacher* model. In the second stage, another recurrent policy network carefully reduces the size of each remaining layer. The resulting network is then evaluated to obtain a reward -- a score based on the accuracy and compression of the network. Our approach uses this reward signal with policy gradients to train the policies to find a locally optimal student network. Our experiments show that we can achieve compression rates of more than 10x for models such as ResNet-34 while maintaining similar performance to the input *teacher* network. We also present a valuable transfer learning result which shows that policies which are pre-trained on smaller *teacher* networks can be used to rapidly speed up training on larger *teacher* networks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n On the positive side the paper is well written and the problem is interesting. On the negative side there is very limited innovation in the techniques proposed, that are indeed small variations of existing methods.",
"role": "assistant"
}
]
| [
" On the positive side the paper is well written and the problem is interesting.",
"On the negative side there is very limited innovation in the techniques proposed, that are indeed small variations of existing methods."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 1,
"praise": 1,
"presentation_and_reporting": 1,
"results_and_discussion": 0,
"suggestion_and_solution": 0,
"total": 2
} | 0.5 | 0 | 1 | 0.5 | 0.5 | 0.5 | 0 | 0 | 2 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 3 | 1.090326 | 1.909674 |
SGD Learns Over-parameterized Networks that Provably Generalize on Linearly Separable Data | Neural networks exhibit good generalization behavior in the over-parameterized regime, where the number of network parameters exceeds the number of observations. Nonetheless, current generalization bounds for neural networks fail to explain this phenomenon. In an attempt to bridge this gap, we study the problem of learning a two-layer over-parameterized neural network, when the data is generated by a linearly separable function. In the case where the network has Leaky ReLU activations, we provide both optimization and generalization guarantees for over-parameterized networks. Specifically, we prove convergence rates of SGD to a global minimum and provide generalization guarantees for this global minimum that are independent of the network size. Therefore, our result clearly shows that the use of SGD for optimization both finds a global minimum, and avoids overfitting despite the high capacity of the model. This is the first theoretical demonstration that SGD can avoid overfitting, when learning over-specified neural network classifiers. | Paper studies an interesting phenomenon of overparameterised models being able to learn well-generalising solutions. It focuses on a setting with three crucial simplifications: - data is linearly separable - model is 1-hidden layer feed forward network with homogenous activations - **only input-hidden layer weights** are trained, while the hidden-output layer*s weights are fixed to be (v, v, v, ..., v, -v, -v, -v, ..., -v) (in particular -- (1,1,...,1,-1,-1,...,-1)) While the last assumption does not limit the expressiveness of the model in any way, as homogenous activations have the property of f(ax)=af(x) (for positive a) and so for any unconstrained model in the second layer, we can "propagate" its weights back into first layer and obtain functionally equivalent network. However, learning dynamics of a model of form z(x) = SUM( g(Wx+b) ) - SUM( g(Vx+c) ) + d and "standard" neural model z(x) = Vg(Wx+b)+c can be completely different. Consequently, while the results are very interesting, claiming their applicability to the deep models is (at this point) far fetched. In particular, abstract suggests no simplifications are being made, which does not correspond to actual result in the paper. The results themselves are interesting, but due to the above restriction it is not clear whether it sheds any light on neural nets, or simply described a behaviour of very specific, non-standard shallow model. I am happy to revisit my current rating given authors rephrase the paper so that the simplifications being made are clear both in abstract and in the text, and that (at least empirically) it does not affect learning in practice. In other words - all the experiments in the paper follow the assumption made, if authors claim is that the restriction introduced does not matter, but make proofs too technical - at least experimental section should show this. If the claims do not hold empirically without the assumptions made, then the assumptions are not realistic and cannot be used for explaining the behaviour of models we are interested in. 
Pros: - tackling a hard problem of overparametrised models, without introducing common unrealistic assumptions of activations independence - very nice result of "phase change" dependend on the size of hidden layer in section 7 Cons: - simplification with non-trainable second layer is currently not well studied in the paper; and while not affecting expressive power - it is something that can change learning dynamics completely # After the update Authors addressed my concerns by: - making simplification assumption clearer in the text - adding empirical evaluation without the assumption - weakening the assumptions I find these modifications satisfactory and rating has been updated accordingly. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSGD Learns Over-parameterized Networks that Provably Generalize on Linearly Separable Data\nNeural networks exhibit good generalization behavior in the over-parameterized regime, where the number of network parameters exceeds the number of observations. Nonetheless, current generalization bounds for neural networks fail to explain this phenomenon. In an attempt to bridge this gap, we study the problem of learning a two-layer over-parameterized neural network, when the data is generated by a linearly separable function. In the case where the network has Leaky ReLU activations, we provide both optimization and generalization guarantees for over-parameterized networks. Specifically, we prove convergence rates of SGD to a global minimum and provide generalization guarantees for this global minimum that are independent of the network size. Therefore, our result clearly shows that the use of SGD for optimization both finds a global minimum, and avoids overfitting despite the high capacity of the model. This is the first theoretical demonstration that SGD can avoid overfitting, when learning over-specified neural network classifiers.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Paper studies an interesting phenomenon of overparameterised models being able to learn well-generalising solutions. It focuses on a setting with three crucial simplifications: - data is linearly separable - model is 1-hidden layer feed forward network with homogenous activations - **only input-hidden layer weights** are trained, while the hidden-output layer*s weights are fixed to be (v, v, v, ..., v, -v, -v, -v, ..., -v) (in particular -- (1,1,...,1,-1,-1,...,-1)) While the last assumption does not limit the expressiveness of the model in any way, as homogenous activations have the property of f(ax)=af(x) (for positive a) and so for any unconstrained model in the second layer, we can \"propagate\" its weights back into first layer and obtain functionally equivalent network. However, learning dynamics of a model of form z(x) = SUM( g(Wx+b) ) - SUM( g(Vx+c) ) + d and \"standard\" neural model z(x) = Vg(Wx+b)+c can be completely different. Consequently, while the results are very interesting, claiming their applicability to the deep models is (at this point) far fetched. In particular, abstract suggests no simplifications are being made, which does not correspond to actual result in the paper. The results themselves are interesting, but due to the above restriction it is not clear whether it sheds any light on neural nets, or simply described a behaviour of very specific, non-standard shallow model. I am happy to revisit my current rating given authors rephrase the paper so that the simplifications being made are clear both in abstract and in the text, and that (at least empirically) it does not affect learning in practice. In other words - all the experiments in the paper follow the assumption made, if authors claim is that the restriction introduced does not matter, but make proofs too technical - at least experimental section should show this. If the claims do not hold empirically without the assumptions made, then the assumptions are not realistic and cannot be used for explaining the behaviour of models we are interested in. 
Pros: - tackling a hard problem of overparametrised models, without introducing common unrealistic assumptions of activations independence - very nice result of \"phase change\" dependent on the size of hidden layer in section 7 Cons: - simplification with non-trainable second layer is currently not well studied in the paper; and while not affecting expressive power - it is something that can change learning dynamics completely # After the update Authors addressed my concerns by: - making simplification assumption clearer in the text - adding empirical evaluation without the assumption - weakening the assumptions I find these modifications satisfactory and rating has been updated accordingly.",
"role": "assistant"
}
]
| [
" Paper studies an interesting phenomenon of overparameterised models being able to learn well-generalising solutions.",
"It focuses on a setting with three crucial simplifications: - data is linearly separable - model is 1-hidden layer feed forward network with homogenous activations - **only input-hidden layer weights** are trained, while the hidden-output layer*s weights are fixed to be (v, v, v, ..., v, -v, -v, -v, ..., -v) (in particular -- (1,1,...,1,-1,-1,...,-1)) While the last assumption does not limit the expressiveness of the model in any way, as homogenous activations have the property of f(ax)=af(x) (for positive a) and so for any unconstrained model in the second layer, we can \"propagate\" its weights back into first layer and obtain functionally equivalent network.",
"However, learning dynamics of a model of form z(x) = SUM( g(Wx+b) ) - SUM( g(Vx+c) ) + d and \"standard\" neural model z(x) = Vg(Wx+b)+c can be completely different.",
"Consequently, while the results are very interesting, claiming their applicability to the deep models is (at this point) far fetched.",
"In particular, abstract suggests no simplifications are being made, which does not correspond to actual result in the paper.",
"The results themselves are interesting, but due to the above restriction it is not clear whether it sheds any light on neural nets, or simply described a behaviour of very specific, non-standard shallow model.",
"I am happy to revisit my current rating given authors rephrase the paper so that the simplifications being made are clear both in abstract and in the text, and that (at least empirically) it does not affect learning in practice.",
"In other words - all the experiments in the paper follow the assumption made, if authors claim is that the restriction introduced does not matter, but make proofs too technical - at least experimental section should show this.",
"If the claims do not hold empirically without the assumptions made, then the assumptions are not realistic and cannot be used for explaining the behaviour of models we are interested in.",
"Pros: - tackling a hard problem of overparametrised models, without introducing common unrealistic assumptions of activations independence - very nice result of \"phase change\" dependend on the size of hidden layer in section 7 Cons: - simplification with non-trainable second layer is currently not well studied in the paper; and while not affecting expressive power - it is something that can change learning dynamics completely # After the update Authors addressed my concerns by: - making simplification assumption clearer in the text - adding empirical evaluation without the assumption - weakening the assumptions I find these modifications satisfactory and rating has been updated accordingly."
] | {
"criticism": 6,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 8,
"praise": 4,
"presentation_and_reporting": 3,
"results_and_discussion": 5,
"suggestion_and_solution": 1,
"total": 10
} | 0.6 | 0 | 0.3 | 0.8 | 0.4 | 0.3 | 0.5 | 0.1 | 10 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 3 | 2.857958 | 0.142042 |
THE EFFECTIVENESS OF A TWO-LAYER NEURAL NETWORK FOR RECOMMENDATIONS | We present a personalized recommender system using neural network for recommending products, such as eBooks, audio-books, Mobile Apps, Video and Music. It produces recommendations based on customer’s implicit feedback history such as purchases, listens or watches. Our key contribution is to formulate recommendation problem as a model that encodes historical behavior to predict the future behavior using soft data split, combining predictor and auto-encoder models. We introduce convolutional layer for learning the importance (time decay) of the purchases depending on their purchase date and demonstrate that the shape of the time decay function can be well approximated by a parametrical function. We present offline experimental results showing that neural networks with two hidden layers can capture seasonality changes, and at the same time outperform other modeling techniques, including our recommender in production. Most importantly, we demonstrate that our model can be scaled to all digital categories, and we observe significant improvements in an online A/B test. We also discuss key enhancements to the neural network model and describe our production pipeline. Finally we open-sourced our deep learning library which supports multi-gpu model parallel training. This is an important feature in building neural network based recommenders with large dimensionality of input and output data. | The paper proposes a new neural network based method for recommendation. The main finding of the paper is that a relatively simple method works for recommendation, compared to other methods based on neural networks that have been recently proposed. This contribution is not bad for an empirical paper. There*s certainly not that much here that*s groundbreaking methodologically, though it*s certainly nice to know that a simple and scalable method works. There*s not much detail about the data (it is after all an industrial paper). It would certainly be helpful to know how well the proposed method performs on a few standard recommender systems benchmark datasets (compared to the same baselines), in order to get a sense as to whether the improvement is actually due to having a better model, versus being due to some unique attributes of this particular industrial dataset under consideration. As it is, I am a little concerned that this may be a method that happens to work well for the types of data the authors are considering but may not work elsewhere. Other than that, it*s nice to see an evaluation on real production data, and it*s nice that the authors have provided enough info that the method should be (more or less) reproducible. There*s some slight concern that maybe this paper would be better for the industry track of some conference, given that it*s focused on an empirical evaluation rather than really making much of a methodological contribution. Again, this could be somewhat alleviated by evaluating on some standard and reproducible benchmarks. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. 
Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTHE EFFECTIVENESS OF A TWO-LAYER NEURAL NETWORK FOR RECOMMENDATIONS\nWe present a personalized recommender system using neural network for recommending products, such as eBooks, audio-books, Mobile Apps, Video and Music. It produces recommendations based on customer’s implicit feedback history such as purchases, listens or watches. Our key contribution is to formulate recommendation problem as a model that encodes historical behavior to predict the future behavior using soft data split, combining predictor and auto-encoder models. We introduce convolutional layer for learning the importance (time decay) of the purchases depending on their purchase date and demonstrate that the shape of the time decay function can be well approximated by a parametrical function. We present offline experimental results showing that neural networks with two hidden layers can capture seasonality changes, and at the same time outperform other modeling techniques, including our recommender in production. Most importantly, we demonstrate that our model can be scaled to all digital categories, and we observe significant improvements in an online A/B test. We also discuss key enhancements to the neural network model and describe our production pipeline. Finally we open-sourced our deep learning library which supports multi-gpu model parallel training. This is an important feature in building neural network based recommenders with large dimensionality of input and output data.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper proposes a new neural network based method for recommendation. The main finding of the paper is that a relatively simple method works for recommendation, compared to other methods based on neural networks that have been recently proposed. This contribution is not bad for an empirical paper. There*s certainly not that much here that*s groundbreaking methodologically, though it*s certainly nice to know that a simple and scalable method works. There*s not much detail about the data (it is after all an industrial paper). It would certainly be helpful to know how well the proposed method performs on a few standard recommender systems benchmark datasets (compared to the same baselines), in order to get a sense as to whether the improvement is actually due to having a better model, versus being due to some unique attributes of this particular industrial dataset under consideration. As it is, I am a little concerned that this may be a method that happens to work well for the types of data the authors are considering but may not work elsewhere. Other than that, it*s nice to see an evaluation on real production data, and it*s nice that the authors have provided enough info that the method should be (more or less) reproducible. There*s some slight concern that maybe this paper would be better for the industry track of some conference, given that it*s focused on an empirical evaluation rather than really making much of a methodological contribution. Again, this could be somewhat alleviated by evaluating on some standard and reproducible benchmarks.",
"role": "assistant"
}
]
| [
" The paper proposes a new neural network based method for recommendation.",
"The main finding of the paper is that a relatively simple method works for recommendation, compared to other methods based on neural networks that have been recently proposed.",
"This contribution is not bad for an empirical paper.",
"There*s certainly not that much here that*s groundbreaking methodologically, though it*s certainly nice to know that a simple and scalable method works.",
"There*s not much detail about the data (it is after all an industrial paper).",
"It would certainly be helpful to know how well the proposed method performs on a few standard recommender systems benchmark datasets (compared to the same baselines), in order to get a sense as to whether the improvement is actually due to having a better model, versus being due to some unique attributes of this particular industrial dataset under consideration.",
"As it is, I am a little concerned that this may be a method that happens to work well for the types of data the authors are considering but may not work elsewhere.",
"Other than that, it*s nice to see an evaluation on real production data, and it*s nice that the authors have provided enough info that the method should be (more or less) reproducible.",
"There*s some slight concern that maybe this paper would be better for the industry track of some conference, given that it*s focused on an empirical evaluation rather than really making much of a methodological contribution.",
"Again, this could be somewhat alleviated by evaluating on some standard and reproducible benchmarks."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 4,
"materials_and_methods": 9,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 4,
"total": 10
} | 0.3 | 0 | 0.4 | 0.9 | 0.2 | 0.1 | 0.1 | 0.4 | 10 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.4 | 2.257958 | 0.142042 |
Latent forward model for Real-time Strategy game planning with incomplete information | Model-free deep reinforcement learning approaches have shown superhuman performance in simulated environments (e.g., Atari games, Go, etc). During training, these approaches often implicitly construct a latent space that contains key information for decision making. In this paper, we learn a forward model on this latent space and apply it to model-based planning in miniature Real-time Strategy game with incomplete information (MiniRTS). We first show that the latent space constructed from existing actor-critic models contains relevant information of the game, and design training procedure to learn forward models. We also show that our learned forward model can predict meaningful future state and is usable for latent space Monte-Carlo Tree Search (MCTS), in terms of win rates against rule-based agents. | Summary: This paper proposes to use the latent representations learned by a model-free RL agent to learn a transition model for use in model-based RL (specifically MCTS). The paper introduces a strong model-free baseline (win rate ~80% in the MiniRTS environment) and shows that the latent space learned by this baseline does include relevant game information. They use the latent state representation to learn a model for planning, which performs slightly better than a random baseline (win rate ~25%). Pros: - Improvement of the model-free method from previous work by incorporating information about previously observed states, demonstrating the importance of memory. - Interesting evaluation of which input features are important for the model-free algorithm, such as base HP ratio and the amount of resources available. Cons: - The model-based approach is disappointing compared to the model-free approach. Quality and Clarity: The paper in general is well-written and easy to follow and seems technically correct, though I found some of the figures and definitions confusing, specifically: - The terms for different forward models are not defined (e.g. MatchPi, MatchA, etc.). I can infer what they mean based on Figure 1 but it would be helpful to readers to define them explicitly. - In Figure 3b, it is not clear to me what the difference between the red and blue curves is. - In Figure 4, it would be helpful to label which color corresponds to the agent and which to the rule-based AI. - The caption in Figure 8 is malformatted. - In Figure 7, the baseline of .hat{h_t}=h_{t-2} seems strange---I would find it more useful for Figure 7 to compare to the performance if the model were not used (i.e. if .hat{h_t}=h_t) to see how much performance suffers as a result of model error. Originality: I am unfamiliar with the MiniRTS environment, but given that it is only published in this year*s NIPS (and that I couldn*t find any other papers about it on Google Scholar) it seems that this is the first paper to compare model-free and model-based approaches in this domain. However, the model-free approach does not seem particularly novel in that it is just an extension of that from Tian et al. (2017) plus some additional features. The idea of learning a model based on the features from a model-free agent seems novel but lacks significance in that the results are not very compelling (see below). Significance: I feel the paper overstates the results in saying that the learned forward model is usable in MCTS. 
The implication in the abstract and introduction (at least as I interpreted it) is that the learned model would outperform a model-free method, but upon reading the rest of the paper I was disappointed to learn that in reality it drastically underperforms. The baseline used in the paper is a random baseline, which seems a bit unfair---a good baseline is usually an algorithm that is an obvious first choice, such as the model-free approach. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLatent forward model for Real-time Strategy game planning with incomplete information\nModel-free deep reinforcement learning approaches have shown superhuman performance in simulated environments (e.g., Atari games, Go, etc). During training, these approaches often implicitly construct a latent space that contains key information for decision making. In this paper, we learn a forward model on this latent space and apply it to model-based planning in miniature Real-time Strategy game with incomplete information (MiniRTS). We first show that the latent space constructed from existing actor-critic models contains relevant information of the game, and design training procedure to learn forward models. We also show that our learned forward model can predict meaningful future state and is usable for latent space Monte-Carlo Tree Search (MCTS), in terms of win rates against rule-based agents.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Summary: This paper proposes to use the latent representations learned by a model-free RL agent to learn a transition model for use in model-based RL (specifically MCTS). The paper introduces a strong model-free baseline (win rate ~80% in the MiniRTS environment) and shows that the latent space learned by this baseline does include relevant game information. They use the latent state representation to learn a model for planning, which performs slightly better than a random baseline (win rate ~25%). Pros: - Improvement of the model-free method from previous work by incorporating information about previously observed states, demonstrating the importance of memory. - Interesting evaluation of which input features are important for the model-free algorithm, such as base HP ratio and the amount of resources available. Cons: - The model-based approach is disappointing compared to the model-free approach. Quality and Clarity: The paper in general is well-written and easy to follow and seems technically correct, though I found some of the figures and definitions confusing, specifically: - The terms for different forward models are not defined (e.g. MatchPi, MatchA, etc.). I can infer what they mean based on Figure 1 but it would be helpful to readers to define them explicitly. - In Figure 3b, it is not clear to me what the difference between the red and blue curves is. - In Figure 4, it would be helpful to label which color corresponds to the agent and which to the rule-based AI. - The caption in Figure 8 is malformatted. - In Figure 7, the baseline of .hat{h_t}=h_{t-2} seems strange---I would find it more useful for Figure 7 to compare to the performance if the model were not used (i.e. if .hat{h_t}=h_t) to see how much performance suffers as a result of model error. Originality: I am unfamiliar with the MiniRTS environment, but given that it is only published in this year*s NIPS (and that I couldn*t find any other papers about it on Google Scholar) it seems that this is the first paper to compare model-free and model-based approaches in this domain. 
However, the model-free approach does not seem particularly novel in that it is just an extension of that from Tian et al. (2017) plus some additional features. The idea of learning a model based on the features from a model-free agent seems novel but lacks significance in that the results are not very compelling (see below). Significance: I feel the paper overstates the results in saying that the learned forward model is usable in MCTS. The implication in the abstract and introduction (at least as I interpreted it) is that the learned model would outperform a model-free method, but upon reading the rest of the paper I was disappointed to learn that in reality it drastically underperforms. The baseline used in the paper is a random baseline, which seems a bit unfair---a good baseline is usually an algorithm that is an obvious first choice, such as the model-free approach.",
"role": "assistant"
}
]
| [
" Summary: This paper proposes to use the latent representations learned by a model-free RL agent to learn a transition model for use in model-based RL (specifically MCTS).",
"The paper introduces a strong model-free baseline (win rate ~80% in the MiniRTS environment) and shows that the latent space learned by this baseline does include relevant game information.",
"They use the latent state representation to learn a model for planning, which performs slightly better than a random baseline (win rate ~25%).",
"Pros: - Improvement of the model-free method from previous work by incorporating information about previously observed states, demonstrating the importance of memory.",
"- Interesting evaluation of which input features are important for the model-free algorithm, such as base HP ratio and the amount of resources available.",
"Cons: - The model-based approach is disappointing compared to the model-free approach.",
"Quality and Clarity: The paper in general is well-written and easy to follow and seems technically correct, though I found some of the figures and definitions confusing, specifically: - The terms for different forward models are not defined (e.g.",
"MatchPi, MatchA, etc.).",
"I can infer what they mean based on Figure 1 but it would be helpful to readers to define them explicitly.",
"- In Figure 3b, it is not clear to me what the difference between the red and blue curves is.",
"- In Figure 4, it would be helpful to label which color corresponds to the agent and which to the rule-based AI.",
"- The caption in Figure 8 is malformatted.",
"- In Figure 7, the baseline of .hat{h_t}=h_{t-2} seems strange---I would find it more useful for Figure 7 to compare to the performance if the model were not used (i.e.",
"if .hat{h_t}=h_t) to see how much performance suffers as a result of model error.",
"Originality: I am unfamiliar with the MiniRTS environment, but given that it is only published in this year*s NIPS (and that I couldn*t find any other papers about it on Google Scholar) it seems that this is the first paper to compare model-free and model-based approaches in this domain.",
"However, the model-free approach does not seem particularly novel in that it is just an extension of that from Tian et al.",
"(2017) plus some additional features.",
"The idea of learning a model based on the features from a model-free agent seems novel but lacks significance in that the results are not very compelling (see below).",
"Significance: I feel the paper overstates the results in saying that the learned forward model is usable in MCTS.",
"The implication in the abstract and introduction (at least as I interpreted it) is that the learned model would outperform a model-free method, but upon reading the rest of the paper I was disappointed to learn that in reality it drastically underperforms.",
"The baseline used in the paper is a random baseline, which seems a bit unfair---a good baseline is usually an algorithm that is an obvious first choice, such as the model-free approach."
] | {
"criticism": 9,
"example": 5,
"importance_and_relevance": 3,
"materials_and_methods": 16,
"praise": 3,
"presentation_and_reporting": 5,
"results_and_discussion": 3,
"suggestion_and_solution": 5,
"total": 21
} | 0.428571 | 0.238095 | 0.142857 | 0.761905 | 0.142857 | 0.238095 | 0.142857 | 0.238095 | 21 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.333333 | 1.323258 | 1.010076 |
Quadrature-based features for kernel approximation | We consider the problem of improving kernel approximation via feature maps. These maps arise as Monte Carlo approximation to integral representations of kernel functions and scale up kernel methods for larger datasets. We propose to use more efficient numerical integration technique to obtain better estimates of the integrals compared to the state-of-the-art methods. Our approach allows to use information about the integrand to enhance approximation and facilitates fast computations. We derive the convergence behavior and conduct an extensive empirical study that supports our hypothesis. | The authors offer a novel version of the random feature map approach to approximately solving large-scale kernel problems: each feature map evaluates the "fourier feature" corresponding to the kernel at a set of randomly sampled quadrature points. This gives an unbiased kernel estimator; they prove a bound its variance and provide experiment evidence that for Gaussian and arc-cos kernels, their suggested qaudrature rule outperforms previous random feature maps in terms of kernel approximation error and in terms of downstream classification and regression tasks. The idea is straightforward, the analysis seems correct, and the experiments suggest the method has superior accuracy compared to prior RFMs for shift-invariant kernels. The work is original, but I would say incremental, and the relevant literature is cited. The method seems to give significantly lower kernel approximation errors, but the significance of the performance difference in downstream ML tasks is unclear --- the confidence intervals of the different methods overlap sufficiently to make it questionable whether the relative complexity of this method is worth the effort. Since good performance on downstream tasks is the crucial feature that we want RFMs to have, it is not clear that this method represents a true improvement over the state-of-the-art. The exposition of the quadrature method is difficult to follow, and the connection between the quadrature rules and the random feature map is never explicitly stated: e.g. equation 6 says how the kernel function is approximated as an integral, but does not give the feature map that an ML practitioner should use to get that approximate integral. It would have been a good idea to include figures showing the time-accuracy tradeoff of the various methods, which is more important in large-scale ML applications than the kernel approximation error. It is not clear that the method is *not* more expensive in practice than previous methods (Table 1 gives superior asymptotic runtimes, but I would like to see actual run times, as evaluating the feature maps sound relatively complicated compared to other RFMs). On a related note, I would also like to have seen this method applied to kernels where the probability density in the Bochner integral was not the Gaussian density (e.g., the Laplacian kernel): the authors suggested that their method works there as well when one uses a Gaussian approximation of the density (which is not clear to me), --- and it may be the case that sampling from their quadrature distribution is faster than sampling from the original non-Gaussian density. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. 
Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nQuadrature-based features for kernel approximation\nWe consider the problem of improving kernel approximation via feature maps. These maps arise as Monte Carlo approximation to integral representations of kernel functions and scale up kernel methods for larger datasets. We propose to use more efficient numerical integration technique to obtain better estimates of the integrals compared to the state-of-the-art methods. Our approach allows to use information about the integrand to enhance approximation and facilitates fast computations. We derive the convergence behavior and conduct an extensive empirical study that supports our hypothesis.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors offer a novel version of the random feature map approach to approximately solving large-scale kernel problems: each feature map evaluates the \"fourier feature\" corresponding to the kernel at a set of randomly sampled quadrature points. This gives an unbiased kernel estimator; they prove a bound its variance and provide experiment evidence that for Gaussian and arc-cos kernels, their suggested qaudrature rule outperforms previous random feature maps in terms of kernel approximation error and in terms of downstream classification and regression tasks. The idea is straightforward, the analysis seems correct, and the experiments suggest the method has superior accuracy compared to prior RFMs for shift-invariant kernels. The work is original, but I would say incremental, and the relevant literature is cited. The method seems to give significantly lower kernel approximation errors, but the significance of the performance difference in downstream ML tasks is unclear --- the confidence intervals of the different methods overlap sufficiently to make it questionable whether the relative complexity of this method is worth the effort. Since good performance on downstream tasks is the crucial feature that we want RFMs to have, it is not clear that this method represents a true improvement over the state-of-the-art. The exposition of the quadrature method is difficult to follow, and the connection between the quadrature rules and the random feature map is never explicitly stated: e.g. equation 6 says how the kernel function is approximated as an integral, but does not give the feature map that an ML practitioner should use to get that approximate integral. It would have been a good idea to include figures showing the time-accuracy tradeoff of the various methods, which is more important in large-scale ML applications than the kernel approximation error. It is not clear that the method is *not* more expensive in practice than previous methods (Table 1 gives superior asymptotic runtimes, but I would like to see actual run times, as evaluating the feature maps sound relatively complicated compared to other RFMs). 
On a related note, I would also like to have seen this method applied to kernels where the probability density in the Bochner integral was not the Gaussian density (e.g., the Laplacian kernel): the authors suggested that their method works there as well when one uses a Gaussian approximation of the density (which is not clear to me), --- and it may be the case that sampling from their quadrature distribution is faster than sampling from the original non-Gaussian density.",
"role": "assistant"
}
]
| [
" The authors offer a novel version of the random feature map approach to approximately solving large-scale kernel problems: each feature map evaluates the \"fourier feature\" corresponding to the kernel at a set of randomly sampled quadrature points.",
"This gives an unbiased kernel estimator; they prove a bound its variance and provide experiment evidence that for Gaussian and arc-cos kernels, their suggested qaudrature rule outperforms previous random feature maps in terms of kernel approximation error and in terms of downstream classification and regression tasks.",
"The idea is straightforward, the analysis seems correct, and the experiments suggest the method has superior accuracy compared to prior RFMs for shift-invariant kernels.",
"The work is original, but I would say incremental, and the relevant literature is cited.",
"The method seems to give significantly lower kernel approximation errors, but the significance of the performance difference in downstream ML tasks is unclear --- the confidence intervals of the different methods overlap sufficiently to make it questionable whether the relative complexity of this method is worth the effort.",
"Since good performance on downstream tasks is the crucial feature that we want RFMs to have, it is not clear that this method represents a true improvement over the state-of-the-art.",
"The exposition of the quadrature method is difficult to follow, and the connection between the quadrature rules and the random feature map is never explicitly stated: e.g.",
"equation 6 says how the kernel function is approximated as an integral, but does not give the feature map that an ML practitioner should use to get that approximate integral.",
"It would have been a good idea to include figures showing the time-accuracy tradeoff of the various methods, which is more important in large-scale ML applications than the kernel approximation error.",
"It is not clear that the method is *not* more expensive in practice than previous methods (Table 1 gives superior asymptotic runtimes, but I would like to see actual run times, as evaluating the feature maps sound relatively complicated compared to other RFMs).",
"On a related note, I would also like to have seen this method applied to kernels where the probability density in the Bochner integral was not the Gaussian density (e.g., the Laplacian kernel): the authors suggested that their method works there as well when one uses a Gaussian approximation of the density (which is not clear to me), --- and it may be the case that sampling from their quadrature distribution is faster than sampling from the original non-Gaussian density."
] | {
"criticism": 6,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 10,
"praise": 4,
"presentation_and_reporting": 2,
"results_and_discussion": 6,
"suggestion_and_solution": 4,
"total": 11
} | 0.545455 | 0 | 0.181818 | 0.909091 | 0.363636 | 0.181818 | 0.545455 | 0.363636 | 11 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 3.090909 | 3.027779 | 0.06313 |
Enhance Word Representation for Out-of-Vocabulary on Ubuntu Dialogue Corpus | Ubuntu dialogue corpus is the largest public available dialogue corpus to make it feasible to build end-to-end deep neural network models directly from the conversation data. One challenge of Ubuntu dialogue corpus is the large number of out-of-vocabulary words. In this paper we proposed an algorithm which combines the general pre-trained word embedding vectors with those generated on the task-specific training set to address this issue. We integrated character embedding into Chen et al*s Enhanced LSTM method (ESIM) and used it to evaluate the effectiveness of our proposed method. For the task of next utterance selection, the proposed method has demonstrated a significant performance improvement against original ESIM and the new model has achieved state-of-the-art results on both Ubuntu dialogue corpus and Douban conversation corpus. In addition, we investigated the performance impact of end-of-utterance and end-of-turn token tags. | The paper considers a setting (Ubuntu Dialogue Corpus and Douban Conversation Corpus) where most word types in the data are not covered by pretrained representations. The proposed solution is to combine (1) external pretrained word embeddings and (2) pretrained word embeddings on the training data by keeping them as two views: use the view if it*s available, otherwise use a zero vector. This scheme is shown to perform well compared to other methods, specifically combinations of pretraining vs not pretraining embeddings on the training data, updating vs not updating embeddings during training, and others. Quality: Low. The research is not very well modularized: the addressed problem has nothing specifically to do with ESIM and dialogue response classification, but it*s all tangled up. The proposed solution is reasonable but rather minor. Given that the model will learn task-specific word representations on the training set anyway, it*s not clear how important it is to follow this procedure, though minor improvement is reported (Table 5). Clarity: The writing is clear. But the point of the paper is not immediately obvious because of its failure to modularize its contributions (see above). Originality: Low to minor. Significance: It*s not convincing that an incremental improvement in the pretraining phase is so significant, for instance compared to developing a novel better architecture actually tailored to the dialogue task. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. 
I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEnhance Word Representation for Out-of-Vocabulary on Ubuntu Dialogue Corpus\nUbuntu dialogue corpus is the largest public available dialogue corpus to make it feasible to build end-to-end deep neural network models directly from the conversation data. One challenge of Ubuntu dialogue corpus is the large number of out-of-vocabulary words. In this paper we proposed an algorithm which combines the general pre-trained word embedding vectors with those generated on the task-specific training set to address this issue. We integrated character embedding into Chen et al*s Enhanced LSTM method (ESIM) and used it to evaluate the effectiveness of our proposed method. For the task of next utterance selection, the proposed method has demonstrated a significant performance improvement against original ESIM and the new model has achieved state-of-the-art results on both Ubuntu dialogue corpus and Douban conversation corpus. In addition, we investigated the performance impact of end-of-utterance and end-of-turn token tags.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper considers a setting (Ubuntu Dialogue Corpus and Douban Conversation Corpus) where most word types in the data are not covered by pretrained representations. The proposed solution is to combine (1) external pretrained word embeddings and (2) pretrained word embeddings on the training data by keeping them as two views: use the view if it*s available, otherwise use a zero vector. This scheme is shown to perform well compared to other methods, specifically combinations of pretraining vs not pretraining embeddings on the training data, updating vs not updating embeddings during training, and others. Quality: Low. The research is not very well modularized: the addressed problem has nothing specifically to do with ESIM and dialogue response classification, but it*s all tangled up. The proposed solution is reasonable but rather minor. Given that the model will learn task-specific word representations on the training set anyway, it*s not clear how important it is to follow this procedure, though minor improvement is reported (Table 5). Clarity: The writing is clear. But the point of the paper is not immediately obvious because of its failure to modularize its contributions (see above). Originality: Low to minor. Significance: It*s not convincing that an incremental improvement in the pretraining phase is so significant, for instance compared to developing a novel better architecture actually tailored to the dialogue task.",
"role": "assistant"
}
]
| [
" The paper considers a setting (Ubuntu Dialogue Corpus and Douban Conversation Corpus) where most word types in the data are not covered by pretrained representations.",
"The proposed solution is to combine (1) external pretrained word embeddings and (2) pretrained word embeddings on the training data by keeping them as two views: use the view if it*s available, otherwise use a zero vector.",
"This scheme is shown to perform well compared to other methods, specifically combinations of pretraining vs not pretraining embeddings on the training data, updating vs not updating embeddings during training, and others.",
"Quality: Low.",
"The research is not very well modularized: the addressed problem has nothing specifically to do with ESIM and dialogue response classification, but it*s all tangled up.",
"The proposed solution is reasonable but rather minor.",
"Given that the model will learn task-specific word representations on the training set anyway, it*s not clear how important it is to follow this procedure, though minor improvement is reported (Table 5).",
"Clarity: The writing is clear.",
"But the point of the paper is not immediately obvious because of its failure to modularize its contributions (see above).",
"Originality: Low to minor.",
"Significance: It*s not convincing that an incremental improvement in the pretraining phase is so significant, for instance compared to developing a novel better architecture actually tailored to the dialogue task."
] | {
"criticism": 5,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 8,
"praise": 3,
"presentation_and_reporting": 4,
"results_and_discussion": 2,
"suggestion_and_solution": 1,
"total": 11
} | 0.454545 | 0.090909 | 0.090909 | 0.727273 | 0.272727 | 0.363636 | 0.181818 | 0.090909 | 11 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.272727 | 2.209598 | 0.06313 |
A comparison of second-order methods for deep convolutional neural networks | Despite many second-order methods have been proposed to train neural networks, most of the results were done on smaller single layer fully connected networks, so we still cannot conclude whether it*s useful in training deep convolutional networks. In this study, we conduct extensive experiments to answer the question "whether second-order method is useful for deep learning?". In our analysis, we find out although currently second-order methods are too slow to be applied in practice, it can reduce training loss in fewer number of iterations compared with SGD. In addition, we have the following interesting findings: (1) When using a large batch size, inexact-Newton methods will converge much faster than SGD. Therefore inexact-Newton method could be a better choice in distributed training of deep networks. (2) Quasi-newton methods are competitive with SGD even when using ReLu activation function (which has no curvature) on residual networks. However, current methods are too sensitive to parameters and not easy to tune for different settings. Therefore, quasi-newton methods with more self-adjusting mechanisms might be more useful than SGD in training deeper networks. | This paper presents a comparative study on second-order optimization methods for CNNs. Overall, the topic is interesting and would be useful for the community. However, I think there are important issues about the paper: 1) The paper is not very well-written. The language is sometimes very informal, there are many grammatical mistakes and typos. The paper should be carefully proofread. 2) For such a comparative study, the number of algorithms and the number of datasets are quite little. The authors do not mention several important methods such as (not exhaustive) Schraudolph, N. N., Yu, J., and Günter, S. A stochastic quasi-Newton method for online convex optimization. Gurbuzbalaban et al, A globally convergent incremental Newton method (and other papers of the same authors) A Linearly-Convergent Stochastic L-BFGS Algorithm, Moritz et al 3) The experiment details are not provided. It is not clear what parameters are used and how. 4) There are some vague statements such as "this scheme does not work" or "fixed learning rates are not applicable". For instance, for the latter I cannot see a reason and the paper does not provide any convincing results. Even though the paper attempts to address an important point in deep learning, I do not believe that the presented results form evidence for such rather bold statements. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nA comparison of second-order methods for deep convolutional neural networks\nDespite many second-order methods have been proposed to train neural networks, most of the results were done on smaller single layer fully connected networks, so we still cannot conclude whether it*s useful in training deep convolutional networks. In this study, we conduct extensive experiments to answer the question \"whether second-order method is useful for deep learning?\". In our analysis, we find out although currently second-order methods are too slow to be applied in practice, it can reduce training loss in fewer number of iterations compared with SGD. In addition, we have the following interesting findings: (1) When using a large batch size, inexact-Newton methods will converge much faster than SGD. Therefore inexact-Newton method could be a better choice in distributed training of deep networks. (2) Quasi-newton methods are competitive with SGD even when using ReLu activation function (which has no curvature) on residual networks. However, current methods are too sensitive to parameters and not easy to tune for different settings. Therefore, quasi-newton methods with more self-adjusting mechanisms might be more useful than SGD in training deeper networks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper presents a comparative study on second-order optimization methods for CNNs. Overall, the topic is interesting and would be useful for the community. However, I think there are important issues about the paper: 1) The paper is not very well-written. The language is sometimes very informal, there are many grammatical mistakes and typos. The paper should be carefully proofread. 2) For such a comparative study, the number of algorithms and the number of datasets are quite little. The authors do not mention several important methods such as (not exhaustive) Schraudolph, N. N., Yu, J., and Günter, S. A stochastic quasi-Newton method for online convex optimization. Gurbuzbalaban et al, A globally convergent incremental Newton method (and other papers of the same authors) A Linearly-Convergent Stochastic L-BFGS Algorithm, Moritz et al 3) The experiment details are not provided. It is not clear what parameters are used and how. 4) There are some vague statements such as \"this scheme does not work\" or \"fixed learning rates are not applicable\". For instance, for the latter I cannot see a reason and the paper does not provide any convincing results. Even though the paper attempts to address an important point in deep learning, I do not believe that the presented results form evidence for such rather bold statements.",
"role": "assistant"
}
]
| [
" This paper presents a comparative study on second-order optimization methods for CNNs.",
"Overall, the topic is interesting and would be useful for the community.",
"However, I think there are important issues about the paper: 1) The paper is not very well-written.",
"The language is sometimes very informal, there are many grammatical mistakes and typos.",
"The paper should be carefully proofread.",
"2) For such a comparative study, the number of algorithms and the number of datasets are quite little.",
"The authors do not mention several important methods such as (not exhaustive) Schraudolph, N. N., Yu, J., and Günter, S. A stochastic quasi-Newton method for online convex optimization.",
"Gurbuzbalaban et al, A globally convergent incremental Newton method (and other papers of the same authors) A Linearly-Convergent Stochastic L-BFGS Algorithm, Moritz et al 3) The experiment details are not provided.",
"It is not clear what parameters are used and how.",
"4) There are some vague statements such as \"this scheme does not work\" or \"fixed learning rates are not applicable\".",
"For instance, for the latter I cannot see a reason and the paper does not provide any convincing results.",
"Even though the paper attempts to address an important point in deep learning, I do not believe that the presented results form evidence for such rather bold statements."
] | {
"criticism": 9,
"example": 1,
"importance_and_relevance": 3,
"materials_and_methods": 5,
"praise": 2,
"presentation_and_reporting": 4,
"results_and_discussion": 2,
"suggestion_and_solution": 2,
"total": 12
} | 0.75 | 0.083333 | 0.25 | 0.416667 | 0.166667 | 0.333333 | 0.166667 | 0.166667 | 12 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.333333 | 2.317551 | 0.015782 |
Heterogeneous Bitwidth Binarization in Convolutional Neural Networks | Recent work has shown that performing inference with fast, very-low-bitwidth (e.g., 1 to 2 bits) representations of values in models can yield surprisingly accurate results. However, although 2-bit approximated networks have been shown to be quite accurate, 1 bit approximations, which are twice as fast, have restrictively low accuracy. We propose a method to train models whose weights are a mixture of bitwidths, that allows us to more finely tune the accuracy/speed trade-off. We present the “middle-out” criterion for determining the bitwidth for each value, and show how to integrate it into training models with a desired mixture of bitwidths. We evaluate several architectures and binarization techniques on the ImageNet dataset. We show that our heterogeneous bitwidth approximation achieves superlinear scaling of accuracy with bitwidth. Using an average of only 1.4 bits, we are able to outperform state-of-the-art 2-bit architectures. | This paper presents an extension of binary networks, and the main idea is to use different bit rates for different layers so we can further reduce bitrate of the overall net, and achieve better performance (speed / memory). The paper addresses a real problem which is meaningful, and provides interesting insights, but it is more of an extension. The description of the Heterogeneous Bitwidth Binarization algorithm is interesting and simple, and potentially can be practical, However it also adds more complication to real world implementations, and might not be an elegant enough approach for practical usages. Experiments wise, the paper has done solid experiments comparing with existing approaches and showed the gain. Results are promising. Overall, I am leaning towards a rejection mostly due to limited novelty. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nHeterogeneous Bitwidth Binarization in Convolutional Neural Networks\nRecent work has shown that performing inference with fast, very-low-bitwidth (e.g., 1 to 2 bits) representations of values in models can yield surprisingly accurate results. However, although 2-bit approximated networks have been shown to be quite accurate, 1 bit approximations, which are twice as fast, have restrictively low accuracy. We propose a method to train models whose weights are a mixture of bitwidths, that allows us to more finely tune the accuracy/speed trade-off. We present the “middle-out” criterion for determining the bitwidth for each value, and show how to integrate it into training models with a desired mixture of bitwidths. We evaluate several architectures and binarization techniques on the ImageNet dataset. We show that our heterogeneous bitwidth approximation achieves superlinear scaling of accuracy with bitwidth. Using an average of only 1.4 bits, we are able to outperform state-of-the-art 2-bit architectures.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper presents an extension of binary networks, and the main idea is to use different bit rates for different layers so we can further reduce bitrate of the overall net, and achieve better performance (speed / memory). The paper addresses a real problem which is meaningful, and provides interesting insights, but it is more of an extension. The description of the Heterogeneous Bitwidth Binarization algorithm is interesting and simple, and potentially can be practical, However it also adds more complication to real world implementations, and might not be an elegant enough approach for practical usages. Experiments wise, the paper has done solid experiments comparing with existing approaches and showed the gain. Results are promising. Overall, I am leaning towards a rejection mostly due to limited novelty.",
"role": "assistant"
}
]
| [
" This paper presents an extension of binary networks, and the main idea is to use different bit rates for different layers so we can further reduce bitrate of the overall net, and achieve better performance (speed / memory).",
"The paper addresses a real problem which is meaningful, and provides interesting insights, but it is more of an extension.",
"The description of the Heterogeneous Bitwidth Binarization algorithm is interesting and simple, and potentially can be practical, However it also adds more complication to real world implementations, and might not be an elegant enough approach for practical usages.",
"Experiments wise, the paper has done solid experiments comparing with existing approaches and showed the gain.",
"Results are promising.",
"Overall, I am leaning towards a rejection mostly due to limited novelty."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 3,
"praise": 4,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 0,
"total": 6
} | 0.333333 | 0 | 0.333333 | 0.5 | 0.666667 | 0.166667 | 0.166667 | 0 | 6 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 2.166667 | 1.393328 | 0.773339 |
Finding ReMO (Related Memory Object): A Simple neural architecture for Text based Reasoning | Memory Network based models have shown a remarkable progress on the task of relational reasoning. Recently, a simpler yet powerful neural network module called Relation Network (RN) has been introduced. Despite its architectural simplicity, the time complexity of relation network grows quadratically with data, hence limiting its application to tasks with a large-scaled memory. We introduce Related Memory Network, an end-to-end neural network architecture exploiting both memory network and relation network structures. We follow memory network*s four components while each component operates similar to the relation network without taking a pair of objects. As a result, our model is as simple as RN but the computational complexity is reduced to linear time. It achieves the state-of-the-art results in jointly trained bAbI-10k story-based question answering and bAbI dialog dataset. | This paper proposes an alternative to the relation network architecture whose computational complexity is linear in the number of objects present in the input. The model achieves good results on bAbI compared to memory networks and the relation network model. From what I understood, it works by computing a weighted average of sentence representations in the input story where the attention weights are the output of an MLP whose input is just a sentence and question (not two sentences and a question). This average is then fed to a softmax layer for answer prediction. I found it difficult to understand how the model is related to relation networks, since it no longer scores every combination of objects (or, in the case of bAbI, sentences), which is the fundamental idea behind relation networks. Why is the approach not evaluated on CLEVR, in which the interaction between two objects is perhaps more critical (and was the main result of the original relation networks paper)? The fact that the model works well on bAbI despite its simplicity is interesting, but it feels like the paper is framed to suggest that object-object interactions are not necessary to explicitly model, which I can*t agree with based solely on bAbI experiments. I*d encourage the authors to do a more detailed experimental study with more tasks, but I can*t recommend this paper*s acceptance in its current form. other questions / comments: - "we use MLP to produce the attention weight without any extrinsic computation between the input sentence and the question." isn*t this statement false because the attention computation takes as input the concatenation of the question and sentence representation? - writing could be cleaned up for spelling / grammar (e.g., "last 70 stories" instead of "last 70 sentences"), currently the paper is very hard to read and it took me a while to understand the model | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. 
Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nFinding ReMO (Related Memory Object): A Simple neural architecture for Text based Reasoning\nMemory Network based models have shown a remarkable progress on the task of relational reasoning. Recently, a simpler yet powerful neural network module called Relation Network (RN) has been introduced. Despite its architectural simplicity, the time complexity of relation network grows quadratically with data, hence limiting its application to tasks with a large-scaled memory. We introduce Related Memory Network, an end-to-end neural network architecture exploiting both memory network and relation network structures. We follow memory network*s four components while each component operates similar to the relation network without taking a pair of objects. As a result, our model is as simple as RN but the computational complexity is reduced to linear time. It achieves the state-of-the-art results in jointly trained bAbI-10k story-based question answering and bAbI dialog dataset.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes an alternative to the relation network architecture whose computational complexity is linear in the number of objects present in the input. The model achieves good results on bAbI compared to memory networks and the relation network model. From what I understood, it works by computing a weighted average of sentence representations in the input story where the attention weights are the output of an MLP whose input is just a sentence and question (not two sentences and a question). This average is then fed to a softmax layer for answer prediction. I found it difficult to understand how the model is related to relation networks, since it no longer scores every combination of objects (or, in the case of bAbI, sentences), which is the fundamental idea behind relation networks. Why is the approach not evaluated on CLEVR, in which the interaction between two objects is perhaps more critical (and was the main result of the original relation networks paper)? The fact that the model works well on bAbI despite its simplicity is interesting, but it feels like the paper is framed to suggest that object-object interactions are not necessary to explicitly model, which I can*t agree with based solely on bAbI experiments. I*d encourage the authors to do a more detailed experimental study with more tasks, but I can*t recommend this paper*s acceptance in its current form. other questions / comments: - \"we use MLP to produce the attention weight without any extrinsic computation between the input sentence and the question.\" isn*t this statement false because the attention computation takes as input the concatenation of the question and sentence representation? - writing could be cleaned up for spelling / grammar (e.g., \"last 70 stories\" instead of \"last 70 sentences\"), currently the paper is very hard to read and it took me a while to understand the model",
"role": "assistant"
}
]
| [
" This paper proposes an alternative to the relation network architecture whose computational complexity is linear in the number of objects present in the input.",
"The model achieves good results on bAbI compared to memory networks and the relation network model.",
"From what I understood, it works by computing a weighted average of sentence representations in the input story where the attention weights are the output of an MLP whose input is just a sentence and question (not two sentences and a question).",
"This average is then fed to a softmax layer for answer prediction.",
"I found it difficult to understand how the model is related to relation networks, since it no longer scores every combination of objects (or, in the case of bAbI, sentences), which is the fundamental idea behind relation networks.",
"Why is the approach not evaluated on CLEVR, in which the interaction between two objects is perhaps more critical (and was the main result of the original relation networks paper)?",
"The fact that the model works well on bAbI despite its simplicity is interesting, but it feels like the paper is framed to suggest that object-object interactions are not necessary to explicitly model, which I can*t agree with based solely on bAbI experiments.",
"I*d encourage the authors to do a more detailed experimental study with more tasks, but I can*t recommend this paper*s acceptance in its current form.",
"other questions / comments: - \"we use MLP to produce the attention weight without any extrinsic computation between the input sentence and the question.\"",
"isn*t this statement false because the attention computation takes as input the concatenation of the question and sentence representation?",
"- writing could be cleaned up for spelling / grammar (e.g., \"last 70 stories\" instead of \"last 70 sentences\"), currently the paper is very hard to read and it took me a while to understand the model"
] | {
"criticism": 4,
"example": 2,
"importance_and_relevance": 2,
"materials_and_methods": 9,
"praise": 2,
"presentation_and_reporting": 4,
"results_and_discussion": 3,
"suggestion_and_solution": 1,
"total": 11
} | 0.363636 | 0.181818 | 0.181818 | 0.818182 | 0.181818 | 0.363636 | 0.272727 | 0.090909 | 11 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.454545 | 2.391416 | 0.06313 |
Classifier-to-Generator Attack: Estimation of Training Data Distribution from Classifier | Suppose a deep classification model is trained with samples that need to be kept private for privacy or confidentiality reasons. In this setting, can an adversary obtain the private samples if the classification model is given to the adversary? We call this reverse engineering against the classification model the Classifier-to-Generator (C2G) Attack. This situation arises when the classification model is embedded into mobile devices for offline prediction (e.g., object recognition for the automatic driving car and face recognition for mobile phone authentication). For C2G attack, we introduce a novel GAN, PreImageGAN. In PreImageGAN, the generator is designed to estimate the the sample distribution conditioned by the preimage of classification model , , where is the random variable on the sample space and is the probability vector representing the target label arbitrary specified by the adversary. In experiments, we demonstrate PreImageGAN works successfully with hand-written character recognition and face recognition. In character recognition, we show that, given a recognition model of hand-written digits, PreImageGAN allows the adversary to extract alphabet letter images without knowing that the model is built for alphabet letter images. In face recognition, we show that, when an adversary obtains a face recognition model for a set of individuals, PreImageGAN allows the adversary to extract face images of specific individuals contained in the set, even when the adversary has no knowledge of the face of the individuals. | The paper proposes the use of a GAN to learn the distribution of image classes from an existing classifier, that is a nice and straightforward idea. From the point of view of forensic analysis of a classifier, it supposes a more principled strategy than a brute force attack based on the classification of a database and some conditional density estimation of some intermediate image features. Unfortunately, the experiments are inconclusive. Quality: The key question of the proposed scheme is the role of the auxiliary dataset. In the EMNIST experiment, the results for the “exact same” and “partly same” situations are good, but it seems that for the “mutually exclusive” situation the generated samples look like letters, not numbers, and raises the question on the interpolation ability of the generator. In the FaceScrub experiment is even more difficult to interpret the results, basically because we do not even know the full list of person identities. It seems that generated images contain only parts of the auxiliary images related to the most discriminative features of the given classifier. Does this imply that the GAN models a biased probability distribution of the image class? What is the result when the auxiliary dataset comes from a different kind of images? Due to the difficulty of evaluating GAN results, more experiments are needed to determine the quality and significance of this work. Clarity: The paper is well structured and written, but Sections 1-4 could be significantly shorter to leave more space to additional and more conclusive experiments. Some typos on Appendix A should be corrected. Originality: the paper is based on a very smart and interesting idea and a straightforward use of GANs. Significance: If additional simulations confirm the author’s claims, this work can represent a significant contribution to the forensic analysis of discriminative classifiers. 
| iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nClassifier-to-Generator Attack: Estimation of Training Data Distribution from Classifier\nSuppose a deep classification model is trained with samples that need to be kept private for privacy or confidentiality reasons. In this setting, can an adversary obtain the private samples if the classification model is given to the adversary? We call this reverse engineering against the classification model the Classifier-to-Generator (C2G) Attack. This situation arises when the classification model is embedded into mobile devices for offline prediction (e.g., object recognition for the automatic driving car and face recognition for mobile phone authentication). For C2G attack, we introduce a novel GAN, PreImageGAN. In PreImageGAN, the generator is designed to estimate the the sample distribution conditioned by the preimage of classification model , , where is the random variable on the sample space and is the probability vector representing the target label arbitrary specified by the adversary. In experiments, we demonstrate PreImageGAN works successfully with hand-written character recognition and face recognition. In character recognition, we show that, given a recognition model of hand-written digits, PreImageGAN allows the adversary to extract alphabet letter images without knowing that the model is built for alphabet letter images. In face recognition, we show that, when an adversary obtains a face recognition model for a set of individuals, PreImageGAN allows the adversary to extract face images of specific individuals contained in the set, even when the adversary has no knowledge of the face of the individuals.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper proposes the use of a GAN to learn the distribution of image classes from an existing classifier, that is a nice and straightforward idea. From the point of view of forensic analysis of a classifier, it supposes a more principled strategy than a brute force attack based on the classification of a database and some conditional density estimation of some intermediate image features. Unfortunately, the experiments are inconclusive. Quality: The key question of the proposed scheme is the role of the auxiliary dataset. In the EMNIST experiment, the results for the “exact same” and “partly same” situations are good, but it seems that for the “mutually exclusive” situation the generated samples look like letters, not numbers, and raises the question on the interpolation ability of the generator. In the FaceScrub experiment is even more difficult to interpret the results, basically because we do not even know the full list of person identities. It seems that generated images contain only parts of the auxiliary images related to the most discriminative features of the given classifier. Does this imply that the GAN models a biased probability distribution of the image class? What is the result when the auxiliary dataset comes from a different kind of images? Due to the difficulty of evaluating GAN results, more experiments are needed to determine the quality and significance of this work. Clarity: The paper is well structured and written, but Sections 1-4 could be significantly shorter to leave more space to additional and more conclusive experiments. Some typos on Appendix A should be corrected. Originality: the paper is based on a very smart and interesting idea and a straightforward use of GANs. Significance: If additional simulations confirm the author’s claims, this work can represent a significant contribution to the forensic analysis of discriminative classifiers.",
"role": "assistant"
}
]
| [
" The paper proposes the use of a GAN to learn the distribution of image classes from an existing classifier, that is a nice and straightforward idea.",
"From the point of view of forensic analysis of a classifier, it supposes a more principled strategy than a brute force attack based on the classification of a database and some conditional density estimation of some intermediate image features.",
"Unfortunately, the experiments are inconclusive.",
"Quality: The key question of the proposed scheme is the role of the auxiliary dataset.",
"In the EMNIST experiment, the results for the “exact same” and “partly same” situations are good, but it seems that for the “mutually exclusive” situation the generated samples look like letters, not numbers, and raises the question on the interpolation ability of the generator.",
"In the FaceScrub experiment is even more difficult to interpret the results, basically because we do not even know the full list of person identities.",
"It seems that generated images contain only parts of the auxiliary images related to the most discriminative features of the given classifier.",
"Does this imply that the GAN models a biased probability distribution of the image class?",
"What is the result when the auxiliary dataset comes from a different kind of images?",
"Due to the difficulty of evaluating GAN results, more experiments are needed to determine the quality and significance of this work.",
"Clarity: The paper is well structured and written, but Sections 1-4 could be significantly shorter to leave more space to additional and more conclusive experiments.",
"Some typos on Appendix A should be corrected.",
"Originality: the paper is based on a very smart and interesting idea and a straightforward use of GANs.",
"Significance: If additional simulations confirm the author’s claims, this work can represent a significant contribution to the forensic analysis of discriminative classifiers."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 11,
"praise": 5,
"presentation_and_reporting": 6,
"results_and_discussion": 6,
"suggestion_and_solution": 3,
"total": 14
} | 0.142857 | 0 | 0.214286 | 0.785714 | 0.357143 | 0.428571 | 0.428571 | 0.214286 | 14 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.571429 | 2.555646 | 0.015782 |
A closer look at the word analogy problem | Although word analogy problems have become a standard tool for evaluating word vectors, little is known about why word vectors are so good at solving these problems. In this paper, I attempt to further our understanding of the subject, by developing a simple, but highly accurate generative approach to solve the word analogy problem for the case when all terms involved in the problem are nouns. My results demonstrate the ambiguities associated with learning the relationship between a word pair, and the role of the training dataset in determining the relationship which gets most highlighted. Furthermore, my results show that the ability of a model to accurately solve the word analogy problem may not be indicative of a model’s ability to learn the relationship between a word pair the way a human does. | This paper proposes a method to solve the *word analogy problem*, which was proposed as a way of understanding and evaluating word embeddings by Mikolov et al. There are some nice analyses in the paper which, if better organised, could lead to an improved understanding of semantic word spaces in neural nets. comments: The word analogy task was developed as an interesting way to analyse and understand word embedding spaces, but motivation for learning word embeddings was as general-purpose representations for language processing tasks (as in collobert et al, 2011), not as a way of resolving analogy questions. The authors develop a specialist method for resolving analogies, and it works (mostly) better than using the additive geometry of word-embedding spaces. But I don*t think that comparison is *fair* - the analogy thing is just a side-effect of word embedding spaces. Given that the authors focus on the word-analogy problem as an end in itself, I think there should be much more justification of why this is a useful problem to solve. Analogy seems to be fundamental to human cognition and reasoning, so maybe that is part of the reason, but it*s not clear to the reader. The algorithm seems to be simple and intuitive, but the presentation is overly formal and unclear. It would be much easier for the reader to simply put into plain terms what the algorithm does. Using a POS-tagger to strip out nouns is a form of supervision (the pos-tagger was trained on labelled data) that word-embedding methods do not use, which should at least be acknowledged when making a comparison. Similarly, it is nice that the present method works on less data, but the beauty of word embeddings is that they can be trained on any text - i.e. data is not a problem, and *work* for any word type. Stripping away everything but nouns clearly allows co-occurrence semantic patterns to emerge from less data, but at the cost of the supervision mentioned above. Moreover, I suspect that the use of wikipedia is important for the proposed algorithm, as the pertinent relations are often explicit in the first sentence of articles "Paris the largest city and capital of France...". Would the same method work on any text? I would expect this question to be explored, even if the answer is negative. The goal of understanding word2vec and embedding spaces in general (section 5) is a really important one (as it can tell us a lot about how language and meaning is encoded in deep learning models in general), and I think that*s one of the strongest aspects of this work. However, the conclusions from this section (and other related conclusions in other sections) are a little unclear to me. 
Perhaps that is because I don*t quite get algorithm 3, which would be mitigated by an intuitive explanation to complement the pseudocode. I*m also confused by the assertion that Vec(A) - Vec(B) conveys the *common information* in A and B. How can a non-symmetric operation convey *common information*. Surely it conveys something about the relationship between A and B? Minor point: "may not the be indicative of the model*s ability to learn the relationship between a word pair the way a human does" (Abstract) - I*m not sure we know how humans learn the relationships between word pairs. Are you referring to formal semantic relations i.e. in taxonomies in WordNet? This sentence seems dangerous, and the claim about humans is not really treated in the article itself. The a+cknowledgements compromise the anonymity of the authors. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nA closer look at the word analogy problem\nAlthough word analogy problems have become a standard tool for evaluating word vectors, little is known about why word vectors are so good at solving these problems. In this paper, I attempt to further our understanding of the subject, by developing a simple, but highly accurate generative approach to solve the word analogy problem for the case when all terms involved in the problem are nouns. My results demonstrate the ambiguities associated with learning the relationship between a word pair, and the role of the training dataset in determining the relationship which gets most highlighted. Furthermore, my results show that the ability of a model to accurately solve the word analogy problem may not be indicative of a model’s ability to learn the relationship between a word pair the way a human does.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes a method to solve the *word analogy problem*, which was proposed as a way of understanding and evaluating word embeddings by Mikolov et al. There are some nice analyses in the paper which, if better organised, could lead to an improved understanding of semantic word spaces in neural nets. comments: The word analogy task was developed as an interesting way to analyse and understand word embedding spaces, but motivation for learning word embeddings was as general-purpose representations for language processing tasks (as in collobert et al, 2011), not as a way of resolving analogy questions. The authors develop a specialist method for resolving analogies, and it works (mostly) better than using the additive geometry of word-embedding spaces. But I don*t think that comparison is *fair* - the analogy thing is just a side-effect of word embedding spaces. Given that the authors focus on the word-analogy problem as an end in itself, I think there should be much more justification of why this is a useful problem to solve. Analogy seems to be fundamental to human cognition and reasoning, so maybe that is part of the reason, but it*s not clear to the reader. The algorithm seems to be simple and intuitive, but the presentation is overly formal and unclear. It would be much easier for the reader to simply put into plain terms what the algorithm does. Using a POS-tagger to strip out nouns is a form of supervision (the pos-tagger was trained on labelled data) that word-embedding methods do not use, which should at least be acknowledged when making a comparison. Similarly, it is nice that the present method works on less data, but the beauty of word embeddings is that they can be trained on any text - i.e. data is not a problem, and *work* for any word type. Stripping away everything but nouns clearly allows co-occurrence semantic patterns to emerge from less data, but at the cost of the supervision mentioned above. 
Moreover, I suspect that the use of wikipedia is important for the proposed algorithm, as the pertinent relations are often explicit in the first sentence of articles \"Paris the largest city and capital of France...\". Would the same method work on any text? I would expect this question to be explored, even if the answer is negative. The goal of understanding word2vec and embedding spaces in general (section 5) is a really important one (as it can tell us a lot about how language and meaning is encoded in deep learning models in general), and I think that*s one of the strongest aspects of this work. However, the conclusions from this section (and other related conclusions in other sections) are a little unclear to me. Perhaps that is because I don*t quite get algorithm 3, which would be mitigated by an intuitive explanation to complement the pseudocode. I*m also confused by the assertion that Vec(A) - Vec(B) conveys the *common information* in A and B. How can a non-symmetric operation convey *common information*. Surely it conveys something about the relationship between A and B? Minor point: \"may not the be indicative of the model*s ability to learn the relationship between a word pair the way a human does\" (Abstract) - I*m not sure we know how humans learn the relationships between word pairs. Are you referring to formal semantic relations i.e. in taxonomies in WordNet? This sentence seems dangerous, and the claim about humans is not really treated in the article itself. The a+cknowledgements compromise the anonymity of the authors.",
"role": "assistant"
}
]
| [
" This paper proposes a method to solve the *word analogy problem*, which was proposed as a way of understanding and evaluating word embeddings by Mikolov et al.",
"There are some nice analyses in the paper which, if better organised, could lead to an improved understanding of semantic word spaces in neural nets.",
"comments: The word analogy task was developed as an interesting way to analyse and understand word embedding spaces, but motivation for learning word embeddings was as general-purpose representations for language processing tasks (as in collobert et al, 2011), not as a way of resolving analogy questions.",
"The authors develop a specialist method for resolving analogies, and it works (mostly) better than using the additive geometry of word-embedding spaces.",
"But I don*t think that comparison is *fair* - the analogy thing is just a side-effect of word embedding spaces.",
"Given that the authors focus on the word-analogy problem as an end in itself, I think there should be much more justification of why this is a useful problem to solve.",
"Analogy seems to be fundamental to human cognition and reasoning, so maybe that is part of the reason, but it*s not clear to the reader.",
"The algorithm seems to be simple and intuitive, but the presentation is overly formal and unclear.",
"It would be much easier for the reader to simply put into plain terms what the algorithm does.",
"Using a POS-tagger to strip out nouns is a form of supervision (the pos-tagger was trained on labelled data) that word-embedding methods do not use, which should at least be acknowledged when making a comparison.",
"Similarly, it is nice that the present method works on less data, but the beauty of word embeddings is that they can be trained on any text - i.e.",
"data is not a problem, and *work* for any word type.",
"Stripping away everything but nouns clearly allows co-occurrence semantic patterns to emerge from less data, but at the cost of the supervision mentioned above.",
"Moreover, I suspect that the use of wikipedia is important for the proposed algorithm, as the pertinent relations are often explicit in the first sentence of articles \"Paris the largest city and capital of France...\".",
"Would the same method work on any text?",
"I would expect this question to be explored, even if the answer is negative.",
"The goal of understanding word2vec and embedding spaces in general (section 5) is a really important one (as it can tell us a lot about how language and meaning is encoded in deep learning models in general), and I think that*s one of the strongest aspects of this work.",
"However, the conclusions from this section (and other related conclusions in other sections) are a little unclear to me.",
"Perhaps that is because I don*t quite get algorithm 3, which would be mitigated by an intuitive explanation to complement the pseudocode.",
"I*m also confused by the assertion that Vec(A) - Vec(B) conveys the *common information* in A and B.",
"How can a non-symmetric operation convey *common information*.",
"Surely it conveys something about the relationship between A and B?",
"Minor point: \"may not the be indicative of the model*s ability to learn the relationship between a word pair the way a human does\" (Abstract) - I*m not sure we know how humans learn the relationships between word pairs.",
"Are you referring to formal semantic relations i.e.",
"in taxonomies in WordNet?",
"This sentence seems dangerous, and the claim about humans is not really treated in the article itself.",
"The a+cknowledgements compromise the anonymity of the authors."
] | {
"criticism": 9,
"example": 3,
"importance_and_relevance": 4,
"materials_and_methods": 13,
"praise": 3,
"presentation_and_reporting": 18,
"results_and_discussion": 3,
"suggestion_and_solution": 5,
"total": 27
} | 0.333333 | 0.111111 | 0.148148 | 0.481481 | 0.111111 | 0.666667 | 0.111111 | 0.185185 | 27 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.148148 | -0.945208 | 3.093356 |
A closer look at the word analogy problem | Although word analogy problems have become a standard tool for evaluating word vectors, little is known about why word vectors are so good at solving these problems. In this paper, I attempt to further our understanding of the subject, by developing a simple, but highly accurate generative approach to solve the word analogy problem for the case when all terms involved in the problem are nouns. My results demonstrate the ambiguities associated with learning the relationship between a word pair, and the role of the training dataset in determining the relationship which gets most highlighted. Furthermore, my results show that the ability of a model to accurately solve the word analogy problem may not be indicative of a model’s ability to learn the relationship between a word pair the way a human does. | This paper presents, and analyzes, a method for learning word relationships based on co-occurrence. In the method, relationships between pairs of words (A, B) are represented by the terms that tend to occur around co-mentions of A and B in text. The paper shows the start of some interesting ideas, but needs revisions and much more extensive experiments. On the plus side, the method proposed here does perform relatively well (Table 1) and probably merits further investigation. The experiments in Table 1 can only be considered preliminary, however. They only evaluate over a small number of relationships (three) -- looking at 20 or so different relationships would greatly improve confidence in the conclusions. Beyond Table 1 the paper makes a number of claims that are not supported or weakly supported (the paper uses only a handful of examples as evidence). An attempt to explain what Word2Vec is doing should be made with careful experiments over many relations and hundreds of examples, whereas this paper presents only a handful of examples for most of its claims. Further, whether the behavior of the proposed algorithm actually reflects what word2vec is doing is left as a significant open question. I appreciate the clarity of Assumption 1 and Proposition 1, but ultimately this formalism is not used and because Assumption 1 about which nouns are "semantically related" to which other nouns attempts to trivialize a complex notion (semantics) and is clearly way too strong -- the paper would be better off without it. Also Assumption 1 does not actually claim what the text says it claims (the text says words outside the window are *not* semantically related, but the assumption does not actually say this) and furthermore is soon discarded and only the frequency of noun occurrences around co-mentions is used. I think the description of the algorithm could be retained without including Assumption 1. minor: References to numbered algorithms or assumptions should be capitalized in the text. what the introduction means about the "dynamics" of the vector equation is a little unclear A submission shouldn*t have acknowledgments, and in particular with names that undermine anonymity MLE has a particular technical meaning that is not utilized here, I would just refer to the most frequent words as "most related nouns" or similar In Table 1, are the "same dataset" results with w2v for the nouns-only corpus, or with all the other words? The argument made assuming a perfect Zipf distribution (with exponent equal to one) should be made with data. 
will likely by observed -> will likely be observed lions:dolphins probably ends up that way because of "sea lions" Table 4 caption: frequencies -> currencies Table 2 -- claim is that improvements from k=10 to k=20 are *nominal* but they look non-negligible to me I did not understand how POS lying in the same subspace means that Vec(D) has to be in the span of Vecs A-C. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nA closer look at the word analogy problem\nAlthough word analogy problems have become a standard tool for evaluating word vectors, little is known about why word vectors are so good at solving these problems. In this paper, I attempt to further our understanding of the subject, by developing a simple, but highly accurate generative approach to solve the word analogy problem for the case when all terms involved in the problem are nouns. My results demonstrate the ambiguities associated with learning the relationship between a word pair, and the role of the training dataset in determining the relationship which gets most highlighted. Furthermore, my results show that the ability of a model to accurately solve the word analogy problem may not be indicative of a model’s ability to learn the relationship between a word pair the way a human does.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper presents, and analyzes, a method for learning word relationships based on co-occurrence. In the method, relationships between pairs of words (A, B) are represented by the terms that tend to occur around co-mentions of A and B in text. The paper shows the start of some interesting ideas, but needs revisions and much more extensive experiments. On the plus side, the method proposed here does perform relatively well (Table 1) and probably merits further investigation. The experiments in Table 1 can only be considered preliminary, however. They only evaluate over a small number of relationships (three) -- looking at 20 or so different relationships would greatly improve confidence in the conclusions. Beyond Table 1 the paper makes a number of claims that are not supported or weakly supported (the paper uses only a handful of examples as evidence). An attempt to explain what Word2Vec is doing should be made with careful experiments over many relations and hundreds of examples, whereas this paper presents only a handful of examples for most of its claims. Further, whether the behavior of the proposed algorithm actually reflects what word2vec is doing is left as a significant open question. I appreciate the clarity of Assumption 1 and Proposition 1, but ultimately this formalism is not used and because Assumption 1 about which nouns are \"semantically related\" to which other nouns attempts to trivialize a complex notion (semantics) and is clearly way too strong -- the paper would be better off without it. Also Assumption 1 does not actually claim what the text says it claims (the text says words outside the window are *not* semantically related, but the assumption does not actually say this) and furthermore is soon discarded and only the frequency of noun occurrences around co-mentions is used. I think the description of the algorithm could be retained without including Assumption 1. minor: References to numbered algorithms or assumptions should be capitalized in the text. 
what the introduction means about the \"dynamics\" of the vector equation is a little unclear A submission shouldn*t have acknowledgments, and in particular with names that undermine anonymity MLE has a particular technical meaning that is not utilized here, I would just refer to the most frequent words as \"most related nouns\" or similar In Table 1, are the \"same dataset\" results with w2v for the nouns-only corpus, or with all the other words? The argument made assuming a perfect Zipf distribution (with exponent equal to one) should be made with data. will likely by observed -> will likely be observed lions:dolphins probably ends up that way because of \"sea lions\" Table 4 caption: frequencies -> currencies Table 2 -- claim is that improvements from k=10 to k=20 are *nominal* but they look non-negligible to me I did not understand how POS lying in the same subspace means that Vec(D) has to be in the span of Vecs A-C.",
"role": "assistant"
}
]
| [
" This paper presents, and analyzes, a method for learning word relationships based on co-occurrence.",
"In the method, relationships between pairs of words (A, B) are represented by the terms that tend to occur around co-mentions of A and B in text.",
"The paper shows the start of some interesting ideas, but needs revisions and much more extensive experiments.",
"On the plus side, the method proposed here does perform relatively well (Table 1) and probably merits further investigation.",
"The experiments in Table 1 can only be considered preliminary, however.",
"They only evaluate over a small number of relationships (three) -- looking at 20 or so different relationships would greatly improve confidence in the conclusions.",
"Beyond Table 1 the paper makes a number of claims that are not supported or weakly supported (the paper uses only a handful of examples as evidence).",
"An attempt to explain what Word2Vec is doing should be made with careful experiments over many relations and hundreds of examples, whereas this paper presents only a handful of examples for most of its claims.",
"Further, whether the behavior of the proposed algorithm actually reflects what word2vec is doing is left as a significant open question.",
"I appreciate the clarity of Assumption 1 and Proposition 1, but ultimately this formalism is not used and because Assumption 1 about which nouns are \"semantically related\" to which other nouns attempts to trivialize a complex notion (semantics) and is clearly way too strong -- the paper would be better off without it.",
"Also Assumption 1 does not actually claim what the text says it claims (the text says words outside the window are *not* semantically related, but the assumption does not actually say this) and furthermore is soon discarded and only the frequency of noun occurrences around co-mentions is used.",
"I think the description of the algorithm could be retained without including Assumption 1. minor: References to numbered algorithms or assumptions should be capitalized in the text.",
"what the introduction means about the \"dynamics\" of the vector equation is a little unclear A submission shouldn*t have acknowledgments, and in particular with names that undermine anonymity MLE has a particular technical meaning that is not utilized here, I would just refer to the most frequent words as \"most related nouns\" or similar In Table 1, are the \"same dataset\" results with w2v for the nouns-only corpus, or with all the other words?",
"The argument made assuming a perfect Zipf distribution (with exponent equal to one) should be made with data.",
"will likely by observed -> will likely be observed lions:dolphins probably ends up that way because of \"sea lions\" Table 4 caption: frequencies -> currencies Table 2 -- claim is that improvements from k=10 to k=20 are *nominal* but they look non-negligible to me I did not understand how POS lying in the same subspace means that Vec(D) has to be in the span of Vecs A-C."
] | {
"criticism": 4,
"example": 3,
"importance_and_relevance": 1,
"materials_and_methods": 11,
"praise": 3,
"presentation_and_reporting": 8,
"results_and_discussion": 2,
"suggestion_and_solution": 8,
"total": 15
} | 0.266667 | 0.2 | 0.066667 | 0.733333 | 0.2 | 0.533333 | 0.133333 | 0.533333 | 15 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.666667 | 2.603537 | 0.06313 |
Autoregressive Generative Adversarial Networks | Generative Adversarial Networks (GANs) learn a generative model by playing an adversarial game between a generator and an auxiliary discriminator, which classifies data samples vs. generated ones. However, it does not explicitly model feature co-occurrences in samples. In this paper, we propose a novel Autoregressive Generative Adversarial Network (ARGAN), that models the latent distribution of data using an autoregressive model, rather than relying on binary classification of samples into data/generated categories. In this way, feature co-occurrences in samples can be more efficiently captured. Our model was evaluated on two widely used datasets: CIFAR-10 and STL-10. Its performance is competitive with respect to other GAN models both quantitatively and qualitatively. | This paper proposes an alternative GAN formulation that replaces the standard binary classification task in the discriminator with a autoregressive model that attempts to capture discriminative feature dependencies on the true data samples. Summary assessment: The paper presents a novel perspective on GANs and an interesting conjecture regarding the failure of GANs to capture global consistency. However the experiments do not directly support this conjecture. In addition, both qualitative and quantitative results to not provide significant evidence of the value of this technique over and above the establish methods in the literature. The central motivation of the method proposed in the paper, is a conjecture that the lack of global consistency in GAN-generated samples is due to the binary classification formulation of the discriminator. While this is an interesting conjecture, I am somewhat unconvinced that this is indeed the cause of the problem. First, I would argue that other high-performing auto-regressive models such as PixelRNN and PixelCNN also seem to lack global consistency. This observation would seem to violate this conjecture. More importantly, the paper does not show any direct empirical evidence in support of this conjecture. The authors make a very interesting observation in their description of the proposed approach. In discussing an initial variant of the model (Eqns. (5) and (6) and text immediately below), the authors state that attempting to maximize the negative log likelihood of the auto-regressive modelling the generated samples results in unstable training. I would like to see more discussion of this point as it could have some bearing on the difficulty of GAN to model sequential data in general. Does the failure occurs because the auto-regressive discriminator is able to "overfit" the generated samples? As a result of the observed failure of the formulation given in Eqns. (5) and (6), the authors propose an alternative formulation that explicitly removes the negative likelihood maximization for generated samples. As a result the only objective for the auto-regressive model is an attempt to maximize the log-likelihood of the true data. The authors suggest that this should be sufficient to provide a reliable training signal for the generator. It would be useful if the authors showed a representation of these features (perhaps via T-SNE) for both true data and generated samples. Empirical results: The authors experiments show samples (qualitative comparison) and inception scores (quantitative comparison) for 3 variants of the proposed model and compare these to methods in the literature. 
The comparisons show the proposed model preforms well, but does not exceed the performance of many of the existing methods in the literature. Also, I fail to observe significantly more global consistency for these samples compared to samples of other SOTA GAN models in the literature. Again, there is no attempt made by the authors to make this direct comparison of global consistency either qualitatively or quantitatively. Minor comment: I did not see where PARGAN was defined. Was this the combination of the Auto-regressive GAN with Patch-GAN? | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAutoregressive Generative Adversarial Networks\nGenerative Adversarial Networks (GANs) learn a generative model by playing an adversarial game between a generator and an auxiliary discriminator, which classifies data samples vs. generated ones. However, it does not explicitly model feature co-occurrences in samples. In this paper, we propose a novel Autoregressive Generative Adversarial Network (ARGAN), that models the latent distribution of data using an autoregressive model, rather than relying on binary classification of samples into data/generated categories. In this way, feature co-occurrences in samples can be more efficiently captured. Our model was evaluated on two widely used datasets: CIFAR-10 and STL-10. Its performance is competitive with respect to other GAN models both quantitatively and qualitatively.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes an alternative GAN formulation that replaces the standard binary classification task in the discriminator with a autoregressive model that attempts to capture discriminative feature dependencies on the true data samples. Summary assessment: The paper presents a novel perspective on GANs and an interesting conjecture regarding the failure of GANs to capture global consistency. However the experiments do not directly support this conjecture. In addition, both qualitative and quantitative results to not provide significant evidence of the value of this technique over and above the establish methods in the literature. The central motivation of the method proposed in the paper, is a conjecture that the lack of global consistency in GAN-generated samples is due to the binary classification formulation of the discriminator. While this is an interesting conjecture, I am somewhat unconvinced that this is indeed the cause of the problem. First, I would argue that other high-performing auto-regressive models such as PixelRNN and PixelCNN also seem to lack global consistency. This observation would seem to violate this conjecture. More importantly, the paper does not show any direct empirical evidence in support of this conjecture. The authors make a very interesting observation in their description of the proposed approach. In discussing an initial variant of the model (Eqns. (5) and (6) and text immediately below), the authors state that attempting to maximize the negative log likelihood of the auto-regressive modelling the generated samples results in unstable training. I would like to see more discussion of this point as it could have some bearing on the difficulty of GAN to model sequential data in general. Does the failure occurs because the auto-regressive discriminator is able to \"overfit\" the generated samples? As a result of the observed failure of the formulation given in Eqns. (5) and (6), the authors propose an alternative formulation that explicitly removes the negative likelihood maximization for generated samples. 
As a result the only objective for the auto-regressive model is an attempt to maximize the log-likelihood of the true data. The authors suggest that this should be sufficient to provide a reliable training signal for the generator. It would be useful if the authors showed a representation of these features (perhaps via T-SNE) for both true data and generated samples. Empirical results: The authors experiments show samples (qualitative comparison) and inception scores (quantitative comparison) for 3 variants of the proposed model and compare these to methods in the literature. The comparisons show the proposed model preforms well, but does not exceed the performance of many of the existing methods in the literature. Also, I fail to observe significantly more global consistency for these samples compared to samples of other SOTA GAN models in the literature. Again, there is no attempt made by the authors to make this direct comparison of global consistency either qualitatively or quantitatively. Minor comment: I did not see where PARGAN was defined. Was this the combination of the Auto-regressive GAN with Patch-GAN?",
"role": "assistant"
}
]
| [
" This paper proposes an alternative GAN formulation that replaces the standard binary classification task in the discriminator with a autoregressive model that attempts to capture discriminative feature dependencies on the true data samples.",
"Summary assessment: The paper presents a novel perspective on GANs and an interesting conjecture regarding the failure of GANs to capture global consistency.",
"However the experiments do not directly support this conjecture.",
"In addition, both qualitative and quantitative results to not provide significant evidence of the value of this technique over and above the establish methods in the literature.",
"The central motivation of the method proposed in the paper, is a conjecture that the lack of global consistency in GAN-generated samples is due to the binary classification formulation of the discriminator.",
"While this is an interesting conjecture, I am somewhat unconvinced that this is indeed the cause of the problem.",
"First, I would argue that other high-performing auto-regressive models such as PixelRNN and PixelCNN also seem to lack global consistency.",
"This observation would seem to violate this conjecture.",
"More importantly, the paper does not show any direct empirical evidence in support of this conjecture.",
"The authors make a very interesting observation in their description of the proposed approach.",
"In discussing an initial variant of the model (Eqns.",
"(5) and (6) and text immediately below), the authors state that attempting to maximize the negative log likelihood of the auto-regressive modelling the generated samples results in unstable training.",
"I would like to see more discussion of this point as it could have some bearing on the difficulty of GAN to model sequential data in general.",
"Does the failure occurs because the auto-regressive discriminator is able to \"overfit\" the generated samples?",
"As a result of the observed failure of the formulation given in Eqns.",
"(5) and (6), the authors propose an alternative formulation that explicitly removes the negative likelihood maximization for generated samples.",
"As a result the only objective for the auto-regressive model is an attempt to maximize the log-likelihood of the true data.",
"The authors suggest that this should be sufficient to provide a reliable training signal for the generator.",
"It would be useful if the authors showed a representation of these features (perhaps via T-SNE) for both true data and generated samples.",
"Empirical results: The authors experiments show samples (qualitative comparison) and inception scores (quantitative comparison) for 3 variants of the proposed model and compare these to methods in the literature.",
"The comparisons show the proposed model preforms well, but does not exceed the performance of many of the existing methods in the literature.",
"Also, I fail to observe significantly more global consistency for these samples compared to samples of other SOTA GAN models in the literature.",
"Again, there is no attempt made by the authors to make this direct comparison of global consistency either qualitatively or quantitatively.",
"Minor comment: I did not see where PARGAN was defined.",
"Was this the combination of the Auto-regressive GAN with Patch-GAN?"
] | {
"criticism": 10,
"example": 0,
"importance_and_relevance": 5,
"materials_and_methods": 18,
"praise": 3,
"presentation_and_reporting": 3,
"results_and_discussion": 13,
"suggestion_and_solution": 3,
"total": 25
} | 0.4 | 0 | 0.2 | 0.72 | 0.12 | 0.12 | 0.52 | 0.12 | 25 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.2 | -0.07267 | 2.27267 |
Autoregressive Generative Adversarial Networks | Generative Adversarial Networks (GANs) learn a generative model by playing an adversarial game between a generator and an auxiliary discriminator, which classifies data samples vs. generated ones. However, it does not explicitly model feature co-occurrences in samples. In this paper, we propose a novel Autoregressive Generative Adversarial Network (ARGAN), that models the latent distribution of data using an autoregressive model, rather than relying on binary classification of samples into data/generated categories. In this way, feature co-occurrences in samples can be more efficiently captured. Our model was evaluated on two widely used datasets: CIFAR-10 and STL-10. Its performance is competitive with respect to other GAN models both quantitatively and qualitatively. | This paper proposes a new GAN model whereby the discriminator (rather than being a binary classifier) consists of an encoding network followed by an autoregressive model on the encoded features. The discriminator is trained to maximize the probability of the true data and minimize the probability of the generated samples. The authors also propose a version that combines this autoregressive discriminator with a patchGAN discriminator. The authors train this model on cifar10 and stl10 and show reasonable generations and inception scores, comparing the latter with existing approaches. Pros: This discriminator architecture is well motivated, intuitive and novel. the samples are good (though not better than existing approaches as far as I can tell). The paper is also well written and easy to read. Cons: As is commonly the case with GAN models, it is difficult to assess the advantage of this approach over exiting techniques. The samples generated form this model look fine, but not better than existing samples. The inception scores are ok, but seem to be outperformed by other models (though this shouldn*t necessarily be taken as a critique of the approach presented here as inception scores are an approximation to what we care about and we should not be trying to tune models for better inception scores). Detailed comments: - In terms of experiments, I think think paper is missing the following: (1) An additional dataset -- cifar and stl10 are very similar, a face dataset for example would be good to see and is commonly used in GAN papers. (2) the authors claim their method is stable, so it would be good to see quantitative results backing this claim, i.e. sweeps over hyper-parameters / encoding/generator architectures with evaluations for different settings. - the idea of having some form of recurrent (either over channels of spatially) processing in the discriminator seems more general that the specific proposal given here. Could the authors say a bit more about what they think the effects of adding recurrence in the discriminator vs optimizing the likelihood of the features under the autoregressive model? Ultimately, the approach is interesting but there is not enough empirical evaluations. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. 
Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAutoregressive Generative Adversarial Networks\nGenerative Adversarial Networks (GANs) learn a generative model by playing an adversarial game between a generator and an auxiliary discriminator, which classifies data samples vs. generated ones. However, it does not explicitly model feature co-occurrences in samples. In this paper, we propose a novel Autoregressive Generative Adversarial Network (ARGAN), that models the latent distribution of data using an autoregressive model, rather than relying on binary classification of samples into data/generated categories. In this way, feature co-occurrences in samples can be more efficiently captured. Our model was evaluated on two widely used datasets: CIFAR-10 and STL-10. Its performance is competitive with respect to other GAN models both quantitatively and qualitatively.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes a new GAN model whereby the discriminator (rather than being a binary classifier) consists of an encoding network followed by an autoregressive model on the encoded features. The discriminator is trained to maximize the probability of the true data and minimize the probability of the generated samples. The authors also propose a version that combines this autoregressive discriminator with a patchGAN discriminator. The authors train this model on cifar10 and stl10 and show reasonable generations and inception scores, comparing the latter with existing approaches. Pros: This discriminator architecture is well motivated, intuitive and novel. the samples are good (though not better than existing approaches as far as I can tell). The paper is also well written and easy to read. Cons: As is commonly the case with GAN models, it is difficult to assess the advantage of this approach over exiting techniques. The samples generated form this model look fine, but not better than existing samples. The inception scores are ok, but seem to be outperformed by other models (though this shouldn*t necessarily be taken as a critique of the approach presented here as inception scores are an approximation to what we care about and we should not be trying to tune models for better inception scores). Detailed comments: - In terms of experiments, I think think paper is missing the following: (1) An additional dataset -- cifar and stl10 are very similar, a face dataset for example would be good to see and is commonly used in GAN papers. (2) the authors claim their method is stable, so it would be good to see quantitative results backing this claim, i.e. sweeps over hyper-parameters / encoding/generator architectures with evaluations for different settings. - the idea of having some form of recurrent (either over channels of spatially) processing in the discriminator seems more general that the specific proposal given here. Could the authors say a bit more about what they think the effects of adding recurrence in the discriminator vs optimizing the likelihood of the features under the autoregressive model? 
Ultimately, the approach is interesting but there is not enough empirical evaluations.",
"role": "assistant"
}
]
| [
" This paper proposes a new GAN model whereby the discriminator (rather than being a binary classifier) consists of an encoding network followed by an autoregressive model on the encoded features.",
"The discriminator is trained to maximize the probability of the true data and minimize the probability of the generated samples.",
"The authors also propose a version that combines this autoregressive discriminator with a patchGAN discriminator.",
"The authors train this model on cifar10 and stl10 and show reasonable generations and inception scores, comparing the latter with existing approaches.",
"Pros: This discriminator architecture is well motivated, intuitive and novel.",
"the samples are good (though not better than existing approaches as far as I can tell).",
"The paper is also well written and easy to read.",
"Cons: As is commonly the case with GAN models, it is difficult to assess the advantage of this approach over exiting techniques.",
"The samples generated form this model look fine, but not better than existing samples.",
"The inception scores are ok, but seem to be outperformed by other models (though this shouldn*t necessarily be taken as a critique of the approach presented here as inception scores are an approximation to what we care about and we should not be trying to tune models for better inception scores).",
"Detailed comments: - In terms of experiments, I think think paper is missing the following: (1) An additional dataset -- cifar and stl10 are very similar, a face dataset for example would be good to see and is commonly used in GAN papers.",
"(2) the authors claim their method is stable, so it would be good to see quantitative results backing this claim, i.e.",
"sweeps over hyper-parameters / encoding/generator architectures with evaluations for different settings.",
"- the idea of having some form of recurrent (either over channels of spatially) processing in the discriminator seems more general that the specific proposal given here.",
"Could the authors say a bit more about what they think the effects of adding recurrence in the discriminator vs optimizing the likelihood of the features under the autoregressive model?",
"Ultimately, the approach is interesting but there is not enough empirical evaluations."
] | {
"criticism": 5,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 14,
"praise": 5,
"presentation_and_reporting": 3,
"results_and_discussion": 4,
"suggestion_and_solution": 3,
"total": 16
} | 0.3125 | 0 | 0.125 | 0.875 | 0.3125 | 0.1875 | 0.25 | 0.1875 | 16 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.25 | 2.107958 | 0.142042 |
Bayesian Uncertainty Estimation for Batch Normalized Deep Networks | Deep neural networks have led to a series of breakthroughs, dramatically improving the state-of-the-art in many domains. The techniques driving these advances, however, lack a formal method to account for model uncertainty. While the Bayesian approach to learning provides a solid theoretical framework to handle uncertainty, inference in Bayesian-inspired deep neural networks is difficult. In this paper, we provide a practical approach to Bayesian learning that relies on a regularization technique found in nearly every modern network, batch normalization. We show that training a deep network using batch normalization is equivalent to approximate inference in Bayesian models, and we demonstrate how this finding allows us to make useful estimates of the model uncertainty. Using our approach, it is possible to make meaningful uncertainty estimates using conventional architectures without modifying the network or the training procedure. Our approach is thoroughly validated in a series of empirical experiments on different tasks and using various measures, showing it to outperform baselines on a majority of datasets with strong statistical significance. | *Summary* The paper proposes using batch normalisation at test time to get the predictive uncertainty. The stochasticity of the prediction comes from different minibatches of training data that were used to normalise the activity/pre-activation values at each layer. This is justified by an argument that using batch norm is doing variational inference, so one should use the approximate posterior provided by batch norm at prediction time. Several experiments show Monte Carlo prediction at test time using batch norm is better than dropout. *Originality and significance* As far as I understand, almost learning algorithms similar to equation 2 can be recast as variational inference under equation 1. However, the critical questions are what is the corresponding prior, what is the approximating density, what are the additional approximations to obtain 2, and whether the approximation is a good approximation for getting closer to the posterior/obtain better prediction. It is not clear to me from the presentation what the q(w) density is -- whether this is explicit (as in vanilla Gaussian VI or MC dropout), or implicit (the stochasticity on the activity h due to batch norm induces an equivalence q on w). From a Bayesian perspective, it is also not satisfying to ignore the regularisation term by an empirical heuristic provided in the batch norm paper [small .lambda] -- what is the rationale of this? Can this be explained by comparing the variational free-energy. The experiments also do not compare to modern variational inference methods using the reparameterisation trick with Gaussian variational approximations (see Blundell et al 2016) or richer variational families (see e.g. Louizos and Welling, 2016, 2017). The VI method included in the PBP paper (Hernandez-Lobato and Adams, 2015) does not use the reparameterisation trick, which has been found to reduce variance and improve over Graves* VI method. *Clarity* The paper is in general well written and easy to understand. *Additional comments* Page 2: Monte Carlo Droput --> Dropout Page 3 related work: (Adams, 2015) should be (Hernandez-Lobato and Adams, 2015) | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. 
I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nBayesian Uncertainty Estimation for Batch Normalized Deep Networks\nDeep neural networks have led to a series of breakthroughs, dramatically improving the state-of-the-art in many domains. The techniques driving these advances, however, lack a formal method to account for model uncertainty. While the Bayesian approach to learning provides a solid theoretical framework to handle uncertainty, inference in Bayesian-inspired deep neural networks is difficult. In this paper, we provide a practical approach to Bayesian learning that relies on a regularization technique found in nearly every modern network, batch normalization. We show that training a deep network using batch normalization is equivalent to approximate inference in Bayesian models, and we demonstrate how this finding allows us to make useful estimates of the model uncertainty. Using our approach, it is possible to make meaningful uncertainty estimates using conventional architectures without modifying the network or the training procedure. Our approach is thoroughly validated in a series of empirical experiments on different tasks and using various measures, showing it to outperform baselines on a majority of datasets with strong statistical significance.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n *Summary* The paper proposes using batch normalisation at test time to get the predictive uncertainty. The stochasticity of the prediction comes from different minibatches of training data that were used to normalise the activity/pre-activation values at each layer. This is justified by an argument that using batch norm is doing variational inference, so one should use the approximate posterior provided by batch norm at prediction time. Several experiments show Monte Carlo prediction at test time using batch norm is better than dropout. *Originality and significance* As far as I understand, almost learning algorithms similar to equation 2 can be recast as variational inference under equation 1. However, the critical questions are what is the corresponding prior, what is the approximating density, what are the additional approximations to obtain 2, and whether the approximation is a good approximation for getting closer to the posterior/obtain better prediction. It is not clear to me from the presentation what the q(w) density is -- whether this is explicit (as in vanilla Gaussian VI or MC dropout), or implicit (the stochasticity on the activity h due to batch norm induces an equivalence q on w). From a Bayesian perspective, it is also not satisfying to ignore the regularisation term by an empirical heuristic provided in the batch norm paper [small .lambda] -- what is the rationale of this? Can this be explained by comparing the variational free-energy. The experiments also do not compare to modern variational inference methods using the reparameterisation trick with Gaussian variational approximations (see Blundell et al 2016) or richer variational families (see e.g. Louizos and Welling, 2016, 2017). The VI method included in the PBP paper (Hernandez-Lobato and Adams, 2015) does not use the reparameterisation trick, which has been found to reduce variance and improve over Graves* VI method. *Clarity* The paper is in general well written and easy to understand. *Additional comments* Page 2: Monte Carlo Droput --> Dropout Page 3 related work: (Adams, 2015) should be (Hernandez-Lobato and Adams, 2015)",
"role": "assistant"
}
]
| [
" *Summary* The paper proposes using batch normalisation at test time to get the predictive uncertainty.",
"The stochasticity of the prediction comes from different minibatches of training data that were used to normalise the activity/pre-activation values at each layer.",
"This is justified by an argument that using batch norm is doing variational inference, so one should use the approximate posterior provided by batch norm at prediction time.",
"Several experiments show Monte Carlo prediction at test time using batch norm is better than dropout.",
"*Originality and significance* As far as I understand, almost learning algorithms similar to equation 2 can be recast as variational inference under equation 1.",
"However, the critical questions are what is the corresponding prior, what is the approximating density, what are the additional approximations to obtain 2, and whether the approximation is a good approximation for getting closer to the posterior/obtain better prediction.",
"It is not clear to me from the presentation what the q(w) density is -- whether this is explicit (as in vanilla Gaussian VI or MC dropout), or implicit (the stochasticity on the activity h due to batch norm induces an equivalence q on w).",
"From a Bayesian perspective, it is also not satisfying to ignore the regularisation term by an empirical heuristic provided in the batch norm paper [small .lambda] -- what is the rationale of this?",
"Can this be explained by comparing the variational free-energy.",
"The experiments also do not compare to modern variational inference methods using the reparameterisation trick with Gaussian variational approximations (see Blundell et al 2016) or richer variational families (see e.g.",
"Louizos and Welling, 2016, 2017).",
"The VI method included in the PBP paper (Hernandez-Lobato and Adams, 2015) does not use the reparameterisation trick, which has been found to reduce variance and improve over Graves* VI method.",
"*Clarity* The paper is in general well written and easy to understand.",
"*Additional comments* Page 2: Monte Carlo Droput --> Dropout Page 3 related work: (Adams, 2015) should be (Hernandez-Lobato and Adams, 2015)"
] | {
"criticism": 4,
"example": 1,
"importance_and_relevance": 0,
"materials_and_methods": 11,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 7,
"suggestion_and_solution": 5,
"total": 14
} | 0.285714 | 0.071429 | 0 | 0.785714 | 0.071429 | 0.142857 | 0.5 | 0.357143 | 14 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 2.214286 | 2.198503 | 0.015782 |
Spontaneous Symmetry Breaking in Deep Neural Networks | We propose a framework to understand the unprecedented performance and robustness of deep neural networks using field theory. Correlations between the weights within the same layer can be described by symmetries in that layer, and networks generalize better if such symmetries are broken to reduce the redundancies of the weights. Using a two parameter field theory, we find that the network can break such symmetries itself towards the end of training in a process commonly known in physics as spontaneous symmetry breaking. This corresponds to a network generalizing itself without any user input layers to break the symmetry, but by communication with adjacent layers. In the layer decoupling limit applicable to residual networks (He et al., 2015), we show that the remnant symmetries that survive the non-linear layers are spontaneously broken based on empirical results. The Lagrangian for the non-linear and weight layers together has striking similarities with the one in quantum field theory of a scalar. Using results from quantum field theory we show that our framework is able to explain many experimentally observed phenomena, such as training on random labels with zero error (Zhang et al., 2017), the information bottleneck and the phase transition out of it (Shwartz-Ziv & Tishby, 2017), shattered gradients (Balduzzi et al., 2017), and many more. | The paper promises quite a few intriguing connections between information bottleneck, phase transitions and deep learning. While we think that this is a worthwhile bridge to build between machine learning and statistical field theory, the exposition of the paper leaves much to be desired. Had it been a clean straightforward application of QFT, as well-trained theoretical physicists, we would have been able to evaluate the paper. Generally, it would help the reader to have an overall map and indication of the steps that would be taken formally. Specifically, starting from Section 2.3, especially around the transition to continuous layers, very little information is provided how one is dealing with the cost function and the results are derived. Section 2.3 would benefit from expanded discussion with examples and detailed explanations. Minor: The following sentence in third paragraph of the Introduction is incomplete: Because the ResNet does not contain such symmetry breaking layers in the architecture. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. 
I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSpontaneous Symmetry Breaking in Deep Neural Networks\nWe propose a framework to understand the unprecedented performance and robustness of deep neural networks using field theory. Correlations between the weights within the same layer can be described by symmetries in that layer, and networks generalize better if such symmetries are broken to reduce the redundancies of the weights. Using a two parameter field theory, we find that the network can break such symmetries itself towards the end of training in a process commonly known in physics as spontaneous symmetry breaking. This corresponds to a network generalizing itself without any user input layers to break the symmetry, but by communication with adjacent layers. In the layer decoupling limit applicable to residual networks (He et al., 2015), we show that the remnant symmetries that survive the non-linear layers are spontaneously broken based on empirical results. The Lagrangian for the non-linear and weight layers together has striking similarities with the one in quantum field theory of a scalar. Using results from quantum field theory we show that our framework is able to explain many experimentally observed phenomena, such as training on random labels with zero error (Zhang et al., 2017), the information bottleneck and the phase transition out of it (Shwartz-Ziv & Tishby, 2017), shattered gradients (Balduzzi et al., 2017), and many more.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper promises quite a few intriguing connections between information bottleneck, phase transitions and deep learning. While we think that this is a worthwhile bridge to build between machine learning and statistical field theory, the exposition of the paper leaves much to be desired. Had it been a clean straightforward application of QFT, as well-trained theoretical physicists, we would have been able to evaluate the paper. Generally, it would help the reader to have an overall map and indication of the steps that would be taken formally. Specifically, starting from Section 2.3, especially around the transition to continuous layers, very little information is provided how one is dealing with the cost function and the results are derived. Section 2.3 would benefit from expanded discussion with examples and detailed explanations. Minor: The following sentence in third paragraph of the Introduction is incomplete: Because the ResNet does not contain such symmetry breaking layers in the architecture.",
"role": "assistant"
}
]
| [
" The paper promises quite a few intriguing connections between information bottleneck, phase transitions and deep learning.",
"While we think that this is a worthwhile bridge to build between machine learning and statistical field theory, the exposition of the paper leaves much to be desired.",
"Had it been a clean straightforward application of QFT, as well-trained theoretical physicists, we would have been able to evaluate the paper.",
"Generally, it would help the reader to have an overall map and indication of the steps that would be taken formally.",
"Specifically, starting from Section 2.3, especially around the transition to continuous layers, very little information is provided how one is dealing with the cost function and the results are derived.",
"Section 2.3 would benefit from expanded discussion with examples and detailed explanations.",
"Minor: The following sentence in third paragraph of the Introduction is incomplete: Because the ResNet does not contain such symmetry breaking layers in the architecture."
] | {
"criticism": 2,
"example": 2,
"importance_and_relevance": 1,
"materials_and_methods": 2,
"praise": 3,
"presentation_and_reporting": 3,
"results_and_discussion": 2,
"suggestion_and_solution": 3,
"total": 7
} | 0.285714 | 0.285714 | 0.142857 | 0.285714 | 0.428571 | 0.428571 | 0.285714 | 0.428571 | 7 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.571429 | 2.003261 | 0.568167 |
Forward Modeling for Partial Observation Strategy Games - A StarCraft Defogger | This paper we present a defogger, a model that learns to predict future hidden information from partial observations. We formulate this model in the context of forward modeling and leverage spatial and sequential constraints and correlations via convolutional neural networks and long short-term memory networks, respectively. We evaluate our approach on a large dataset of human games of StarCraft: Brood War, a real-time strategy video game. Our models consistently beat strong rule-based baselines and qualitatively produce sensible future game states. | The paper considers a problem of predicting hidden information in a poMDP with an application to Starcraft. Authors propose a number of baseline models as well as metrics to assess the quality of “defogging”. I find the problem of defogging quite interesting, even though it is a bit too Starcraft-specific some findings could perhaps be translated to other partially observed environments. Authors use the dataset provided for Starcraft: Brood war by Lin et al, 2017. My impression about the paper is that even though it touches a very interesting problem, it neither is written well nor it contains much of a novelty in terms of algorithms, methods or network architectures. Detailed comments: * Authors should at very least cite (Vinyals et al, 2017) and explain why the environment and the dataset released for Starcraft 2 is less suited than the one provided by Lin et al. * Problem statement in section 3.1 should certainly be improved. Authors introduce rather heavy notation which is then used in a confusing way. For example, what is the top index in supposed to mean? The notation is not much used after sec. 3.1, for example, figure 1 does not use it. * A related issue, is that the definition of metrics is very informal and, again, does not use the already defined notation. Including explicit formulas would be very helpful, because, for example, it looks like when reported in table 1 the metrics are spatially averaged, yet I could not find an explicit notion of that. * Authors seem to only consider deterministic defogging models. However, to me it seems that even in 15 game steps the uncertainty over the hidden state is quite high and thus any deterministic model has a very limited potential in prediction it. At least the concept of stochastic predictions should be discussed * The rule-based baselines are not described in detail. What does “using game rules to infer the existence of unit types” mean? * Another detail which I found missing is whether authors use just a screen, a mini-map or both. In the game of Starcraft, only screen contains information about unit-types, but it’s field of view is limited. Hence, it’s unclear to me whether a model should infer hidden information based on just a single screen + minimap observation (or a history of them) or due to how the dataset is constructed, all units are observed without spatial limitations of the screen. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. 
Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nForward Modeling for Partial Observation Strategy Games - A StarCraft Defogger\nThis paper we present a defogger, a model that learns to predict future hidden information from partial observations. We formulate this model in the context of forward modeling and leverage spatial and sequential constraints and correlations via convolutional neural networks and long short-term memory networks, respectively. We evaluate our approach on a large dataset of human games of StarCraft: Brood War, a real-time strategy video game. Our models consistently beat strong rule-based baselines and qualitatively produce sensible future game states.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper considers a problem of predicting hidden information in a poMDP with an application to Starcraft. Authors propose a number of baseline models as well as metrics to assess the quality of “defogging”. I find the problem of defogging quite interesting, even though it is a bit too Starcraft-specific some findings could perhaps be translated to other partially observed environments. Authors use the dataset provided for Starcraft: Brood war by Lin et al, 2017. My impression about the paper is that even though it touches a very interesting problem, it neither is written well nor it contains much of a novelty in terms of algorithms, methods or network architectures. Detailed comments: * Authors should at very least cite (Vinyals et al, 2017) and explain why the environment and the dataset released for Starcraft 2 is less suited than the one provided by Lin et al. * Problem statement in section 3.1 should certainly be improved. Authors introduce rather heavy notation which is then used in a confusing way. For example, what is the top index in supposed to mean? The notation is not much used after sec. 3.1, for example, figure 1 does not use it. * A related issue, is that the definition of metrics is very informal and, again, does not use the already defined notation. Including explicit formulas would be very helpful, because, for example, it looks like when reported in table 1 the metrics are spatially averaged, yet I could not find an explicit notion of that. * Authors seem to only consider deterministic defogging models. However, to me it seems that even in 15 game steps the uncertainty over the hidden state is quite high and thus any deterministic model has a very limited potential in prediction it. At least the concept of stochastic predictions should be discussed * The rule-based baselines are not described in detail. What does “using game rules to infer the existence of unit types” mean? * Another detail which I found missing is whether authors use just a screen, a mini-map or both. In the game of Starcraft, only screen contains information about unit-types, but it’s field of view is limited. 
Hence, it’s unclear to me whether a model should infer hidden information based on just a single screen + minimap observation (or a history of them) or due to how the dataset is constructed, all units are observed without spatial limitations of the screen.",
"role": "assistant"
}
]
| [
" The paper considers a problem of predicting hidden information in a poMDP with an application to Starcraft.",
"Authors propose a number of baseline models as well as metrics to assess the quality of “defogging”.",
"I find the problem of defogging quite interesting, even though it is a bit too Starcraft-specific some findings could perhaps be translated to other partially observed environments.",
"Authors use the dataset provided for Starcraft: Brood war by Lin et al, 2017.",
"My impression about the paper is that even though it touches a very interesting problem, it neither is written well nor it contains much of a novelty in terms of algorithms, methods or network architectures.",
"Detailed comments: * Authors should at very least cite (Vinyals et al, 2017) and explain why the environment and the dataset released for Starcraft 2 is less suited than the one provided by Lin et al.",
"* Problem statement in section 3.1 should certainly be improved.",
"Authors introduce rather heavy notation which is then used in a confusing way.",
"For example, what is the top index in supposed to mean?",
"The notation is not much used after sec.",
"3.1, for example, figure 1 does not use it.",
"* A related issue, is that the definition of metrics is very informal and, again, does not use the already defined notation.",
"Including explicit formulas would be very helpful, because, for example, it looks like when reported in table 1 the metrics are spatially averaged, yet I could not find an explicit notion of that.",
"* Authors seem to only consider deterministic defogging models.",
"However, to me it seems that even in 15 game steps the uncertainty over the hidden state is quite high and thus any deterministic model has a very limited potential in prediction it.",
"At least the concept of stochastic predictions should be discussed * The rule-based baselines are not described in detail.",
"What does “using game rules to infer the existence of unit types” mean?",
"* Another detail which I found missing is whether authors use just a screen, a mini-map or both.",
"In the game of Starcraft, only screen contains information about unit-types, but it’s field of view is limited.",
"Hence, it’s unclear to me whether a model should infer hidden information based on just a single screen + minimap observation (or a history of them) or due to how the dataset is constructed, all units are observed without spatial limitations of the screen."
] | {
"criticism": 10,
"example": 1,
"importance_and_relevance": 2,
"materials_and_methods": 9,
"praise": 2,
"presentation_and_reporting": 12,
"results_and_discussion": 4,
"suggestion_and_solution": 5,
"total": 20
} | 0.5 | 0.05 | 0.1 | 0.45 | 0.1 | 0.6 | 0.2 | 0.25 | 20 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.25 | 1.476661 | 0.773339 |
Sensor Transformation Attention Networks | Recent work on encoder-decoder models for sequence-to-sequence mapping has shown that integrating both temporal and spatial attentional mechanisms into neural networks increases the performance of the system substantially. We report on a new modular network architecture that applies an attentional mechanism not on temporal and spatial regions of the input, but on sensor selection for multi-sensor setups. This network called the sensor transformation attention network (STAN) is evaluated in scenarios which include the presence of natural noise or synthetic dynamic noise. We demonstrate how the attentional signal responds dynamically to changing noise levels and sensor-specific noise, leading to reduced word error rates (WERs) on both audio and visual tasks using TIDIGITS and GRID; and also on CHiME-3, a multi-microphone real-world noisy dataset. The improvement grows as more channels are corrupted as demonstrated on the CHiME-3 dataset. Moreover, the proposed STAN architecture naturally introduces a number of advantages including ease of removing sensors from existing architectures, attentional interpretability, and increased robustness to a variety of noise environments. | This paper proposes sensor transformation attention network (STAN), which dynamically select appropriate sequential sensor inputs based on an attention mechanism. Pros: One of the main focuses of this paper is to apply this method to a real task, multichannel speech recognition based on CHiME-3, by providing its reasonable sensor selection function in real data especially to avoid audio data corruptions. This analysis is quite intuitive, and also shows the effectiveness of the proposed method in this practical setup. Cons: The idea seems to be simple and does not have significant originality. Also, the paper does not clearly mention the attention mechanism part, and needs some improvement. Comments: - The paper mainly focuses on the soft sensor selection. However, in an array signal processing context (and its application to multichannel speech recognition), it would be better to mention beamforming techniques, where the compensation of the delays of sensors is quite important. - In addition, there is a related study of using multichannel speech recognition based on sequence-to-sequence modeling and attention mechanism by Ochiai et al, "A Unified Architecture for Multichannel End-to-End Speech Recognition with Neural Beamforming," IEEE Journal of Selected Topics in Signal Processing. This paper uses the same CHiME-3 database, and also showing a similar analysis of channel selection. It’s better to discuss about this paper as well as a reference. - Section 2: better to explain about how to obtain attention scores z in more details. - Figure 3, experiments of Double audio/video clean conditions: I cannot understand why they are improved from single audio/video clean conditions. Need some explanations. - Section 3.1: 39-dimensional Mel-frequency cepstral coefficients (MFCCs) -> 13 -dimensional Mel-frequency cepstral coefficients (MFCCs) with 1st and 2nd order delta features. - Section 3.2 Dataset “As for TIDIGIT”: “As for GRID”(?) - Section 4 Models “The parameters of the attention modules are either shared across sensors (STAN-shared) or not shared across sensors (STAN- default).”: It’s better to explain this part in more details, possibly with some equations. It is hard to understand the difference. 
| iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSensor Transformation Attention Networks\nRecent work on encoder-decoder models for sequence-to-sequence mapping has shown that integrating both temporal and spatial attentional mechanisms into neural networks increases the performance of the system substantially. We report on a new modular network architecture that applies an attentional mechanism not on temporal and spatial regions of the input, but on sensor selection for multi-sensor setups. This network called the sensor transformation attention network (STAN) is evaluated in scenarios which include the presence of natural noise or synthetic dynamic noise. We demonstrate how the attentional signal responds dynamically to changing noise levels and sensor-specific noise, leading to reduced word error rates (WERs) on both audio and visual tasks using TIDIGITS and GRID; and also on CHiME-3, a multi-microphone real-world noisy dataset. The improvement grows as more channels are corrupted as demonstrated on the CHiME-3 dataset. Moreover, the proposed STAN architecture naturally introduces a number of advantages including ease of removing sensors from existing architectures, attentional interpretability, and increased robustness to a variety of noise environments.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes sensor transformation attention network (STAN), which dynamically select appropriate sequential sensor inputs based on an attention mechanism. Pros: One of the main focuses of this paper is to apply this method to a real task, multichannel speech recognition based on CHiME-3, by providing its reasonable sensor selection function in real data especially to avoid audio data corruptions. This analysis is quite intuitive, and also shows the effectiveness of the proposed method in this practical setup. Cons: The idea seems to be simple and does not have significant originality. Also, the paper does not clearly mention the attention mechanism part, and needs some improvement. Comments: - The paper mainly focuses on the soft sensor selection. However, in an array signal processing context (and its application to multichannel speech recognition), it would be better to mention beamforming techniques, where the compensation of the delays of sensors is quite important. - In addition, there is a related study of using multichannel speech recognition based on sequence-to-sequence modeling and attention mechanism by Ochiai et al, \"A Unified Architecture for Multichannel End-to-End Speech Recognition with Neural Beamforming,\" IEEE Journal of Selected Topics in Signal Processing. This paper uses the same CHiME-3 database, and also showing a similar analysis of channel selection. It’s better to discuss about this paper as well as a reference. - Section 2: better to explain about how to obtain attention scores z in more details. - Figure 3, experiments of Double audio/video clean conditions: I cannot understand why they are improved from single audio/video clean conditions. Need some explanations. - Section 3.1: 39-dimensional Mel-frequency cepstral coefficients (MFCCs) -> 13 -dimensional Mel-frequency cepstral coefficients (MFCCs) with 1st and 2nd order delta features. - Section 3.2 Dataset “As for TIDIGIT”: “As for GRID”(?) 
- Section 4 Models “The parameters of the attention modules are either shared across sensors (STAN-shared) or not shared across sensors (STAN- default).”: It’s better to explain this part in more details, possibly with some equations. It is hard to understand the difference.",
"role": "assistant"
}
]
| [
" This paper proposes sensor transformation attention network (STAN), which dynamically select appropriate sequential sensor inputs based on an attention mechanism.",
"Pros: One of the main focuses of this paper is to apply this method to a real task, multichannel speech recognition based on CHiME-3, by providing its reasonable sensor selection function in real data especially to avoid audio data corruptions.",
"This analysis is quite intuitive, and also shows the effectiveness of the proposed method in this practical setup.",
"Cons: The idea seems to be simple and does not have significant originality.",
"Also, the paper does not clearly mention the attention mechanism part, and needs some improvement.",
"Comments: - The paper mainly focuses on the soft sensor selection.",
"However, in an array signal processing context (and its application to multichannel speech recognition), it would be better to mention beamforming techniques, where the compensation of the delays of sensors is quite important.",
"- In addition, there is a related study of using multichannel speech recognition based on sequence-to-sequence modeling and attention mechanism by Ochiai et al, \"A Unified Architecture for Multichannel End-to-End Speech Recognition with Neural Beamforming,\" IEEE Journal of Selected Topics in Signal Processing.",
"This paper uses the same CHiME-3 database, and also showing a similar analysis of channel selection.",
"It’s better to discuss about this paper as well as a reference.",
"- Section 2: better to explain about how to obtain attention scores z in more details.",
"- Figure 3, experiments of Double audio/video clean conditions: I cannot understand why they are improved from single audio/video clean conditions.",
"Need some explanations.",
"- Section 3.1: 39-dimensional Mel-frequency cepstral coefficients (MFCCs) -> 13 -dimensional Mel-frequency cepstral coefficients (MFCCs) with 1st and 2nd order delta features.",
"- Section 3.2 Dataset “As for TIDIGIT”: “As for GRID”(?)",
"- Section 4 Models “The parameters of the attention modules are either shared across sensors (STAN-shared) or not shared across sensors (STAN- default).”: It’s better to explain this part in more details, possibly with some equations.",
"It is hard to understand the difference."
] | {
"criticism": 4,
"example": 4,
"importance_and_relevance": 3,
"materials_and_methods": 10,
"praise": 1,
"presentation_and_reporting": 8,
"results_and_discussion": 2,
"suggestion_and_solution": 6,
"total": 17
} | 0.235294 | 0.235294 | 0.176471 | 0.588235 | 0.058824 | 0.470588 | 0.117647 | 0.352941 | 17 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.235294 | 1.982775 | 0.252519 |
Variational Inference of Disentangled Latent Concepts from Unlabeled Observations | Disentangled representations, where the higher level data generative factors are reflected in disjoint latent dimensions, offer several benefits such as ease of deriving invariant representations, transferability to other tasks, interpretability, etc. We consider the problem of unsupervised learning of disentangled representations from large pool of unlabeled observations, and propose a variational inference based approach to infer disentangled latent factors. We introduce a regularizer on the expectation of the approximate posterior over observed data that encourages the disentanglement. We also propose a new disentanglement metric which is better aligned with the qualitative disentanglement observed in the decoder*s output. We empirically observe significant improvement over existing methods in terms of both disentanglement and data likelihood (reconstruction quality). | This paper describes DIP-VAE, an improvement on the beta-VAE framework for learning a disentangled representation of the data generative factors in the visual domain. The authors propose to augment the standard VAE ELBO objective with an extra term that minimises the covariance between the latents. Unlike the original beta-VAE objective which implicitly minimises such covariance individually for each observation x, the DIP-VAE objective does so while marginalising over x. This difference removes the tradeoff between reconstruction quality and disentangling reported for beta-VAE, since DIP-VAE maintains sensitivity of q(z|x) to each observation x, and hence achieves disentangling while preserving the sharpness of reconstructions. Pros: - the paper is well written - it makes a contribution to an important line or research (unsupervised disentangled representation learning) - the covariance minimisation proposed in the paper looks like an easy to implement yet impactful change to the VAE objective to encourage disentanglement while preserving reconstruction quality - it directly compares the performance of DIP-VAE to that of beta-VAE showing significant improvements in terms of disentangling metric and reconstruction error Cons: - I am yet to be fully convinced how well the approach works. Table 1 and Figure 1 look good, but other figures are either tangental to the main point of the paper, or impossible to read due to the small scale. For example, the qualitative evaluation of the latent traversals is almost impossible due to the tiny scale of Table 5 (shouldn*t this be a Figure rather than a Table?) - The authors concentrate a lot on the CelebA dataset, however I believe the comparison with beta-VAE would be a lot clearer on the dSprites dataset (https://github.com/deepmind/dsprites-dataset) (the authors call it 2D shapes). I would like to see latent traversals of the best DIP-VAE vs beta-VAE to demonstrate good disentangling and the improvements in reconstruction quality. This (and larger Table 5) would be better use of space compared to Table 2 and Figure 2 for example, which I feel are somewhat tangental to the main message of the paper and are better suited for the appendix. - I wonder how the authors calculated the disentanglement metric on CelebA, given that the ground truth attributes in the dataset are often rather qualitative (e.g. 
attractiveness), noisy (many can be considered an inaccurate description of the image), and often do not align with the data generative factors discoverable through unsupervised modeling of the data distribution - Table 3 - the legends for the axes are too small and are impossible to read. Also it would be helpful to normalise the scales of the heat plots in the second row. - Table 3 - looking at the correlations with the ground truth factors, it seems like beta-VAE did not actually disentangle the latents. Would be nice to see the corresponding latent traversal plots to ensure that the baseline is actually trained well. I am willing to increase my score for the paper if the authors can address my points. In particular I would like to see a clear comparison in terms of latent traversals on dSprites between beta-VAE and DIP-VAE models presented in Table 3. I would also like to see where these particular models lie in Figure 1. --------------------------- ---- UPDATE ---------- --------------------------- I have increased my score after reading the revised version of the manuscript. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nVariational Inference of Disentangled Latent Concepts from Unlabeled Observations\nDisentangled representations, where the higher level data generative factors are reflected in disjoint latent dimensions, offer several benefits such as ease of deriving invariant representations, transferability to other tasks, interpretability, etc. We consider the problem of unsupervised learning of disentangled representations from large pool of unlabeled observations, and propose a variational inference based approach to infer disentangled latent factors. We introduce a regularizer on the expectation of the approximate posterior over observed data that encourages the disentanglement. We also propose a new disentanglement metric which is better aligned with the qualitative disentanglement observed in the decoder*s output. We empirically observe significant improvement over existing methods in terms of both disentanglement and data likelihood (reconstruction quality).",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper describes DIP-VAE, an improvement on the beta-VAE framework for learning a disentangled representation of the data generative factors in the visual domain. The authors propose to augment the standard VAE ELBO objective with an extra term that minimises the covariance between the latents. Unlike the original beta-VAE objective which implicitly minimises such covariance individually for each observation x, the DIP-VAE objective does so while marginalising over x. This difference removes the tradeoff between reconstruction quality and disentangling reported for beta-VAE, since DIP-VAE maintains sensitivity of q(z|x) to each observation x, and hence achieves disentangling while preserving the sharpness of reconstructions. Pros: - the paper is well written - it makes a contribution to an important line or research (unsupervised disentangled representation learning) - the covariance minimisation proposed in the paper looks like an easy to implement yet impactful change to the VAE objective to encourage disentanglement while preserving reconstruction quality - it directly compares the performance of DIP-VAE to that of beta-VAE showing significant improvements in terms of disentangling metric and reconstruction error Cons: - I am yet to be fully convinced how well the approach works. Table 1 and Figure 1 look good, but other figures are either tangental to the main point of the paper, or impossible to read due to the small scale. For example, the qualitative evaluation of the latent traversals is almost impossible due to the tiny scale of Table 5 (shouldn*t this be a Figure rather than a Table?) - The authors concentrate a lot on the CelebA dataset, however I believe the comparison with beta-VAE would be a lot clearer on the dSprites dataset (https://github.com/deepmind/dsprites-dataset) (the authors call it 2D shapes). I would like to see latent traversals of the best DIP-VAE vs beta-VAE to demonstrate good disentangling and the improvements in reconstruction quality. 
This (and larger Table 5) would be better use of space compared to Table 2 and Figure 2 for example, which I feel are somewhat tangental to the main message of the paper and are better suited for the appendix. - I wonder how the authors calculated the disentanglement metric on CelebA, given that the ground truth attributes in the dataset are often rather qualitative (e.g. attractiveness), noisy (many can be considered an inaccurate description of the image), and often do not align with the data generative factors discoverable through unsupervised modeling of the data distribution - Table 3 - the legends for the axes are too small and are impossible to read. Also it would be helpful to normalise the scales of the heat plots in the second row. - Table 3 - looking at the correlations with the ground truth factors, it seems like beta-VAE did not actually disentangle the latents. Would be nice to see the corresponding latent traversal plots to ensure that the baseline is actually trained well. I am willing to increase my score for the paper if the authors can address my points. In particular I would like to see a clear comparison in terms of latent traversals on dSprites between beta-VAE and DIP-VAE models presented in Table 3. I would also like to see where these particular models lie in Figure 1. --------------------------- ---- UPDATE ---------- --------------------------- I have increased my score after reading the revised version of the manuscript.",
"role": "assistant"
}
]
| [
" This paper describes DIP-VAE, an improvement on the beta-VAE framework for learning a disentangled representation of the data generative factors in the visual domain.",
"The authors propose to augment the standard VAE ELBO objective with an extra term that minimises the covariance between the latents.",
"Unlike the original beta-VAE objective which implicitly minimises such covariance individually for each observation x, the DIP-VAE objective does so while marginalising over x.",
"This difference removes the tradeoff between reconstruction quality and disentangling reported for beta-VAE, since DIP-VAE maintains sensitivity of q(z|x) to each observation x, and hence achieves disentangling while preserving the sharpness of reconstructions.",
"Pros: - the paper is well written - it makes a contribution to an important line or research (unsupervised disentangled representation learning) - the covariance minimisation proposed in the paper looks like an easy to implement yet impactful change to the VAE objective to encourage disentanglement while preserving reconstruction quality - it directly compares the performance of DIP-VAE to that of beta-VAE showing significant improvements in terms of disentangling metric and reconstruction error Cons: - I am yet to be fully convinced how well the approach works.",
"Table 1 and Figure 1 look good, but other figures are either tangental to the main point of the paper, or impossible to read due to the small scale.",
"For example, the qualitative evaluation of the latent traversals is almost impossible due to the tiny scale of Table 5 (shouldn*t this be a Figure rather than a Table?)",
"- The authors concentrate a lot on the CelebA dataset, however I believe the comparison with beta-VAE would be a lot clearer on the dSprites dataset (https://github.com/deepmind/dsprites-dataset) (the authors call it 2D shapes).",
"I would like to see latent traversals of the best DIP-VAE vs beta-VAE to demonstrate good disentangling and the improvements in reconstruction quality.",
"This (and larger Table 5) would be better use of space compared to Table 2 and Figure 2 for example, which I feel are somewhat tangental to the main message of the paper and are better suited for the appendix.",
"- I wonder how the authors calculated the disentanglement metric on CelebA, given that the ground truth attributes in the dataset are often rather qualitative (e.g.",
"attractiveness), noisy (many can be considered an inaccurate description of the image), and often do not align with the data generative factors discoverable through unsupervised modeling of the data distribution - Table 3 - the legends for the axes are too small and are impossible to read.",
"Also it would be helpful to normalise the scales of the heat plots in the second row.",
"- Table 3 - looking at the correlations with the ground truth factors, it seems like beta-VAE did not actually disentangle the latents.",
"Would be nice to see the corresponding latent traversal plots to ensure that the baseline is actually trained well.",
"I am willing to increase my score for the paper if the authors can address my points.",
"In particular I would like to see a clear comparison in terms of latent traversals on dSprites between beta-VAE and DIP-VAE models presented in Table 3.",
"I would also like to see where these particular models lie in Figure 1.",
"--------------------------- ---- UPDATE ---------- --------------------------- I have increased my score after reading the revised version of the manuscript."
] | {
"criticism": 5,
"example": 5,
"importance_and_relevance": 1,
"materials_and_methods": 15,
"praise": 3,
"presentation_and_reporting": 6,
"results_and_discussion": 5,
"suggestion_and_solution": 7,
"total": 19
} | 0.263158 | 0.263158 | 0.052632 | 0.789474 | 0.157895 | 0.315789 | 0.263158 | 0.368421 | 19 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.473684 | 1.905517 | 0.568167 |
The Mutual Autoencoder: Controlling Information in Latent Code Representations | Variational autoencoders (VAE) learn probabilistic latent variable models by optimizing a bound on the marginal likelihood of the observed data. Beyond providing a good density model a VAE model assigns to each data instance a latent code. In many applications, this latent code provides a useful high-level summary of the observation. However, the VAE may fail to learn a useful representation when the decoder family is very expressive. This is because maximum likelihood does not explicitly encourage useful representations and the latent variable is used only if it helps model the marginal distribution. This makes representation learning with VAEs unreliable. To address this issue, we propose a method for explicitly controlling the amount of information stored in the latent code. Our method can learn codes ranging from independent to nearly deterministic while benefiting from decoder capacity. Thus, we decouple the choice of decoder capacity and the latent code dimensionality from the amount of information stored in the code. | The authors propose a variational autoencoder constrained in such a way that the mutual information between the observed variables and their latent representation is constant and user specified. To do so, they leverage the penalty function method as a relaxation of the original problem, and a variational bound (infomax) to approximate the mutual information term in their objective. I really enjoyed reading the paper, the proposed approach is well motivated and clearly described. However, the experiments section is very weak. Although I like the illustrative toy problem, in that it clearly highlights how the method works, the experiment on real data is not very convincing. Further, the authors do not consider a more rigorous benchmark including additional datasets and state-of-the-art modelling approaches for text. - {\cal Z} in (1) not defined, same for \Theta. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nThe Mutual Autoencoder: Controlling Information in Latent Code Representations\nVariational autoencoders (VAE) learn probabilistic latent variable models by optimizing a bound on the marginal likelihood of the observed data. Beyond providing a good density model a VAE model assigns to each data instance a latent code. In many applications, this latent code provides a useful high-level summary of the observation. However, the VAE may fail to learn a useful representation when the decoder family is very expressive. This is because maximum likelihood does not explicitly encourage useful representations and the latent variable is used only if it helps model the marginal distribution. This makes representation learning with VAEs unreliable. To address this issue, we propose a method for explicitly controlling the amount of information stored in the latent code. Our method can learn codes ranging from independent to nearly deterministic while benefiting from decoder capacity. Thus, we decouple the choice of decoder capacity and the latent code dimensionality from the amount of information stored in the code.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors propose a variational autoencoder constrained in such a way that the mutual information between the observed variables and their latent representation is constant and user specified. To do so, they leverage the penalty function method as a relaxation of the original problem, and a variational bound (infomax) to approximate the mutual information term in their objective. I really enjoyed reading the paper, the proposed approach is well motivated and clearly described. However, the experiments section is very weak. Although I like the illustrative toy problem, in that it clearly highlights how the method works, the experiment on real data is not very convincing. Further, the authors do not consider a more rigorous benchmark including additional datasets and state-of-the-art modelling approaches for text. - {.cal Z} in (1) not defined, same for .Theta.",
"role": "assistant"
}
]
| [
" The authors propose a variational autoencoder constrained in such a way that the mutual information between the observed variables and their latent representation is constant and user specified.",
"To do so, they leverage the penalty function method as a relaxation of the original problem, and a variational bound (infomax) to approximate the mutual information term in their objective.",
"I really enjoyed reading the paper, the proposed approach is well motivated and clearly described.",
"However, the experiments section is very weak.",
"Although I like the illustrative toy problem, in that it clearly highlights how the method works, the experiment on real data is not very convincing.",
"Further, the authors do not consider a more rigorous benchmark including additional datasets and state-of-the-art modelling approaches for text.",
"- {.cal Z} in (1) not defined, same for .Theta."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 6,
"praise": 1,
"presentation_and_reporting": 3,
"results_and_discussion": 1,
"suggestion_and_solution": 0,
"total": 7
} | 0.428571 | 0 | 0.285714 | 0.857143 | 0.142857 | 0.428571 | 0.142857 | 0 | 7 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 2.285714 | 1.717547 | 0.568167 |
Label Embedding Network: Learning Label Representation for Soft Training of Deep Networks | We propose a method, called Label Embedding Network, which can learn label representation (label embedding) during the training process of deep networks. With the proposed method, the label embedding is adaptively and automatically learned through back propagation. The original one-hot represented loss function is converted into a new loss function with soft distributions, such that the originally unrelated labels have continuous interactions with each other during the training process. As a result, the trained model can achieve substantially higher accuracy and with faster convergence speed. Experimental results based on competitive tasks demonstrate the effectiveness of the proposed method, and the learned label embedding is reasonable and interpretable. The proposed method achieves comparable or even better results than the state-of-the-art systems. | The paper proposes to add an embedding layer for labels that constrains normal classifiers in order to find label representations that are semantically consistent. The approach is then experimented on various image and text tasks. The description of the model is laborious and hard to follow. Figure 1 helps but is only referred to at the end of the description (at the end of section 2.1), which instead explains each step without the big picture and loses the reader with confusing notation. For instance, it only became clear at the end of the section that E was learned. One of the motivations behing the model is to force label representations to be in a semantic space (where two labels with similar meanings would be nearby). The assumption given in the introduction is that softmax would not yield such a representation, but nowhere in the paper this assumption is verified. I believe that using cross-entropy with softmax should also push semantically similar labels to be nearby in the weight space entering the softmax. This should at least be verified and compared appropriately. Another motivation of the paper is that targets are given as 1s or 0s while soft targets should work better. I believe this is true, but there is a lot of prior work on these, such as adding a temperature to the softmax, or using distillation, etc. None of these are discussed appropriately in the paper. Section 2.2 describes a way to compress the label embedding representation, but it is not clear if this is actually used in the experiments. h is never discussed after section 2.2. Experiments on known datasets are interesting, but none of the results are competitive with current state-of-the-art results (SOTA), despite what is said in Appending D. For instance, one can find SOTA results for CIFAR100 around 16% and for CIFAR10 around 3%. Similarly, one can find SOTA results for IWSLT2015 around 28 BLEU. It can be fine to not be SOTA as long as it is acknowledged and discussed appropriately. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? 
Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLabel Embedding Network: Learning Label Representation for Soft Training of Deep Networks\nWe propose a method, called Label Embedding Network, which can learn label representation (label embedding) during the training process of deep networks. With the proposed method, the label embedding is adaptively and automatically learned through back propagation. The original one-hot represented loss function is converted into a new loss function with soft distributions, such that the originally unrelated labels have continuous interactions with each other during the training process. As a result, the trained model can achieve substantially higher accuracy and with faster convergence speed. Experimental results based on competitive tasks demonstrate the effectiveness of the proposed method, and the learned label embedding is reasonable and interpretable. The proposed method achieves comparable or even better results than the state-of-the-art systems.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper proposes to add an embedding layer for labels that constrains normal classifiers in order to find label representations that are semantically consistent. The approach is then experimented on various image and text tasks. The description of the model is laborious and hard to follow. Figure 1 helps but is only referred to at the end of the description (at the end of section 2.1), which instead explains each step without the big picture and loses the reader with confusing notation. For instance, it only became clear at the end of the section that E was learned. One of the motivations behing the model is to force label representations to be in a semantic space (where two labels with similar meanings would be nearby). The assumption given in the introduction is that softmax would not yield such a representation, but nowhere in the paper this assumption is verified. I believe that using cross-entropy with softmax should also push semantically similar labels to be nearby in the weight space entering the softmax. This should at least be verified and compared appropriately. Another motivation of the paper is that targets are given as 1s or 0s while soft targets should work better. I believe this is true, but there is a lot of prior work on these, such as adding a temperature to the softmax, or using distillation, etc. None of these are discussed appropriately in the paper. Section 2.2 describes a way to compress the label embedding representation, but it is not clear if this is actually used in the experiments. h is never discussed after section 2.2. Experiments on known datasets are interesting, but none of the results are competitive with current state-of-the-art results (SOTA), despite what is said in Appending D. For instance, one can find SOTA results for CIFAR100 around 16% and for CIFAR10 around 3%. Similarly, one can find SOTA results for IWSLT2015 around 28 BLEU. It can be fine to not be SOTA as long as it is acknowledged and discussed appropriately.",
"role": "assistant"
}
]
| [
" The paper proposes to add an embedding layer for labels that constrains normal classifiers in order to find label representations that are semantically consistent.",
"The approach is then experimented on various image and text tasks.",
"The description of the model is laborious and hard to follow.",
"Figure 1 helps but is only referred to at the end of the description (at the end of section 2.1), which instead explains each step without the big picture and loses the reader with confusing notation.",
"For instance, it only became clear at the end of the section that E was learned.",
"One of the motivations behing the model is to force label representations to be in a semantic space (where two labels with similar meanings would be nearby).",
"The assumption given in the introduction is that softmax would not yield such a representation, but nowhere in the paper this assumption is verified.",
"I believe that using cross-entropy with softmax should also push semantically similar labels to be nearby in the weight space entering the softmax.",
"This should at least be verified and compared appropriately.",
"Another motivation of the paper is that targets are given as 1s or 0s while soft targets should work better.",
"I believe this is true, but there is a lot of prior work on these, such as adding a temperature to the softmax, or using distillation, etc.",
"None of these are discussed appropriately in the paper.",
"Section 2.2 describes a way to compress the label embedding representation, but it is not clear if this is actually used in the experiments.",
"h is never discussed after section 2.2.",
"Experiments on known datasets are interesting, but none of the results are competitive with current state-of-the-art results (SOTA), despite what is said in Appending D. For instance, one can find SOTA results for CIFAR100 around 16% and for CIFAR10 around 3%.",
"Similarly, one can find SOTA results for IWSLT2015 around 28 BLEU.",
"It can be fine to not be SOTA as long as it is acknowledged and discussed appropriately."
] | {
"criticism": 7,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 9,
"praise": 2,
"presentation_and_reporting": 9,
"results_and_discussion": 4,
"suggestion_and_solution": 4,
"total": 17
} | 0.411765 | 0.058824 | 0.058824 | 0.529412 | 0.117647 | 0.529412 | 0.235294 | 0.235294 | 17 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.176471 | 1.923952 | 0.252519 |
Semantic Interpolation in Implicit Models | In implicit models, one often interpolates between sampled points in latent space. As we show in this paper, care needs to be taken to match-up the distributional assumptions on code vectors with the geometry of the interpolating paths. Otherwise, typical assumptions about the quality and semantics of in-between points may not be justified. Based on our analysis we propose to modify the prior code distribution to put significantly more probability mass closer to the origin. As a result, linear interpolation paths are not only shortest paths, but they are also guaranteed to pass through high-density regions, irrespective of the dimensionality of the latent space. Experiments on standard benchmark image datasets demonstrate clear visual improvements in the quality of the generated samples and exhibit more meaningful interpolation paths. | The authors propose the use of a gamma prior as the distribution over the latent representation space in GANs. The motivation behind it is that in GANs interpolating between sampled points is common in the process of generating examples but the use of a normal prior results in samples that fall in low probability mass regions. The use of the proposed gamma distribution, as a simple alternative, overcomes this problem. In general, the proposed work is very interesting and the idea is neat. The paper is well presented and I want to underline the importance of this. The authors did a very good job presenting the problem, motivation and solution in a coherent fashion and easy to follow. The work itself is interesting and can provide useful alternatives for the distribution over the latent space. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSemantic Interpolation in Implicit Models\nIn implicit models, one often interpolates between sampled points in latent space. As we show in this paper, care needs to be taken to match-up the distributional assumptions on code vectors with the geometry of the interpolating paths. Otherwise, typical assumptions about the quality and semantics of in-between points may not be justified. Based on our analysis we propose to modify the prior code distribution to put significantly more probability mass closer to the origin. As a result, linear interpolation paths are not only shortest paths, but they are also guaranteed to pass through high-density regions, irrespective of the dimensionality of the latent space. Experiments on standard benchmark image datasets demonstrate clear visual improvements in the quality of the generated samples and exhibit more meaningful interpolation paths.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors propose the use of a gamma prior as the distribution over the latent representation space in GANs. The motivation behind it is that in GANs interpolating between sampled points is common in the process of generating examples but the use of a normal prior results in samples that fall in low probability mass regions. The use of the proposed gamma distribution, as a simple alternative, overcomes this problem. In general, the proposed work is very interesting and the idea is neat. The paper is well presented and I want to underline the importance of this. The authors did a very good job presenting the problem, motivation and solution in a coherent fashion and easy to follow. The work itself is interesting and can provide useful alternatives for the distribution over the latent space.",
"role": "assistant"
}
]
| [
" The authors propose the use of a gamma prior as the distribution over the latent representation space in GANs.",
"The motivation behind it is that in GANs interpolating between sampled points is common in the process of generating examples but the use of a normal prior results in samples that fall in low probability mass regions.",
"The use of the proposed gamma distribution, as a simple alternative, overcomes this problem.",
"In general, the proposed work is very interesting and the idea is neat.",
"The paper is well presented and I want to underline the importance of this.",
"The authors did a very good job presenting the problem, motivation and solution in a coherent fashion and easy to follow.",
"The work itself is interesting and can provide useful alternatives for the distribution over the latent space."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 4,
"materials_and_methods": 4,
"praise": 4,
"presentation_and_reporting": 1,
"results_and_discussion": 2,
"suggestion_and_solution": 0,
"total": 7
} | 0 | 0 | 0.571429 | 0.571429 | 0.571429 | 0.142857 | 0.285714 | 0 | 7 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 2.142857 | 1.57469 | 0.568167 |
Semantic Interpolation in Implicit Models | In implicit models, one often interpolates between sampled points in latent space. As we show in this paper, care needs to be taken to match-up the distributional assumptions on code vectors with the geometry of the interpolating paths. Otherwise, typical assumptions about the quality and semantics of in-between points may not be justified. Based on our analysis we propose to modify the prior code distribution to put significantly more probability mass closer to the origin. As a result, linear interpolation paths are not only shortest paths, but they are also guaranteed to pass through high-density regions, irrespective of the dimensionality of the latent space. Experiments on standard benchmark image datasets demonstrate clear visual improvements in the quality of the generated samples and exhibit more meaningful interpolation paths. | The authors discuss a direct Gamma sampling method for the interpolated samples in GANs, and show the improvements over usual normal sampling for CelebA, MNIST, CIFAR and SVHN datasets. The method involves a nice, albeit minor, trick, where the chi-squared distribution of the sum of the z_{i}^{2} has its dependence on the dimensionality removed. However I am not convinced by the distribution of .|z^.prime.|^{2} in the first place (eqn (2)): the samples from the gaussian will be approximately orthogonal in high dimensions, but the inner product will be at least O(1). Thus although the .|z_{0}.|^{2} and .|z_{1}.|^{2} are chi-squared/gamma, I don*t think .|z^.prime.|^{2} is exactly gamma in general. The experiments do show that the interpolated samples are qualitatively better, but a thorough empirical analysis for different dimensionalities would be welcome. Figures 2 and 3 do not add anything to the story, since 2 is just a plot of gamma pdfs and 3 shows the difference between the constant KL and the normal case that is linear in d. Overall I think the trick needs to be motivated better, and the experiments improved to really show the import of the d-independence of the KL. Thus I think this paper is below the acceptance threshold. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSemantic Interpolation in Implicit Models\nIn implicit models, one often interpolates between sampled points in latent space. As we show in this paper, care needs to be taken to match-up the distributional assumptions on code vectors with the geometry of the interpolating paths. Otherwise, typical assumptions about the quality and semantics of in-between points may not be justified. Based on our analysis we propose to modify the prior code distribution to put significantly more probability mass closer to the origin. As a result, linear interpolation paths are not only shortest paths, but they are also guaranteed to pass through high-density regions, irrespective of the dimensionality of the latent space. Experiments on standard benchmark image datasets demonstrate clear visual improvements in the quality of the generated samples and exhibit more meaningful interpolation paths.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors discuss a direct Gamma sampling method for the interpolated samples in GANs, and show the improvements over usual normal sampling for CelebA, MNIST, CIFAR and SVHN datasets. The method involves a nice, albeit minor, trick, where the chi-squared distribution of the sum of the z_{i}^{2} has its dependence on the dimensionality removed. However I am not convinced by the distribution of .|z^.prime.|^{2} in the first place (eqn (2)): the samples from the gaussian will be approximately orthogonal in high dimensions, but the inner product will be at least O(1). Thus although the .|z_{0}.|^{2} and .|z_{1}.|^{2} are chi-squared/gamma, I don*t think .|z^.prime.|^{2} is exactly gamma in general. The experiments do show that the interpolated samples are qualitatively better, but a thorough empirical analysis for different dimensionalities would be welcome. Figures 2 and 3 do not add anything to the story, since 2 is just a plot of gamma pdfs and 3 shows the difference between the constant KL and the normal case that is linear in d. Overall I think the trick needs to be motivated better, and the experiments improved to really show the import of the d-independence of the KL. Thus I think this paper is below the acceptance threshold.",
"role": "assistant"
}
]
| [
" The authors discuss a direct Gamma sampling method for the interpolated samples in GANs, and show the improvements over usual normal sampling for CelebA, MNIST, CIFAR and SVHN datasets.",
"The method involves a nice, albeit minor, trick, where the chi-squared distribution of the sum of the z_{i}^{2} has its dependence on the dimensionality removed.",
"However I am not convinced by the distribution of .|z^.prime.|^{2} in the first place (eqn (2)): the samples from the gaussian will be approximately orthogonal in high dimensions, but the inner product will be at least O(1).",
"Thus although the .|z_{0}.|^{2} and .|z_{1}.|^{2} are chi-squared/gamma, I don*t think .|z^.prime.|^{2} is exactly gamma in general.",
"The experiments do show that the interpolated samples are qualitatively better, but a thorough empirical analysis for different dimensionalities would be welcome.",
"Figures 2 and 3 do not add anything to the story, since 2 is just a plot of gamma pdfs and 3 shows the difference between the constant KL and the normal case that is linear in d. Overall I think the trick needs to be motivated better, and the experiments improved to really show the import of the d-independence of the KL.",
"Thus I think this paper is below the acceptance threshold."
] | {
"criticism": 3,
"example": 1,
"importance_and_relevance": 0,
"materials_and_methods": 5,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 2,
"total": 7
} | 0.428571 | 0.142857 | 0 | 0.714286 | 0.285714 | 0.142857 | 0.142857 | 0.285714 | 7 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 2.142857 | 1.57469 | 0.568167 |
Baseline-corrected space-by-time non-negative matrix factorization for decoding single trial population spike trains | Activity of populations of sensory neurons carries stimulus information in both the temporal and the spatial dimensions. This poses the question of how to compactly represent all the information that the population codes carry across all these dimensions. Here, we developed an analytical method to factorize a large number of retinal ganglion cells* spike trains into a robust low-dimensional representation that captures efficiently both their spatial and temporal information. In particular, we extended previously used single-trial space-by-time tensor decomposition based on non-negative matrix factorization to efficiently discount pre-stimulus baseline activity. On data recorded from retinal ganglion cells with strong pre-stimulus baseline, we showed that in situations were the stimulus elicits a strong change in firing rate, our extensions yield a boost in stimulus decoding performance. Our results thus suggest that taking into account the baseline can be important for finding a compact information-rich representation of neural activity. | In this contribution, the authors propose an improvement of a tensor decomposition method for decoding spike train. Relying on a non-negative matrix factorization, the authors tackle the influence of the baseline activity on the decomposition. The main consequence is that the retrieved components are not necessarily non-negative and the proposed decomposition rely on signed activation coefficients. An experimental validation shows that for high frequency baseline (> 0.7 Hz), the baseline corrected algorithm yields better classification results than non-corrected version (and other common factorization techniques). The objective function is defined with a Frobenius norm, which has an important influence on the obtained solutions, as it could be seen on Figure 2. The proposed method seems to provide a more discriminant factorization than the NMF one, at the expense of the sparsity of spatial and temporal components, impeding the biological interpretability. A possible solution is to add a regularization term to the objective function to ensure the sparsity of the factorization. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. 
Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nBaseline-corrected space-by-time non-negative matrix factorization for decoding single trial population spike trains\nActivity of populations of sensory neurons carries stimulus information in both the temporal and the spatial dimensions. This poses the question of how to compactly represent all the information that the population codes carry across all these dimensions. Here, we developed an analytical method to factorize a large number of retinal ganglion cells* spike trains into a robust low-dimensional representation that captures efficiently both their spatial and temporal information. In particular, we extended previously used single-trial space-by-time tensor decomposition based on non-negative matrix factorization to efficiently discount pre-stimulus baseline activity. On data recorded from retinal ganglion cells with strong pre-stimulus baseline, we showed that in situations were the stimulus elicits a strong change in firing rate, our extensions yield a boost in stimulus decoding performance. Our results thus suggest that taking into account the baseline can be important for finding a compact information-rich representation of neural activity.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n In this contribution, the authors propose an improvement of a tensor decomposition method for decoding spike train. Relying on a non-negative matrix factorization, the authors tackle the influence of the baseline activity on the decomposition. The main consequence is that the retrieved components are not necessarily non-negative and the proposed decomposition rely on signed activation coefficients. An experimental validation shows that for high frequency baseline (> 0.7 Hz), the baseline corrected algorithm yields better classification results than non-corrected version (and other common factorization techniques). The objective function is defined with a Frobenius norm, which has an important influence on the obtained solutions, as it could be seen on Figure 2. The proposed method seems to provide a more discriminant factorization than the NMF one, at the expense of the sparsity of spatial and temporal components, impeding the biological interpretability. A possible solution is to add a regularization term to the objective function to ensure the sparsity of the factorization.",
"role": "assistant"
}
]
| [
" In this contribution, the authors propose an improvement of a tensor decomposition method for decoding spike train.",
"Relying on a non-negative matrix factorization, the authors tackle the influence of the baseline activity on the decomposition.",
"The main consequence is that the retrieved components are not necessarily non-negative and the proposed decomposition rely on signed activation coefficients.",
"An experimental validation shows that for high frequency baseline (> 0.7 Hz), the baseline corrected algorithm yields better classification results than non-corrected version (and other common factorization techniques).",
"The objective function is defined with a Frobenius norm, which has an important influence on the obtained solutions, as it could be seen on Figure 2.",
"The proposed method seems to provide a more discriminant factorization than the NMF one, at the expense of the sparsity of spatial and temporal components, impeding the biological interpretability.",
"A possible solution is to add a regularization term to the objective function to ensure the sparsity of the factorization."
] | {
"criticism": 0,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 6,
"praise": 1,
"presentation_and_reporting": 1,
"results_and_discussion": 4,
"suggestion_and_solution": 1,
"total": 7
} | 0 | 0.142857 | 0.142857 | 0.857143 | 0.142857 | 0.142857 | 0.571429 | 0.142857 | 7 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.142857 | 1.57469 | 0.568167 |
APPLICATION OF DEEP CONVOLUTIONAL NEURAL NETWORK TO PREVENT ATM FRAUD BY FACIAL DISGUISE IDENTIFICATION | The paper proposes and demonstrates a Deep Convolutional Neural Network (DCNN) architecture to identify users with disguised face attempting a fraudulent ATM transaction. The recent introduction of Disguised Face Identification (DFI) framework proves the applicability of deep neural networks for this very problem. All the ATMs nowadays incorporate a hidden camera in them and capture the footage of their users. However, it is impossible for the police to track down the impersonators with disguised faces from the ATM footage. The proposed deep convolutional neural network is trained to identify, in real time, whether the user in the captured image is trying to cloak his identity or not. The output of the DCNN is then reported to the ATM to take appropriate steps and prevent the swindler from completing the transaction. The network is trained using a dataset of images captured in similar situations as of an ATM. The comparatively low background clutter in the images enables the network to demonstrate high accuracy in feature extraction and classification for all the different disguises. | This paper is an application paper on detecting when a face is disguised, however it is poorly written and do not contribute much in terms of novelty of the approach. The application domain is interesting, however it is simply a classification problem The paper is written clearly (with mistakes in an equation), however, it does not contribute much in terms of novelty or new ideas. To make the paper better, more empirical results are needed. In addition, it would be useful to investigate how this particular problem is different than a binary classification problem using CNNs. Notes: Equation 2 has a typo, *** | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAPPLICATION OF DEEP CONVOLUTIONAL NEURAL NETWORK TO PREVENT ATM FRAUD BY FACIAL DISGUISE IDENTIFICATION\nThe paper proposes and demonstrates a Deep Convolutional Neural Network (DCNN) architecture to identify users with disguised face attempting a fraudulent ATM transaction. The recent introduction of Disguised Face Identification (DFI) framework proves the applicability of deep neural networks for this very problem. All the ATMs nowadays incorporate a hidden camera in them and capture the footage of their users. However, it is impossible for the police to track down the impersonators with disguised faces from the ATM footage. The proposed deep convolutional neural network is trained to identify, in real time, whether the user in the captured image is trying to cloak his identity or not. The output of the DCNN is then reported to the ATM to take appropriate steps and prevent the swindler from completing the transaction. The network is trained using a dataset of images captured in similar situations as of an ATM. The comparatively low background clutter in the images enables the network to demonstrate high accuracy in feature extraction and classification for all the different disguises.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper is an application paper on detecting when a face is disguised, however it is poorly written and do not contribute much in terms of novelty of the approach. The application domain is interesting, however it is simply a classification problem The paper is written clearly (with mistakes in an equation), however, it does not contribute much in terms of novelty or new ideas. To make the paper better, more empirical results are needed. In addition, it would be useful to investigate how this particular problem is different than a binary classification problem using CNNs. Notes: Equation 2 has a typo, ***",
"role": "assistant"
}
]
| [
" This paper is an application paper on detecting when a face is disguised, however it is poorly written and do not contribute much in terms of novelty of the approach.",
"The application domain is interesting, however it is simply a classification problem The paper is written clearly (with mistakes in an equation), however, it does not contribute much in terms of novelty or new ideas.",
"To make the paper better, more empirical results are needed.",
"In addition, it would be useful to investigate how this particular problem is different than a binary classification problem using CNNs.",
"Notes: Equation 2 has a typo, ***"
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 3,
"praise": 1,
"presentation_and_reporting": 3,
"results_and_discussion": 1,
"suggestion_and_solution": 2,
"total": 5
} | 0.6 | 0 | 0.6 | 0.6 | 0.2 | 0.6 | 0.2 | 0.4 | 5 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 3.2 | 2.189924 | 1.010076 |
Exploring Deep Recurrent Models with Reinforcement Learning for Molecule Design | The design of small molecules with bespoke properties is of central importance to drug discovery. However significant challenges yet remain for computational methods, despite recent advances such as deep recurrent networks and reinforcement learning strategies for sequence generation, and it can be difficult to compare results across different works. This work proposes 19 benchmarks selected by subject experts, expands smaller datasets previously used to approximately 1.1 million training molecules, and explores how to apply new reinforcement learning techniques effectively for molecular design. The benchmarks here, built as OpenAI Gym environments, will be open-sourced to encourage innovation in molecular design algorithms and to enable usage by those without a background in chemistry. Finally, this work explores recent development in reinforcement-learning methods with excellent sample complexity (the A2C and PPO algorithms) and investigates their behavior in molecular generation, demonstrating significant performance gains compared to standard reinforcement learning techniques. | Summary: This paper studies a series of reinforcement learning (RL) techniques in combination with recurrent neural networks (RNNs) to model and synthesise molecules. The experiments seem extensive, using many recently proposed RL methods, and show that most sophisticated RL methods are less effective than the simple hill-climbing technique, with PPO is perhaps the only exception. Originality and significance: The conclusion from the experiments could be valuable to the broader sequence generation/synthesis field, showing that many current RL techniques can fail dramatically. The paper does not provide any theoretical contribution but nevertheless is a good application paper combining and comparing different techniques. Clarity: The paper is generally well-written. However, I*m not an expert in molecule design, so might not have caught any trivial errors in the experimental set-up. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nExploring Deep Recurrent Models with Reinforcement Learning for Molecule Design\nThe design of small molecules with bespoke properties is of central importance to drug discovery. However significant challenges yet remain for computational methods, despite recent advances such as deep recurrent networks and reinforcement learning strategies for sequence generation, and it can be difficult to compare results across different works. This work proposes 19 benchmarks selected by subject experts, expands smaller datasets previously used to approximately 1.1 million training molecules, and explores how to apply new reinforcement learning techniques effectively for molecular design. The benchmarks here, built as OpenAI Gym environments, will be open-sourced to encourage innovation in molecular design algorithms and to enable usage by those without a background in chemistry. Finally, this work explores recent development in reinforcement-learning methods with excellent sample complexity (the A2C and PPO algorithms) and investigates their behavior in molecular generation, demonstrating significant performance gains compared to standard reinforcement learning techniques.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Summary: This paper studies a series of reinforcement learning (RL) techniques in combination with recurrent neural networks (RNNs) to model and synthesise molecules. The experiments seem extensive, using many recently proposed RL methods, and show that most sophisticated RL methods are less effective than the simple hill-climbing technique, with PPO is perhaps the only exception. Originality and significance: The conclusion from the experiments could be valuable to the broader sequence generation/synthesis field, showing that many current RL techniques can fail dramatically. The paper does not provide any theoretical contribution but nevertheless is a good application paper combining and comparing different techniques. Clarity: The paper is generally well-written. However, I*m not an expert in molecule design, so might not have caught any trivial errors in the experimental set-up.",
"role": "assistant"
}
]
| [
" Summary: This paper studies a series of reinforcement learning (RL) techniques in combination with recurrent neural networks (RNNs) to model and synthesise molecules.",
"The experiments seem extensive, using many recently proposed RL methods, and show that most sophisticated RL methods are less effective than the simple hill-climbing technique, with PPO is perhaps the only exception.",
"Originality and significance: The conclusion from the experiments could be valuable to the broader sequence generation/synthesis field, showing that many current RL techniques can fail dramatically.",
"The paper does not provide any theoretical contribution but nevertheless is a good application paper combining and comparing different techniques.",
"Clarity: The paper is generally well-written.",
"However, I*m not an expert in molecule design, so might not have caught any trivial errors in the experimental set-up."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 5,
"praise": 3,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 0,
"total": 6
} | 0.166667 | 0 | 0.333333 | 0.833333 | 0.5 | 0.166667 | 0.166667 | 0 | 6 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 2.166667 | 1.393328 | 0.773339 |
PACT: Parameterized Clipping Activation for Quantized Neural Networks | Deep learning algorithms achieve high classification accuracy at the expense of significant computation cost. To address this cost, a number of quantization schemeshave been proposed - but most of these techniques focused on quantizing weights, which are relatively smaller in size compared to activations. This paper proposes a novel quantization scheme for activations during training - that enables neural networks to work well with ultra low precision weights and activations without any significant accuracy degradation. This technique, PArameterized Clipping acTi-vation (PACT), uses an activation clipping parameter ? that is optimized duringtraining to find the right quantization scale. PACT allows quantizing activations toarbitrary bit precisions, while achieving much better accuracy relative to publishedstate-of-the-art quantization schemes. We show, for the first time, that both weights and activations can be quantized to 4-bits of precision while still achieving accuracy comparable to full precision networks across a range of popular models and datasets. We also show that exploiting these reduced-precision computational units in hardware can enable a super-linear improvement in inferencing performance dueto a significant reduction in the area of accelerator compute engines coupled with the ability to retain the quantized model and activation data in on-chip memories. | The authors have addressed my concerns, and clarified a misunderstanding of the baseline that I had, which I appreciate. I do think that it is a solid contribution with thorough experiments. I still keep my original rating of the paper because the method presented is heavily based on previous works, which limits the novelty of the paper. It uses previously proposed clipping activation function for quantization of neural networks, adding a learnable parameter to this function. _______________ ORIGINAL REVIEW: This paper proposes to use a clipping activation function as a replacement of ReLu to train a neural network with quantized weights and activations. It shows empirically that even though the clipping activation function obtains a larger training error for full-precision model, it maintains the same error when applying quantization, whereas training with quantized ReLu activation function does not work in practice because it is unbounded. The experiments are thorough, and report results on many datasets, showing that PACT can reduce down to 4 bits of quantization of weights and activation with a slight loss in accuracy compared to the full-precision model. Related to that, it seams a bit an over claim to state that the accuracy decrease of quantizing the DNN with PACT in comparison with previous quantization methods is much less because the decrease is smaller or equal than 1%, when competing methods accuracy decrease compared to the full-precision model is more than 1%. Also, it is unfair to compare to the full-precision model using clipping, because ReLu activation function in full-precision is the standard and gives much better results, and this should be the reference accuracy. Also, previous methods take as reference the model with ReLu activation function, so it could be that in absolute value the accuracy performance of competing methods is actually higher than when using PACT for quantizing DNN. OTHER COMMENTS: - the list of contributions is a bit strange. 
It seams that the true contribution is number 1 on the list, which is to introduce the parameter .alpha in the activation function that is learned with back-propagation, which reduces the quantization error with respect to using ReLu as activation function. To provide an analysis of why it works and quantitative results, is part of the same contribution I would say. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPACT: Parameterized Clipping Activation for Quantized Neural Networks\nDeep learning algorithms achieve high classification accuracy at the expense of significant computation cost. To address this cost, a number of quantization schemeshave been proposed - but most of these techniques focused on quantizing weights, which are relatively smaller in size compared to activations. This paper proposes a novel quantization scheme for activations during training - that enables neural networks to work well with ultra low precision weights and activations without any significant accuracy degradation. This technique, PArameterized Clipping acTi-vation (PACT), uses an activation clipping parameter ? that is optimized duringtraining to find the right quantization scale. PACT allows quantizing activations toarbitrary bit precisions, while achieving much better accuracy relative to publishedstate-of-the-art quantization schemes. We show, for the first time, that both weights and activations can be quantized to 4-bits of precision while still achieving accuracy comparable to full precision networks across a range of popular models and datasets. We also show that exploiting these reduced-precision computational units in hardware can enable a super-linear improvement in inferencing performance dueto a significant reduction in the area of accelerator compute engines coupled with the ability to retain the quantized model and activation data in on-chip memories.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors have addressed my concerns, and clarified a misunderstanding of the baseline that I had, which I appreciate. I do think that it is a solid contribution with thorough experiments. I still keep my original rating of the paper because the method presented is heavily based on previous works, which limits the novelty of the paper. It uses previously proposed clipping activation function for quantization of neural networks, adding a learnable parameter to this function. _______________ ORIGINAL REVIEW: This paper proposes to use a clipping activation function as a replacement of ReLu to train a neural network with quantized weights and activations. It shows empirically that even though the clipping activation function obtains a larger training error for full-precision model, it maintains the same error when applying quantization, whereas training with quantized ReLu activation function does not work in practice because it is unbounded. The experiments are thorough, and report results on many datasets, showing that PACT can reduce down to 4 bits of quantization of weights and activation with a slight loss in accuracy compared to the full-precision model. Related to that, it seams a bit an over claim to state that the accuracy decrease of quantizing the DNN with PACT in comparison with previous quantization methods is much less because the decrease is smaller or equal than 1%, when competing methods accuracy decrease compared to the full-precision model is more than 1%. Also, it is unfair to compare to the full-precision model using clipping, because ReLu activation function in full-precision is the standard and gives much better results, and this should be the reference accuracy. Also, previous methods take as reference the model with ReLu activation function, so it could be that in absolute value the accuracy performance of competing methods is actually higher than when using PACT for quantizing DNN. OTHER COMMENTS: - the list of contributions is a bit strange. 
It seems that the true contribution is number 1 on the list, which is to introduce the parameter α in the activation function that is learned with back-propagation, which reduces the quantization error with respect to using ReLu as activation function. To provide an analysis of why it works and quantitative results, is part of the same contribution I would say.",
"role": "assistant"
}
]
| [
" The authors have addressed my concerns, and clarified a misunderstanding of the baseline that I had, which I appreciate.",
"I do think that it is a solid contribution with thorough experiments.",
"I still keep my original rating of the paper because the method presented is heavily based on previous works, which limits the novelty of the paper.",
"It uses previously proposed clipping activation function for quantization of neural networks, adding a learnable parameter to this function.",
"_______________ ORIGINAL REVIEW: This paper proposes to use a clipping activation function as a replacement of ReLu to train a neural network with quantized weights and activations.",
"It shows empirically that even though the clipping activation function obtains a larger training error for full-precision model, it maintains the same error when applying quantization, whereas training with quantized ReLu activation function does not work in practice because it is unbounded.",
"The experiments are thorough, and report results on many datasets, showing that PACT can reduce down to 4 bits of quantization of weights and activation with a slight loss in accuracy compared to the full-precision model.",
"Related to that, it seams a bit an over claim to state that the accuracy decrease of quantizing the DNN with PACT in comparison with previous quantization methods is much less because the decrease is smaller or equal than 1%, when competing methods accuracy decrease compared to the full-precision model is more than 1%.",
"Also, it is unfair to compare to the full-precision model using clipping, because ReLu activation function in full-precision is the standard and gives much better results, and this should be the reference accuracy.",
"Also, previous methods take as reference the model with ReLu activation function, so it could be that in absolute value the accuracy performance of competing methods is actually higher than when using PACT for quantizing DNN.",
"OTHER COMMENTS: - the list of contributions is a bit strange.",
"It seams that the true contribution is number 1 on the list, which is to introduce the parameter .alpha in the activation function that is learned with back-propagation, which reduces the quantization error with respect to using ReLu as activation function.",
"To provide an analysis of why it works and quantitative results, is part of the same contribution I would say."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 11,
"praise": 3,
"presentation_and_reporting": 0,
"results_and_discussion": 7,
"suggestion_and_solution": 3,
"total": 13
} | 0.230769 | 0 | 0.153846 | 0.846154 | 0.230769 | 0 | 0.538462 | 0.230769 | 13 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.230769 | 2.230769 | 0 |
Learning to Compute Word Embeddings On the Fly | Words in natural language follow a Zipfian distribution whereby some words are frequent but most are rare. Learning representations for words in the ``long tail** of this distribution requires enormous amounts of data. Representations of rare words trained directly on end tasks are usually poor, requiring us to pre-train embeddings on external data, or treat all rare words as out-of-vocabulary words with a unique representation. We provide a method for predicting embeddings of rare words on the fly from small amounts of auxiliary data with a network trained end-to-end for the downstream task. We show that this improves results against baselines where embeddings are trained on the end task for reading comprehension, recognizing textual entailment and language modeling. | This paper illustrates a method to compute produce word embeddings on the fly for rare words, using a pragmatic combination of existing ideas: * Backing off to a separate decoder for rare words a la Luong and Manning (https://arxiv.org/pdf/1604.00788.pdf, should be cited, though the idea might be older). * Using character-level models a la Ling et al. * Using dictionary embeddings a la Hill et al. None of these ideas are new before but I haven’t seen them combined in this way before. This is a very practical idea, well-explained with a thorough set of experiments across three different tasks. The paper is not surprising but this seems like an effective technique for people who want to build effective systems with whatever data they’ve got. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning to Compute Word Embeddings On the Fly\nWords in natural language follow a Zipfian distribution whereby some words are frequent but most are rare. Learning representations for words in the ``long tail** of this distribution requires enormous amounts of data. Representations of rare words trained directly on end tasks are usually poor, requiring us to pre-train embeddings on external data, or treat all rare words as out-of-vocabulary words with a unique representation. We provide a method for predicting embeddings of rare words on the fly from small amounts of auxiliary data with a network trained end-to-end for the downstream task. We show that this improves results against baselines where embeddings are trained on the end task for reading comprehension, recognizing textual entailment and language modeling.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper illustrates a method to compute produce word embeddings on the fly for rare words, using a pragmatic combination of existing ideas: * Backing off to a separate decoder for rare words a la Luong and Manning (https://arxiv.org/pdf/1604.00788.pdf, should be cited, though the idea might be older). * Using character-level models a la Ling et al. * Using dictionary embeddings a la Hill et al. None of these ideas are new before but I haven’t seen them combined in this way before. This is a very practical idea, well-explained with a thorough set of experiments across three different tasks. The paper is not surprising but this seems like an effective technique for people who want to build effective systems with whatever data they’ve got.",
"role": "assistant"
}
]
| [
" This paper illustrates a method to compute produce word embeddings on the fly for rare words, using a pragmatic combination of existing ideas: * Backing off to a separate decoder for rare words a la Luong and Manning (https://arxiv.org/pdf/1604.00788.pdf, should be cited, though the idea might be older).",
"* Using character-level models a la Ling et al.",
"* Using dictionary embeddings a la Hill et al.",
"None of these ideas are new before but I haven’t seen them combined in this way before.",
"This is a very practical idea, well-explained with a thorough set of experiments across three different tasks.",
"The paper is not surprising but this seems like an effective technique for people who want to build effective systems with whatever data they’ve got."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 4,
"praise": 1,
"presentation_and_reporting": 4,
"results_and_discussion": 0,
"suggestion_and_solution": 1,
"total": 6
} | 0.333333 | 0 | 0.333333 | 0.666667 | 0.166667 | 0.666667 | 0 | 0.166667 | 6 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 2.333333 | 1.559994 | 0.773339 |
Associative Conversation Model: Generating Visual Information from Textual Information | In this paper, we propose the Associative Conversation Model that generates visual information from textual information and uses it for generating sentences in order to utilize visual information in a dialogue system without image input. In research on Neural Machine Translation, there are studies that generate translated sentences using both images and sentences, and these studies show that visual information improves translation performance. However, it is not possible to use sentence generation algorithms using images for the dialogue systems since many text-based dialogue systems only accept text input. Our approach generates (associates) visual information from input text and generates response text using context vector fusing associative visual information and sentence textual information. A comparative experiment between our proposed model and a model without association showed that our proposed model is generating useful sentences by associating visual information related to sentences. Furthermore, analysis experiment of visual association showed that our proposed model generates (associates) visual information effective for sentence generation. | The authors describe a method to be used in text dialogue systems. The contribution of the paper relies on the usage of visual information to enhance the performance of a dialogue system. An input phrase is expanded with visual information (visual context vectors), next visual and textual information is merged in a deep model that provides the final answer. Although the idea is interesting, I found the methodology fairly straightforward (only using known models in each step) and not having a particular component as contribution. What is more, the experimental evaluation is limited and purely qualitative: a few examples are provided. Also, I am not convinced on the evaluation protocol: the authors used captions from video+videos as input data, and used "the next caption" as the response of the dialogue system. I am not very familiar with this type of systems, but it is clear to me that the evaluation is biased and does not prove the working hypothesis of the authors. Finally, no comparison with related works is provided. English needs to be polished | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAssociative Conversation Model: Generating Visual Information from Textual Information\nIn this paper, we propose the Associative Conversation Model that generates visual information from textual information and uses it for generating sentences in order to utilize visual information in a dialogue system without image input. In research on Neural Machine Translation, there are studies that generate translated sentences using both images and sentences, and these studies show that visual information improves translation performance. However, it is not possible to use sentence generation algorithms using images for the dialogue systems since many text-based dialogue systems only accept text input. Our approach generates (associates) visual information from input text and generates response text using context vector fusing associative visual information and sentence textual information. A comparative experiment between our proposed model and a model without association showed that our proposed model is generating useful sentences by associating visual information related to sentences. Furthermore, analysis experiment of visual association showed that our proposed model generates (associates) visual information effective for sentence generation.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors describe a method to be used in text dialogue systems. The contribution of the paper relies on the usage of visual information to enhance the performance of a dialogue system. An input phrase is expanded with visual information (visual context vectors), next visual and textual information is merged in a deep model that provides the final answer. Although the idea is interesting, I found the methodology fairly straightforward (only using known models in each step) and not having a particular component as contribution. What is more, the experimental evaluation is limited and purely qualitative: a few examples are provided. Also, I am not convinced on the evaluation protocol: the authors used captions from video+videos as input data, and used \"the next caption\" as the response of the dialogue system. I am not very familiar with this type of systems, but it is clear to me that the evaluation is biased and does not prove the working hypothesis of the authors. Finally, no comparison with related works is provided. English needs to be polished",
"role": "assistant"
}
]
| [
" The authors describe a method to be used in text dialogue systems.",
"The contribution of the paper relies on the usage of visual information to enhance the performance of a dialogue system.",
"An input phrase is expanded with visual information (visual context vectors), next visual and textual information is merged in a deep model that provides the final answer.",
"Although the idea is interesting, I found the methodology fairly straightforward (only using known models in each step) and not having a particular component as contribution.",
"What is more, the experimental evaluation is limited and purely qualitative: a few examples are provided.",
"Also, I am not convinced on the evaluation protocol: the authors used captions from video+videos as input data, and used \"the next caption\" as the response of the dialogue system.",
"I am not very familiar with this type of systems, but it is clear to me that the evaluation is biased and does not prove the working hypothesis of the authors.",
"Finally, no comparison with related works is provided.",
"English needs to be polished"
] | {
"criticism": 5,
"example": 1,
"importance_and_relevance": 2,
"materials_and_methods": 6,
"praise": 2,
"presentation_and_reporting": 4,
"results_and_discussion": 0,
"suggestion_and_solution": 1,
"total": 9
} | 0.555556 | 0.111111 | 0.222222 | 0.666667 | 0.222222 | 0.444444 | 0 | 0.111111 | 9 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 2.333333 | 2.080814 | 0.252519 |
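The Associative Conversation Model in the record above is described only at a high level: a visual context vector is "associated" from the input text and fused with the textual context vector before the response is generated. A minimal sketch of one possible reading follows; the linear association map, the concatenation fusion, and the dimensions are all assumptions, since the excerpt specifies none of them.

```python
import numpy as np

rng = np.random.default_rng(0)
d_text, d_visual = 512, 2048                     # assumed dimensions, not from the paper

# Hypothetical "association" map from a text context vector to a visual one.
W_assoc = rng.normal(scale=0.02, size=(d_visual, d_text))

def associate_and_fuse(text_context):
    """Predict (associate) a visual context vector from text, then fuse both.

    Concatenation is used here only as a placeholder fusion; the paper states
    merely that a context vector fusing visual and textual information is used.
    """
    visual_context = np.tanh(W_assoc @ text_context)        # associated visual information
    return np.concatenate([text_context, visual_context])   # fused context vector

fused = associate_and_fuse(rng.normal(size=d_text))         # shape (d_text + d_visual,)
```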
Building effective deep neural networks one feature at a time | Successful training of convolutional neural networks is often associated with suffi- ciently deep architectures composed of high amounts of features. These networks typically rely on a variety of regularization and pruning techniques to converge to less redundant states. We introduce a novel bottom-up approach to expand representations in fixed-depth architectures. These architectures start from just a single feature per layer and greedily increase width of individual layers to attain effective representational capacities needed for a specific task. While network growth can rely on a family of metrics, we propose a computationally efficient version based on feature time evolution and demonstrate its potency in determin- ing feature importance and a networks’ effective capacity. We demonstrate how automatically expanded architectures converge to similar topologies that benefit from lesser amount of parameters or improved accuracy and exhibit systematic correspondence in representational complexity with the specified task. In contrast to conventional design patterns with a typical monotonic increase in the amount of features with increased depth, we observe that CNNs perform better when there is more learnable parameters in intermediate, with falloffs to earlier and later layers. | This paper introduces a simple correlation-based metric to measure whether filters in neural networks are being used effectively, as a proxy for effective capacity. The authors then introduce a greedy algorithm that expands the different layers in a neural network until the metric indicates that additional features will end up not being used effectively. The application of this algorithm is shown to lead to architectures that differ substantially from hand-designed models with the same number of layers: most of the parameters end up in intermediate layers, with fewer parameters in earlier and later layers. This indicates that common heuristics to divide capacity over the layers of a network are suboptimal, as they tend to put most parameters in later layers. It*s also nice that simpler tasks yield smaller models (e.g. MNIST vs. CIFAR in figure 3). The experimental section is comprehensive and the results are convincing. I especially appreciate the detailed analysis of the results (figure 3 is great). Although most experiments were conducted on the classic benchmark datasets of MNIST, CIFAR-10 and CIFAR-100, the paper also includes some promising preliminary results on ImageNet, which nicely demonstrates that the technique scales to more practical problems as well. That said, it would be nice to demonstrate that the algorithm also works for other tasks than image classification. I also like the alternative perspective compared to pruning approaches, which most research seems to have been focused on in the past. The observation that the cross-correlation of a weight vector with its initial values is a good measure for effective filter use seems obvious in retrospect, but hindsight is 20/20 and the fact is that apparently this hasn*t been tried before. It is definitely surprising that a simple method like this ends up working this well. The fact that all parameters are reinitialised whenever any layer width changes seems odd at first, but I think it is sufficiently justified. It would be nice to see some comparison experiments as well though, as the intuitive thing to do would be to just keep the existing weights as they are. 
Other remarks: Formula (2) seems needlessly complicated because of all the additional indices. Maybe removing some of those would make things easier to parse. It would also help to mention that it is basically just a normalised cross-correlation. This is mentioned two paragraphs down, but should probably be mentioned right before the formula is given instead. page 6, section 3.1: "it requires convergent training of a huge architecture with lots of regularization before complexity can be introduced", I guess this should be "reduced" instead of "introduced". | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nBuilding effective deep neural networks one feature at a time\nSuccessful training of convolutional neural networks is often associated with suffi- ciently deep architectures composed of high amounts of features. These networks typically rely on a variety of regularization and pruning techniques to converge to less redundant states. We introduce a novel bottom-up approach to expand representations in fixed-depth architectures. These architectures start from just a single feature per layer and greedily increase width of individual layers to attain effective representational capacities needed for a specific task. While network growth can rely on a family of metrics, we propose a computationally efficient version based on feature time evolution and demonstrate its potency in determin- ing feature importance and a networks’ effective capacity. We demonstrate how automatically expanded architectures converge to similar topologies that benefit from lesser amount of parameters or improved accuracy and exhibit systematic correspondence in representational complexity with the specified task. In contrast to conventional design patterns with a typical monotonic increase in the amount of features with increased depth, we observe that CNNs perform better when there is more learnable parameters in intermediate, with falloffs to earlier and later layers.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper introduces a simple correlation-based metric to measure whether filters in neural networks are being used effectively, as a proxy for effective capacity. The authors then introduce a greedy algorithm that expands the different layers in a neural network until the metric indicates that additional features will end up not being used effectively. The application of this algorithm is shown to lead to architectures that differ substantially from hand-designed models with the same number of layers: most of the parameters end up in intermediate layers, with fewer parameters in earlier and later layers. This indicates that common heuristics to divide capacity over the layers of a network are suboptimal, as they tend to put most parameters in later layers. It*s also nice that simpler tasks yield smaller models (e.g. MNIST vs. CIFAR in figure 3). The experimental section is comprehensive and the results are convincing. I especially appreciate the detailed analysis of the results (figure 3 is great). Although most experiments were conducted on the classic benchmark datasets of MNIST, CIFAR-10 and CIFAR-100, the paper also includes some promising preliminary results on ImageNet, which nicely demonstrates that the technique scales to more practical problems as well. That said, it would be nice to demonstrate that the algorithm also works for other tasks than image classification. I also like the alternative perspective compared to pruning approaches, which most research seems to have been focused on in the past. The observation that the cross-correlation of a weight vector with its initial values is a good measure for effective filter use seems obvious in retrospect, but hindsight is 20/20 and the fact is that apparently this hasn*t been tried before. It is definitely surprising that a simple method like this ends up working this well. The fact that all parameters are reinitialised whenever any layer width changes seems odd at first, but I think it is sufficiently justified. It would be nice to see some comparison experiments as well though, as the intuitive thing to do would be to just keep the existing weights as they are. 
Other remarks: Formula (2) seems needlessly complicated because of all the additional indices. Maybe removing some of those would make things easier to parse. It would also help to mention that it is basically just a normalised cross-correlation. This is mentioned two paragraphs down, but should probably be mentioned right before the formula is given instead. page 6, section 3.1: \"it requires convergent training of a huge architecture with lots of regularization before complexity can be introduced\", I guess this should be \"reduced\" instead of \"introduced\".",
"role": "assistant"
}
]
| [
" This paper introduces a simple correlation-based metric to measure whether filters in neural networks are being used effectively, as a proxy for effective capacity.",
"The authors then introduce a greedy algorithm that expands the different layers in a neural network until the metric indicates that additional features will end up not being used effectively.",
"The application of this algorithm is shown to lead to architectures that differ substantially from hand-designed models with the same number of layers: most of the parameters end up in intermediate layers, with fewer parameters in earlier and later layers.",
"This indicates that common heuristics to divide capacity over the layers of a network are suboptimal, as they tend to put most parameters in later layers.",
"It*s also nice that simpler tasks yield smaller models (e.g.",
"MNIST vs. CIFAR in figure 3).",
"The experimental section is comprehensive and the results are convincing.",
"I especially appreciate the detailed analysis of the results (figure 3 is great).",
"Although most experiments were conducted on the classic benchmark datasets of MNIST, CIFAR-10 and CIFAR-100, the paper also includes some promising preliminary results on ImageNet, which nicely demonstrates that the technique scales to more practical problems as well.",
"That said, it would be nice to demonstrate that the algorithm also works for other tasks than image classification.",
"I also like the alternative perspective compared to pruning approaches, which most research seems to have been focused on in the past.",
"The observation that the cross-correlation of a weight vector with its initial values is a good measure for effective filter use seems obvious in retrospect, but hindsight is 20/20 and the fact is that apparently this hasn*t been tried before.",
"It is definitely surprising that a simple method like this ends up working this well.",
"The fact that all parameters are reinitialised whenever any layer width changes seems odd at first, but I think it is sufficiently justified.",
"It would be nice to see some comparison experiments as well though, as the intuitive thing to do would be to just keep the existing weights as they are.",
"Other remarks: Formula (2) seems needlessly complicated because of all the additional indices.",
"Maybe removing some of those would make things easier to parse.",
"It would also help to mention that it is basically just a normalised cross-correlation.",
"This is mentioned two paragraphs down, but should probably be mentioned right before the formula is given instead.",
"page 6, section 3.1: \"it requires convergent training of a huge architecture with lots of regularization before complexity can be introduced\", I guess this should be \"reduced\" instead of \"introduced\"."
] | {
"criticism": 1,
"example": 2,
"importance_and_relevance": 2,
"materials_and_methods": 16,
"praise": 4,
"presentation_and_reporting": 5,
"results_and_discussion": 7,
"suggestion_and_solution": 7,
"total": 20
} | 0.05 | 0.1 | 0.1 | 0.8 | 0.2 | 0.25 | 0.35 | 0.35 | 20 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.2 | 1.426661 | 0.773339 |
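The review above notes that the paper's expansion criterion is essentially a normalised cross-correlation between a filter's current weights and its initial values; the exact definition (Formula (2)) is not reproduced in this record. The sketch below is therefore only one plausible reading, and the function names and the 0.5 threshold are illustrative assumptions.

```python
import numpy as np

def self_resemblance(w_now, w_init, eps=1e-8):
    """Normalised cross-correlation of a filter's current vs. initial weights.

    Values near 1 suggest the filter barely moved from its initialisation
    (capacity that is not being used effectively); values near 0 suggest it
    changed substantially during training.
    """
    w_now, w_init = w_now.ravel(), w_init.ravel()
    denom = np.linalg.norm(w_now) * np.linalg.norm(w_init) + eps
    return float(np.abs(w_now @ w_init) / denom)

def count_effective_filters(layer_now, layer_init, threshold=0.5):
    """Count filters in one layer whose weights drifted away from initialisation."""
    return sum(self_resemblance(w, w0) < threshold
               for w, w0 in zip(layer_now, layer_init))
```

Under this reading, a layer stops growing once newly added filters keep scoring above the threshold, i.e. they never move far from their random initial state.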
Learning an Embedding Space for Transferable Robot Skills | We present a method for reinforcement learning of closely related skills that are parameterized via a skill embedding space. We learn such skills by taking advantage of latent variables and exploiting a connection between reinforcement learning and variational inference. The main contribution of our work is an entropy-regularized policy gradient formulation for hierarchical policies, and an associated, data-efficient and robust off-policy gradient algorithm based on stochastic value gradients. We demonstrate the effectiveness of our method on several simulated robotic manipulation tasks. We find that our method allows for discovery of multiple solutions and is capable of learning the minimum number of distinct skills that are necessary to solve a given set of tasks. In addition, our results indicate that the hereby proposed technique can interpolate and/or sequence previously learned skills in order to accomplish more complex tasks, even in the presence of sparse rewards. | The submission tackles an important problem of learning and transferring multiple motor skills. The approach relies on using an embedding space defined by latent variables and entropy-regularized policy gradient / variational inference formulation that encourages diversity and identifiability in latent space. The exposition is clear and the method is well-motivated. I see no issues with the mathematical correctness of the claims made in the paper. The experimental results are both instructive of how the algorithm operates (in the particle example), and contain impressive robotic results. I appreciated the experiments that investigated cases where true number of tasks and the parameter T differ, showing that the approach is robust to choice of T. The submission focuses particularly on discrete tasks and learning to sequence discrete tasks (as training requires a one-hot task ID input). I would like a bit of discussion on whether parameterized skills (that have continuous space of target location, or environment parameters, for example) can be supported in the current formulation, and what would be necessary if not. Overall, I believe this is in interesting piece of work at a fruitful intersection of reinforcement learning and variational inference, and I believe would be of interest to ICLR community. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning an Embedding Space for Transferable Robot Skills\nWe present a method for reinforcement learning of closely related skills that are parameterized via a skill embedding space. We learn such skills by taking advantage of latent variables and exploiting a connection between reinforcement learning and variational inference. The main contribution of our work is an entropy-regularized policy gradient formulation for hierarchical policies, and an associated, data-efficient and robust off-policy gradient algorithm based on stochastic value gradients. We demonstrate the effectiveness of our method on several simulated robotic manipulation tasks. We find that our method allows for discovery of multiple solutions and is capable of learning the minimum number of distinct skills that are necessary to solve a given set of tasks. In addition, our results indicate that the hereby proposed technique can interpolate and/or sequence previously learned skills in order to accomplish more complex tasks, even in the presence of sparse rewards.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The submission tackles an important problem of learning and transferring multiple motor skills. The approach relies on using an embedding space defined by latent variables and entropy-regularized policy gradient / variational inference formulation that encourages diversity and identifiability in latent space. The exposition is clear and the method is well-motivated. I see no issues with the mathematical correctness of the claims made in the paper. The experimental results are both instructive of how the algorithm operates (in the particle example), and contain impressive robotic results. I appreciated the experiments that investigated cases where true number of tasks and the parameter T differ, showing that the approach is robust to choice of T. The submission focuses particularly on discrete tasks and learning to sequence discrete tasks (as training requires a one-hot task ID input). I would like a bit of discussion on whether parameterized skills (that have continuous space of target location, or environment parameters, for example) can be supported in the current formulation, and what would be necessary if not. Overall, I believe this is in interesting piece of work at a fruitful intersection of reinforcement learning and variational inference, and I believe would be of interest to ICLR community.",
"role": "assistant"
}
]
| [
" The submission tackles an important problem of learning and transferring multiple motor skills.",
"The approach relies on using an embedding space defined by latent variables and entropy-regularized policy gradient / variational inference formulation that encourages diversity and identifiability in latent space.",
"The exposition is clear and the method is well-motivated.",
"I see no issues with the mathematical correctness of the claims made in the paper.",
"The experimental results are both instructive of how the algorithm operates (in the particle example), and contain impressive robotic results.",
"I appreciated the experiments that investigated cases where true number of tasks and the parameter T differ, showing that the approach is robust to choice of T. The submission focuses particularly on discrete tasks and learning to sequence discrete tasks (as training requires a one-hot task ID input).",
"I would like a bit of discussion on whether parameterized skills (that have continuous space of target location, or environment parameters, for example) can be supported in the current formulation, and what would be necessary if not.",
"Overall, I believe this is in interesting piece of work at a fruitful intersection of reinforcement learning and variational inference, and I believe would be of interest to ICLR community."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 5,
"praise": 5,
"presentation_and_reporting": 0,
"results_and_discussion": 3,
"suggestion_and_solution": 1,
"total": 8
} | 0.125 | 0 | 0.25 | 0.625 | 0.625 | 0 | 0.375 | 0.125 | 8 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.125 | 1.730439 | 0.394561 |
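The abstract above centres on an entropy-regularised policy-gradient formulation for hierarchical policies. The record does not spell out that objective, so the sketch below shows only the generic entropy-augmented return such formulations build on; the discount and temperature values are arbitrary, and the paper's latent-variable terms are not represented.

```python
def entropy_regularized_return(rewards, entropies, gamma=0.99, alpha=0.01):
    """Discounted return with a per-step entropy bonus (generic form only).

    rewards: per-step rewards; entropies: per-step policy entropies H(pi(.|s_t));
    alpha trades off reward maximisation against behavioural diversity.
    """
    ret = 0.0
    for r, h in zip(reversed(rewards), reversed(entropies)):
        ret = r + alpha * h + gamma * ret
    return ret

print(entropy_regularized_return([0.0, 0.0, 1.0], [1.2, 0.9, 0.5]))
```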
A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries however these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit "exposure bias" - they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL the resulting summaries become more readable. We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries. | The paper proposes a model for abstractive document summarization using a self-critical policy gradient training algorithm, which is mixed with maximum likelihood objective. The Seq2seq architecture incorporates both intra-temporal and intra-decoder attention, and a pointer copying mechanism. A hard constraint is imposed during decoding to avoid trigram repetition. Most of the modelling ideas already exists, but this paper show how they can be applied as a strong summarization model. The approach obtains strong results on the CNN/Daily Mail and NYT datasets. Results show that intra-attention improves performance for only one of the datasets. RL results are reported with only the best-performing attention setup for each dataset. My concern with that is that the authors might be using the test set for model selection; It is not a priori clear that the setup that works better for ML should also be better for RL, especially as it is not the same across datasets. So I suggest that results for RL should be reported with and without intra-attention on both datasets, at least on the validation set. It is shown that intra-decoder attention decoder improves performance on longer sentences. It would be interesting to see more analysis on this, especially analyzing what the mechanism is attending to, as it is less clear what its interpretation should be than for intra-temporal attention. Further ablations such as the effect of the trigram repetition constraint will also help to analyse the contribution of different modelling choices to the performance. For the mixed decoding objective, how is the mixing weight chosen and what is its effect on performance? If it is purely a scaling factor, how is the scale quantified? It is claimed that readability correlates with perplexity, so it would be interesting to see perplexity results for the models. The lack of correlation between automatic and human evaluation raises interesting questions about the evaluation of abstractive summarization that should be investigated further in future work. This is a strong paper that presents a significant improvement in document summarization. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. 
Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nA Deep Reinforced Model for Abstractive Summarization\nAttentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries however these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit \"exposure bias\" - they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL the resulting summaries become more readable. We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper proposes a model for abstractive document summarization using a self-critical policy gradient training algorithm, which is mixed with maximum likelihood objective. The Seq2seq architecture incorporates both intra-temporal and intra-decoder attention, and a pointer copying mechanism. A hard constraint is imposed during decoding to avoid trigram repetition. Most of the modelling ideas already exists, but this paper show how they can be applied as a strong summarization model. The approach obtains strong results on the CNN/Daily Mail and NYT datasets. Results show that intra-attention improves performance for only one of the datasets. RL results are reported with only the best-performing attention setup for each dataset. My concern with that is that the authors might be using the test set for model selection; It is not a priori clear that the setup that works better for ML should also be better for RL, especially as it is not the same across datasets. So I suggest that results for RL should be reported with and without intra-attention on both datasets, at least on the validation set. It is shown that intra-decoder attention decoder improves performance on longer sentences. It would be interesting to see more analysis on this, especially analyzing what the mechanism is attending to, as it is less clear what its interpretation should be than for intra-temporal attention. Further ablations such as the effect of the trigram repetition constraint will also help to analyse the contribution of different modelling choices to the performance. For the mixed decoding objective, how is the mixing weight chosen and what is its effect on performance? If it is purely a scaling factor, how is the scale quantified? It is claimed that readability correlates with perplexity, so it would be interesting to see perplexity results for the models. The lack of correlation between automatic and human evaluation raises interesting questions about the evaluation of abstractive summarization that should be investigated further in future work. This is a strong paper that presents a significant improvement in document summarization.",
"role": "assistant"
}
]
| [
" The paper proposes a model for abstractive document summarization using a self-critical policy gradient training algorithm, which is mixed with maximum likelihood objective.",
"The Seq2seq architecture incorporates both intra-temporal and intra-decoder attention, and a pointer copying mechanism.",
"A hard constraint is imposed during decoding to avoid trigram repetition.",
"Most of the modelling ideas already exists, but this paper show how they can be applied as a strong summarization model.",
"The approach obtains strong results on the CNN/Daily Mail and NYT datasets.",
"Results show that intra-attention improves performance for only one of the datasets.",
"RL results are reported with only the best-performing attention setup for each dataset.",
"My concern with that is that the authors might be using the test set for model selection; It is not a priori clear that the setup that works better for ML should also be better for RL, especially as it is not the same across datasets.",
"So I suggest that results for RL should be reported with and without intra-attention on both datasets, at least on the validation set.",
"It is shown that intra-decoder attention decoder improves performance on longer sentences.",
"It would be interesting to see more analysis on this, especially analyzing what the mechanism is attending to, as it is less clear what its interpretation should be than for intra-temporal attention.",
"Further ablations such as the effect of the trigram repetition constraint will also help to analyse the contribution of different modelling choices to the performance.",
"For the mixed decoding objective, how is the mixing weight chosen and what is its effect on performance?",
"If it is purely a scaling factor, how is the scale quantified?",
"It is claimed that readability correlates with perplexity, so it would be interesting to see perplexity results for the models.",
"The lack of correlation between automatic and human evaluation raises interesting questions about the evaluation of abstractive summarization that should be investigated further in future work.",
"This is a strong paper that presents a significant improvement in document summarization."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 4,
"materials_and_methods": 14,
"praise": 3,
"presentation_and_reporting": 5,
"results_and_discussion": 10,
"suggestion_and_solution": 6,
"total": 17
} | 0.117647 | 0 | 0.235294 | 0.823529 | 0.176471 | 0.294118 | 0.588235 | 0.352941 | 17 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.588235 | 2.335716 | 0.252519 |
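The review above asks how the weight mixing the maximum-likelihood and self-critical RL objectives is chosen; the record does not give the value. The sketch below only makes the role of that weight concrete: the gamma default and the function signature are placeholders, and the reward would typically be a ROUGE score of the decoded summary.

```python
import numpy as np

def mixed_objective(log_probs_sampled, reward_sampled, reward_greedy,
                    nll_reference, gamma=0.99):
    """Blend a self-critical policy-gradient loss with the standard ML loss.

    log_probs_sampled: per-token log-probabilities of a summary sampled from
    the model; reward_sampled / reward_greedy: sequence-level rewards of the
    sampled vs. greedily decoded summary; nll_reference: negative
    log-likelihood of the ground-truth summary; gamma: mixing weight.
    """
    # Self-critical term: samples that beat the greedy baseline are
    # reinforced, samples that fall short are suppressed.
    loss_rl = (reward_greedy - reward_sampled) * np.sum(log_probs_sampled)
    return gamma * loss_rl + (1.0 - gamma) * nll_reference
```

The scalar gamma is exactly the mixing weight the review asks about: it trades the sequence-level reward term against likelihood of the reference summary.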
Mitigating Adversarial Effects Through Randomization | Convolutional neural networks have demonstrated high accuracy on various tasks in recent years. However, they are extremely vulnerable to adversarial examples. For example, imperceptible perturbations added to clean images can cause convolutional neural networks to fail. In this paper, we propose to utilize randomization at inference time to mitigate adversarial effects. Specifically, we use two randomization operations: random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input images in a random manner. Extensive experiments demonstrate that the proposed randomization method is very effective at defending against both single-step and iterative attacks. Our method provides the following advantages: 1) no additional training or fine-tuning, 2) very few additional computations, 3) compatible with other adversarial defense methods. By combining the proposed randomization method with an adversarially trained model, it achieves a normalized score of 0.924 (ranked No.2 among 107 defense teams) in the NIPS 2017 adversarial examples defense challenge, which is far better than using adversarial training alone with a normalized score of 0.773 (ranked No.56). The code is public available at https://github.com/cihangxie/NIPS2017_adv_challenge_defense. | This paper proposes an extremely simple methodology to improve the network*s performance by adding extra random perturbations (resizing/padding) at evaluation time. Although the paper is very basic, it creates a good baseline for defending about various types of attacks and got good results in kaggle competition. The main merit of the paper is to study this simple but efficient baseline method extensively and shows how adversarial attacks can be mitigated by some extent. Cons of the paper: there is not much novel insight or really exciting new ideas presented. Pros: It gives a convincing very simple baseline and the evaluation of all subsequent results on defending against adversaries will need to incorporate this simple defense method in addition to any future proposed defenses, since it is very easy to implement and evaluate and seems to improve the defense capabilities of the network to a significant degree. So I assume that this paper will be influential in the future just by the virtue of its easy applicability and effectiveness. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. 
I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMitigating Adversarial Effects Through Randomization\nConvolutional neural networks have demonstrated high accuracy on various tasks in recent years. However, they are extremely vulnerable to adversarial examples. For example, imperceptible perturbations added to clean images can cause convolutional neural networks to fail. In this paper, we propose to utilize randomization at inference time to mitigate adversarial effects. Specifically, we use two randomization operations: random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input images in a random manner. Extensive experiments demonstrate that the proposed randomization method is very effective at defending against both single-step and iterative attacks. Our method provides the following advantages: 1) no additional training or fine-tuning, 2) very few additional computations, 3) compatible with other adversarial defense methods. By combining the proposed randomization method with an adversarially trained model, it achieves a normalized score of 0.924 (ranked No.2 among 107 defense teams) in the NIPS 2017 adversarial examples defense challenge, which is far better than using adversarial training alone with a normalized score of 0.773 (ranked No.56). The code is public available at https://github.com/cihangxie/NIPS2017_adv_challenge_defense.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes an extremely simple methodology to improve the network*s performance by adding extra random perturbations (resizing/padding) at evaluation time. Although the paper is very basic, it creates a good baseline for defending about various types of attacks and got good results in kaggle competition. The main merit of the paper is to study this simple but efficient baseline method extensively and shows how adversarial attacks can be mitigated by some extent. Cons of the paper: there is not much novel insight or really exciting new ideas presented. Pros: It gives a convincing very simple baseline and the evaluation of all subsequent results on defending against adversaries will need to incorporate this simple defense method in addition to any future proposed defenses, since it is very easy to implement and evaluate and seems to improve the defense capabilities of the network to a significant degree. So I assume that this paper will be influential in the future just by the virtue of its easy applicability and effectiveness.",
"role": "assistant"
}
]
| [
" This paper proposes an extremely simple methodology to improve the network*s performance by adding extra random perturbations (resizing/padding) at evaluation time.",
"Although the paper is very basic, it creates a good baseline for defending about various types of attacks and got good results in kaggle competition.",
"The main merit of the paper is to study this simple but efficient baseline method extensively and shows how adversarial attacks can be mitigated by some extent.",
"Cons of the paper: there is not much novel insight or really exciting new ideas presented.",
"Pros: It gives a convincing very simple baseline and the evaluation of all subsequent results on defending against adversaries will need to incorporate this simple defense method in addition to any future proposed defenses, since it is very easy to implement and evaluate and seems to improve the defense capabilities of the network to a significant degree.",
"So I assume that this paper will be influential in the future just by the virtue of its easy applicability and effectiveness."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 3,
"praise": 4,
"presentation_and_reporting": 0,
"results_and_discussion": 2,
"suggestion_and_solution": 1,
"total": 6
} | 0.166667 | 0 | 0.5 | 0.5 | 0.666667 | 0 | 0.333333 | 0.166667 | 6 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.333333 | 1.559994 | 0.773339 |
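The defence in the record above is fully specified by its abstract: at inference time the input is resized to a random size and then zero-padded at a random position, with no retraining. A dependency-free sketch follows; the 331-pixel canvas and the 299-pixel input are assumptions chosen to resemble the NIPS-challenge setting the abstract mentions, and the nearest-neighbour resize is only a stand-in for a proper image-resize routine.

```python
import numpy as np

def randomize_input(image, canvas=331, rng=None):
    """Random-resize then random-zero-pad an image before classification.

    image: float array of shape (H, W, C) with H, W < canvas.
    """
    rng = rng or np.random.default_rng()
    h, w, c = image.shape
    new_size = int(rng.integers(min(h, w), canvas))             # random target size
    rows = (np.arange(new_size) * h // new_size).astype(int)    # nearest-neighbour rows
    cols = (np.arange(new_size) * w // new_size).astype(int)    # nearest-neighbour cols
    resized = image[rows][:, cols]
    top = int(rng.integers(0, canvas - new_size + 1))            # random vertical offset
    left = int(rng.integers(0, canvas - new_size + 1))           # random horizontal offset
    padded = np.zeros((canvas, canvas, c), dtype=image.dtype)
    padded[top:top + new_size, left:left + new_size] = resized
    return padded

out = randomize_input(np.zeros((299, 299, 3)))                   # shape (331, 331, 3)
```

Because both the size and the placement change on every forward pass, a perturbation optimised against one particular transform meets a different one at test time, which is the intuition behind the defence.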