Dataset schema:

| column | type | range / values |
|---|---|---|
| title | string | lengths 2–153 |
| abstract | string | lengths 0–2.15k |
| review | string | lengths 97–8.43k |
| source | string | 4 classes |
| review_src | string | 1 class |
| thinking_trace | string | 1 class |
| conversations | list | length 2 |
| review_sentences | sequence | lengths 1–61 |
| criteria_count | dict | per-category integer counts |
| criticism | float64 | 0–1 |
| example | float64 | 0–1 |
| importance_and_relevance | float64 | 0–1 |
| materials_and_methods | float64 | 0–1 |
| praise | float64 | 0–1 |
| presentation_and_reporting | float64 | 0–1 |
| results_and_discussion | float64 | 0–1 |
| suggestion_and_solution | float64 | 0–1 |
| total | int64 | 1–61 |
| has_criticism | int64 | 0–1 |
| has_example | int64 | 0–1 |
| has_importance_and_relevance | int64 | 0–1 |
| has_materials_and_methods | int64 | 0–1 |
| has_praise | int64 | 0–1 |
| has_presentation_and_reporting | int64 | 0–1 |
| has_results_and_discussion | int64 | 0–1 |
| has_suggestion_and_solution | int64 | 0–1 |
| reward_value | float64 | 2.13–6 |
| reward_value_length_adjusted | float64 | -34.13–3.73 |
| length_penalty | float64 | 0–36.4 |

Each record below lists these fields in this order, separated by `|`.
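The records can be read with the `datasets` library; a minimal sketch, assuming the dataset is published on the Hugging Face Hub (the repo id and split name below are placeholders, not the real ones):

```python
# Minimal loading sketch. Assumes the dataset lives on the Hugging Face
# Hub; "your-org/reviewer-reward-data" is a placeholder repo id and the
# split name is assumed.
from datasets import load_dataset

ds = load_dataset("your-org/reviewer-reward-data", split="train")

row = ds[0]
print(row["title"])                      # paper title
print(row["source"], row["review_src"])  # e.g. "iclr2017", "OpenReview"
print(row["criteria_count"])             # per-category sentence counts
# In the sample records shown here, `total` equals the number of
# entries in `review_sentences`:
assert row["total"] == len(row["review_sentences"])
```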
Making Neural Programming Architectures Generalize via Recursion | Empirically, neural networks that attempt to learn programs from data have exhibited poor generalizability. Moreover, it has traditionally been difficult to reason about the behavior of these models beyond a certain level of input complexity. In order to address these issues, we propose augmenting neural architectures with a key abstraction: recursion. As an application, we implement recursion in the Neural Programmer-Interpreter framework on four tasks: grade-school addition, bubble sort, topological sort, and quicksort. We demonstrate superior generalizability and interpretability with small amounts of training data. Recursion divides the problem into smaller pieces and drastically reduces the domain of each neural network component, making it tractable to prove guarantees about the overall system’s behavior. Our experience suggests that in order for neural architectures to robustly learn program semantics, it is necessary to incorporate a concept like recursion. | This paper improves significantly upon the original NPI work, showing that the model generalizes far better when trained on traces in recursive form. The authors show better sample complexity and generalization results for addition and bubblesort programs, and add two new and more interesting tasks - topological sort and quicksort (added based on reviewer discussion). Furthermore, they actually *prove* that the algorithms learned by the model generalize perfectly, which to my knowledge is the first time this has been done in neural program induction. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMaking Neural Programming Architectures Generalize via Recursion\nEmpirically, neural networks that attempt to learn programs from data have exhibited poor generalizability. Moreover, it has traditionally been difficult to reason about the behavior of these models beyond a certain level of input complexity. In order to address these issues, we propose augmenting neural architectures with a key abstraction: recursion. As an application, we implement recursion in the Neural Programmer-Interpreter framework on four tasks: grade-school addition, bubble sort, topological sort, and quicksort. We demonstrate superior generalizability and interpretability with small amounts of training data. Recursion divides the problem into smaller pieces and drastically reduces the domain of each neural network component, making it tractable to prove guarantees about the overall system’s behavior. Our experience suggests that in order for neural architectures to robustly learn program semantics, it is necessary to incorporate a concept like recursion.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper improves significantly upon the original NPI work, showing that the model generalizes far better when trained on traces in recursive form. The authors show better sample complexity and generalization results for addition and bubblesort programs, and add two new and more interesting tasks - topological sort and quicksort (added based on reviewer discussion). Furthermore, they actually *prove* that the algorithms learned by the model generalize perfectly, which to my knowledge is the first time this has been done in neural program induction.",
"role": "assistant"
}
]
| [
"This paper improves significantly upon the original NPI work, showing that the model generalizes far better when trained on traces in recursive form.",
"The authors show better sample complexity and generalization results for addition and bubblesort programs, and add two new and more interesting tasks - topological sort and quicksort (added based on reviewer discussion).",
"Furthermore, they actually *prove* that the algorithms learned by the model generalize perfectly, which to my knowledge is the first time this has been done in neural program induction."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 3,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 3
} | criticism=0 | example=0 | importance_and_relevance=0.333333 | materials_and_methods=1 | praise=0.666667 | presentation_and_reporting=0 | results_and_discussion=0.333333 | suggestion_and_solution=0.333333 | total=3 | has_criticism=0 | has_example=0 | has_importance_and_relevance=1 | has_materials_and_methods=1 | has_praise=1 | has_presentation_and_reporting=0 | has_results_and_discussion=1 | has_suggestion_and_solution=1 | reward_value=2.666667 | reward_value_length_adjusted=1.088424 | length_penalty=1.578243 |
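In every complete record shown here, the trailing per-category scores equal `criteria_count[k] / total`, each `has_*` flag is 1 exactly when the corresponding count is nonzero, `reward_value` is the summed label count divided by the number of review sentences, and `reward_value_length_adjusted` is `reward_value` minus `length_penalty`. A quick sketch re-deriving the first record's values (these relationships are inferred from the sample rows, not from a stated spec; how `length_penalty` itself is computed is not shown here):

```python
# Re-deriving the numeric tail of the first record from its
# criteria_count. Inferred from the sample rows, not an official spec.
counts = {
    "criticism": 0, "example": 0, "importance_and_relevance": 1,
    "materials_and_methods": 3, "praise": 2,
    "presentation_and_reporting": 0, "results_and_discussion": 1,
    "suggestion_and_solution": 1,
}
total = 3  # number of sentences in the review

fractions = {k: round(v / total, 6) for k, v in counts.items()}
assert fractions["praise"] == 0.666667           # matches the record
has_flags = {f"has_{k}": int(v > 0) for k, v in counts.items()}
reward_value = sum(counts.values()) / total      # 8 / 3 ~= 2.666667
length_penalty = 1.578243                        # taken from the record
adjusted = reward_value - length_penalty         # ~= 1.088424, as recorded
```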
End-to-end Optimized Image Compression | We describe an image compression method, consisting of a nonlinear analysis transformation, a uniform quantizer, and a nonlinear synthesis transformation. The transforms are constructed in three successive stages of convolutional linear filters and nonlinear activation functions. Unlike most convolutional neural networks, the joint nonlinearity is chosen to implement a form of local gain control, inspired by those used to model biological neurons. Using a variant of stochastic gradient descent, we jointly optimize the entire model for rate-distortion performance over a database of training images, introducing a continuous proxy for the discontinuous loss function arising from the quantizer. Under certain conditions, the relaxed loss function may be interpreted as the log likelihood of a generative model, as implemented by a variational autoencoder. Unlike these models, however, the compression model must operate at any given point along the rate-distortion curve, as specified by a trade-off parameter. Across an independent set of test images, we find that the optimized method generally exhibits better rate-distortion performance than the standard JPEG and JPEG 2000 compression methods. More importantly, we observe a dramatic improvement in visual quality for all images at all bit rates, which is supported by objective quality estimates using MS-SSIM. | This is the most convincing paper on image compression with deep neural networks that I have read so far. The paper is very well written, the use of the rate-distortion theory in the objective fits smoothly in the framework. The paper is compared to a reasonable baseline (JPEG2000, as opposed to previous papers only considering JPEG). I would expect this paper to have a very good impact. Yes, please include results on Lena/Barbara/Baboon (sorry, not Gibbons), along with state-of-the-art references with more classical methods such as the one I mentioned in my questions. I think it is important to clearly state how NN compare to best previous methods. From the submitted version, I still don*t know how both categories of methods are positioned. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEnd-to-end Optimized Image Compression\nWe describe an image compression method, consisting of a nonlinear analysis transformation, a uniform quantizer, and a nonlinear synthesis transformation. The transforms are constructed in three successive stages of convolutional linear filters and nonlinear activation functions. Unlike most convolutional neural networks, the joint nonlinearity is chosen to implement a form of local gain control, inspired by those used to model biological neurons. Using a variant of stochastic gradient descent, we jointly optimize the entire model for rate-distortion performance over a database of training images, introducing a continuous proxy for the discontinuous loss function arising from the quantizer. Under certain conditions, the relaxed loss function may be interpreted as the log likelihood of a generative model, as implemented by a variational autoencoder. Unlike these models, however, the compression model must operate at any given point along the rate-distortion curve, as specified by a trade-off parameter. Across an independent set of test images, we find that the optimized method generally exhibits better rate-distortion performance than the standard JPEG and JPEG 2000 compression methods. More importantly, we observe a dramatic improvement in visual quality for all images at all bit rates, which is supported by objective quality estimates using MS-SSIM.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis is the most convincing paper on image compression with deep neural networks that I have read so far. The paper is very well written, the use of the rate-distortion theory in the objective fits smoothly in the framework. The paper is compared to a reasonable baseline (JPEG2000, as opposed to previous papers only considering JPEG). I would expect this paper to have a very good impact. Yes, please include results on Lena/Barbara/Baboon (sorry, not Gibbons), along with state-of-the-art references with more classical methods such as the one I mentioned in my questions. I think it is important to clearly state how NN compare to best previous methods. From the submitted version, I still don*t know how both categories of methods are positioned.",
"role": "assistant"
}
]
| [
"This is the most convincing paper on image compression with deep neural networks that I have read so far.",
"The paper is very well written, the use of the rate-distortion theory in the objective fits smoothly in the framework.",
"The paper is compared to a reasonable baseline (JPEG2000, as opposed to previous papers only considering JPEG).",
"I would expect this paper to have a very good impact.",
"Yes, please include results on Lena/Barbara/Baboon (sorry, not Gibbons), along with state-of-the-art references with more classical methods such as the one I mentioned in my questions.",
"I think it is important to clearly state how NN compare to best previous methods.",
"From the submitted version, I still don*t know how both categories of methods are positioned."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 5,
"praise": 2,
"presentation_and_reporting": 3,
"results_and_discussion": 1,
"suggestion_and_solution": 2,
"total": 7
} | criticism=0.142857 | example=0 | importance_and_relevance=0.142857 | materials_and_methods=0.714286 | praise=0.285714 | presentation_and_reporting=0.428571 | results_and_discussion=0.142857 | suggestion_and_solution=0.285714 | total=7 | has_criticism=1 | has_example=0 | has_importance_and_relevance=1 | has_materials_and_methods=1 | has_praise=1 | has_presentation_and_reporting=1 | has_results_and_discussion=1 | has_suggestion_and_solution=1 | reward_value=2.142857 | reward_value_length_adjusted=1.57469 | length_penalty=0.568167 |
End-to-end Optimized Image Compression | We describe an image compression method, consisting of a nonlinear analysis transformation, a uniform quantizer, and a nonlinear synthesis transformation. The transforms are constructed in three successive stages of convolutional linear filters and nonlinear activation functions. Unlike most convolutional neural networks, the joint nonlinearity is chosen to implement a form of local gain control, inspired by those used to model biological neurons. Using a variant of stochastic gradient descent, we jointly optimize the entire model for rate-distortion performance over a database of training images, introducing a continuous proxy for the discontinuous loss function arising from the quantizer. Under certain conditions, the relaxed loss function may be interpreted as the log likelihood of a generative model, as implemented by a variational autoencoder. Unlike these models, however, the compression model must operate at any given point along the rate-distortion curve, as specified by a trade-off parameter. Across an independent set of test images, we find that the optimized method generally exhibits better rate-distortion performance than the standard JPEG and JPEG 2000 compression methods. More importantly, we observe a dramatic improvement in visual quality for all images at all bit rates, which is supported by objective quality estimates using MS-SSIM. | This nicely written paper presents an end-to-end learning method for image compression. By optimizing for rate-distortion performance and a clever relaxation the method is able to learn an efficient image compression method by optimizing over a database of natural images. As the method is interesting, results are interesting and analysis is quite thorough it*s easy for me to recommend acceptance. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEnd-to-end Optimized Image Compression\nWe describe an image compression method, consisting of a nonlinear analysis transformation, a uniform quantizer, and a nonlinear synthesis transformation. The transforms are constructed in three successive stages of convolutional linear filters and nonlinear activation functions. Unlike most convolutional neural networks, the joint nonlinearity is chosen to implement a form of local gain control, inspired by those used to model biological neurons. Using a variant of stochastic gradient descent, we jointly optimize the entire model for rate-distortion performance over a database of training images, introducing a continuous proxy for the discontinuous loss function arising from the quantizer. Under certain conditions, the relaxed loss function may be interpreted as the log likelihood of a generative model, as implemented by a variational autoencoder. Unlike these models, however, the compression model must operate at any given point along the rate-distortion curve, as specified by a trade-off parameter. Across an independent set of test images, we find that the optimized method generally exhibits better rate-distortion performance than the standard JPEG and JPEG 2000 compression methods. More importantly, we observe a dramatic improvement in visual quality for all images at all bit rates, which is supported by objective quality estimates using MS-SSIM.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis nicely written paper presents an end-to-end learning method for image compression. By optimizing for rate-distortion performance and a clever relaxation the method is able to learn an efficient image compression method by optimizing over a database of natural images. As the method is interesting, results are interesting and analysis is quite thorough it*s easy for me to recommend acceptance.",
"role": "assistant"
}
]
| [
"This nicely written paper presents an end-to-end learning method for image compression.",
"By optimizing for rate-distortion performance and a clever relaxation the method is able to learn an efficient image compression method by optimizing over a database of natural images.",
"As the method is interesting, results are interesting and analysis is quite thorough it*s easy for me to recommend acceptance."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 3,
"praise": 3,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 0,
"total": 3
} | criticism=0 | example=0 | importance_and_relevance=0.333333 | materials_and_methods=1 | praise=1 | presentation_and_reporting=0.333333 | results_and_discussion=0.333333 | suggestion_and_solution=0 | total=3 | has_criticism=0 | has_example=0 | has_importance_and_relevance=1 | has_materials_and_methods=1 | has_praise=1 | has_presentation_and_reporting=1 | has_results_and_discussion=1 | has_suggestion_and_solution=0 | reward_value=3 | reward_value_length_adjusted=1.421757 | length_penalty=1.578243 |
Optimization as a Model for Few-Shot Learning | Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning. | This paper describes a new approach to meta learning by interpreting the SGD update rule as gated recurrent model with trainable parameters. The idea is original and important for research related to transfer learning. The paper has a clear structure, but clarity could be improved at some points. Pros: - An interesting and feasible approach to meta-learning - Competitive results and proper comparison to state-of-the-art - Good recommendations for practical systems Cons: - The analogy would be closer to GRUs than LSTMs - The description of the data separation in meta sets is hard to follow and could be visualized - The experimental evaluation is only partly satisfying, especially the effect of the parameters of i_t and f_t would be of interest - Fig 2 doesn*t have much value Remarks: - Small typo in 3.2: *This means each coordinate has it* -> its > We plan on releasing the code used in our evaluation experiments. This would certainly be a major plus. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOptimization as a Model for Few-Shot Learning\nThough deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper describes a new approach to meta learning by interpreting the SGD update rule as gated recurrent model with trainable parameters. The idea is original and important for research related to transfer learning. The paper has a clear structure, but clarity could be improved at some points. Pros: - An interesting and feasible approach to meta-learning - Competitive results and proper comparison to state-of-the-art - Good recommendations for practical systems Cons: - The analogy would be closer to GRUs than LSTMs - The description of the data separation in meta sets is hard to follow and could be visualized - The experimental evaluation is only partly satisfying, especially the effect of the parameters of i_t and f_t would be of interest - Fig 2 doesn*t have much value Remarks: - Small typo in 3.2: *This means each coordinate has it* -> its > We plan on releasing the code used in our evaluation experiments. This would certainly be a major plus.",
"role": "assistant"
}
]
| [
"This paper describes a new approach to meta learning by interpreting the SGD update rule as gated recurrent model with trainable parameters.",
"The idea is original and important for research related to transfer learning.",
"The paper has a clear structure, but clarity could be improved at some points.",
"Pros: - An interesting and feasible approach to meta-learning - Competitive results and proper comparison to state-of-the-art - Good recommendations for practical systems Cons: - The analogy would be closer to GRUs than LSTMs - The description of the data separation in meta sets is hard to follow and could be visualized - The experimental evaluation is only partly satisfying, especially the effect of the parameters of i_t and f_t would be of interest - Fig 2 doesn*t have much value Remarks: - Small typo in 3.2: *This means each coordinate has it* -> its > We plan on releasing the code used in our evaluation experiments.",
"This would certainly be a major plus."
] | {
"criticism": 1,
"example": 1,
"importance_and_relevance": 3,
"materials_and_methods": 2,
"praise": 3,
"presentation_and_reporting": 2,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 5
} | criticism=0.2 | example=0.2 | importance_and_relevance=0.6 | materials_and_methods=0.4 | praise=0.6 | presentation_and_reporting=0.4 | results_and_discussion=0.2 | suggestion_and_solution=0.2 | total=5 | has_criticism=1 | has_example=1 | has_importance_and_relevance=1 | has_materials_and_methods=1 | has_praise=1 | has_presentation_and_reporting=1 | has_results_and_discussion=1 | has_suggestion_and_solution=1 | reward_value=2.8 | reward_value_length_adjusted=1.789924 | length_penalty=1.010076 |
Multi-Agent Cooperation and the Emergence of (Natural) Language | The current mainstream approach to train natural language systems is to expose them to large amounts of text. This passive learning is problematic if we are in- terested in developing interactive machines, such as conversational agents. We propose a framework for language learning that relies on multi-agent communi- cation. We study this learning in the context of referential games. In these games, a sender and a receiver see a pair of images. The sender is told one of them is the target and is allowed to send a message to the receiver, while the receiver must rely on it to identify the target. Thus, the agents develop their own language interactively out of the need to communicate. We show that two networks with simple configurations are able to learn to coordinate in the referential game. We further explore whether the “word meanings” induced in the game reflect intuitive semantic properties of the objects depicted in the image, and we present a simple strategy for grounding the agents’ code into natural language, a necessary step in developing machines that should eventually be able to communicate with humans. | To train natural language systems by putting multiple agents within an interactive referential communication game is very nice. As the authors mention, there has been some (although seemingly not much) previous work on using multi-agent games to teach communication, and it certainly seems like a direction worth pursuing. Moreover, the approach of switching between these games and some supervised learning, as in the experiment described in Section 5 and suggested in Section 6, seems particularly fruitful. Note: For “clarity”, I believe some of the network connections in Fig 1 have been omitted. However, given the rather highly-customized architecture and the slightly hard-to-follow description in Section 3, the shorthand diagram only adds to the confusion. The diagram probably needs to be fine-tuned, but at the very least (especially if I am misunderstanding it!), a caption must [still] be added to help the reader interpret the figure. Overall, the framework (Section 2) is great and seems quite effective/useful in various ways, the results are reasonable, and I expect there will be some interesting future variations on this work as well. Caveat: While I am quite confident I understood the paper (as per confidence score below), I do not feel I am sufficiently familiar with the most closely related literature to accurately assess the place of this work within that context. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMulti-Agent Cooperation and the Emergence of (Natural) Language\nThe current mainstream approach to train natural language systems is to expose them to large amounts of text. This passive learning is problematic if we are in- terested in developing interactive machines, such as conversational agents. We propose a framework for language learning that relies on multi-agent communi- cation. We study this learning in the context of referential games. In these games, a sender and a receiver see a pair of images. The sender is told one of them is the target and is allowed to send a message to the receiver, while the receiver must rely on it to identify the target. Thus, the agents develop their own language interactively out of the need to communicate. We show that two networks with simple configurations are able to learn to coordinate in the referential game. We further explore whether the “word meanings” induced in the game reflect intuitive semantic properties of the objects depicted in the image, and we present a simple strategy for grounding the agents’ code into natural language, a necessary step in developing machines that should eventually be able to communicate with humans.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nTo train natural language systems by putting multiple agents within an interactive referential communication game is very nice. As the authors mention, there has been some (although seemingly not much) previous work on using multi-agent games to teach communication, and it certainly seems like a direction worth pursuing. Moreover, the approach of switching between these games and some supervised learning, as in the experiment described in Section 5 and suggested in Section 6, seems particularly fruitful. Note: For “clarity”, I believe some of the network connections in Fig 1 have been omitted. However, given the rather highly-customized architecture and the slightly hard-to-follow description in Section 3, the shorthand diagram only adds to the confusion. The diagram probably needs to be fine-tuned, but at the very least (especially if I am misunderstanding it!), a caption must [still] be added to help the reader interpret the figure. Overall, the framework (Section 2) is great and seems quite effective/useful in various ways, the results are reasonable, and I expect there will be some interesting future variations on this work as well. Caveat: While I am quite confident I understood the paper (as per confidence score below), I do not feel I am sufficiently familiar with the most closely related literature to accurately assess the place of this work within that context.",
"role": "assistant"
}
]
| [
"To train natural language systems by putting multiple agents within an interactive referential communication game is very nice.",
"As the authors mention, there has been some (although seemingly not much) previous work on using multi-agent games to teach communication, and it certainly seems like a direction worth pursuing.",
"Moreover, the approach of switching between these games and some supervised learning, as in the experiment described in Section 5 and suggested in Section 6, seems particularly fruitful.",
"Note: For “clarity”, I believe some of the network connections in Fig 1 have been omitted.",
"However, given the rather highly-customized architecture and the slightly hard-to-follow description in Section 3, the shorthand diagram only adds to the confusion.",
"The diagram probably needs to be fine-tuned, but at the very least (especially if I am misunderstanding it!",
"), a caption must [still] be added to help the reader interpret the figure.",
"Overall, the framework (Section 2) is great and seems quite effective/useful in various ways, the results are reasonable, and I expect there will be some interesting future variations on this work as well.",
"Caveat: While I am quite confident I understood the paper (as per confidence score below), I do not feel I am sufficiently familiar with the most closely related literature to accurately assess the place of this work within that context."
] | {
"criticism": 3,
"example": 1,
"importance_and_relevance": 3,
"materials_and_methods": 2,
"praise": 3,
"presentation_and_reporting": 5,
"results_and_discussion": 1,
"suggestion_and_solution": 3,
"total": 9
} | criticism=0.333333 | example=0.111111 | importance_and_relevance=0.333333 | materials_and_methods=0.222222 | praise=0.333333 | presentation_and_reporting=0.555556 | results_and_discussion=0.111111 | suggestion_and_solution=0.333333 | total=9 | has_criticism=1 | has_example=1 | has_importance_and_relevance=1 | has_materials_and_methods=1 | has_praise=1 | has_presentation_and_reporting=1 | has_results_and_discussion=1 | has_suggestion_and_solution=1 | reward_value=2.333333 | reward_value_length_adjusted=2.080814 | length_penalty=0.252519 |
Understanding deep learning requires rethinking generalization | Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models. | This paper presents a set of experiments where via clever use of randomization and noise addition authors demonstrate the enormous modeling (memorization) power of the deep neural networks with large enough capacity. Yet these same models have very good generalization behavior even when all obvious explicit or implicit regularizers are removed. These observations are used to argue that classical theory (VC dimension, Rademacher complexity, uniform stability) is not able to explain the generalization behavior of deep neural networks, necessitating novel theory. This is a very interesting and thought provoking body of work and I am in complete accord with the observations and conclusions of the paper. The classical generalization theory is indeed often at a loss with complex enough model families. As the authors point out, once model families reach a point where they have capacity to memorize train sets, the classical theory does not yield useful results that could give insight into generalization behavior of these models, leaving one to empirical studies and observations. A minor clarification comment: On page 2, “ … true labels were replaced by random labels.” Please state that random labels came from the same set as the true labels, to clarify the experiment. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nUnderstanding deep learning requires rethinking generalization\nDespite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents a set of experiments where via clever use of randomization and noise addition authors demonstrate the enormous modeling (memorization) power of the deep neural networks with large enough capacity. Yet these same models have very good generalization behavior even when all obvious explicit or implicit regularizers are removed. These observations are used to argue that classical theory (VC dimension, Rademacher complexity, uniform stability) is not able to explain the generalization behavior of deep neural networks, necessitating novel theory. This is a very interesting and thought provoking body of work and I am in complete accord with the observations and conclusions of the paper. The classical generalization theory is indeed often at a loss with complex enough model families. As the authors point out, once model families reach a point where they have capacity to memorize train sets, the classical theory does not yield useful results that could give insight into generalization behavior of these models, leaving one to empirical studies and observations. A minor clarification comment: On page 2, “ … true labels were replaced by random labels.” Please state that random labels came from the same set as the true labels, to clarify the experiment.",
"role": "assistant"
}
]
| [
"This paper presents a set of experiments where via clever use of randomization and noise addition authors demonstrate the enormous modeling (memorization) power of the deep neural networks with large enough capacity.",
"Yet these same models have very good generalization behavior even when all obvious explicit or implicit regularizers are removed.",
"These observations are used to argue that classical theory (VC dimension, Rademacher complexity, uniform stability) is not able to explain the generalization behavior of deep neural networks, necessitating novel theory.",
"This is a very interesting and thought provoking body of work and I am in complete accord with the observations and conclusions of the paper.",
"The classical generalization theory is indeed often at a loss with complex enough model families.",
"As the authors point out, once model families reach a point where they have capacity to memorize train sets, the classical theory does not yield useful results that could give insight into generalization behavior of these models, leaving one to empirical studies and observations.",
"A minor clarification comment: On page 2, “ … true labels were replaced by random labels.” Please state that random labels came from the same set as the true labels, to clarify the experiment."
] | {
"criticism": 1,
"example": 1,
"importance_and_relevance": 3,
"materials_and_methods": 5,
"praise": 3,
"presentation_and_reporting": 2,
"results_and_discussion": 4,
"suggestion_and_solution": 1,
"total": 7
} | criticism=0.142857 | example=0.142857 | importance_and_relevance=0.428571 | materials_and_methods=0.714286 | praise=0.428571 | presentation_and_reporting=0.285714 | results_and_discussion=0.571429 | suggestion_and_solution=0.142857 | total=7 | has_criticism=1 | has_example=1 | has_importance_and_relevance=1 | has_materials_and_methods=1 | has_praise=1 | has_presentation_and_reporting=1 | has_results_and_discussion=1 | has_suggestion_and_solution=1 | reward_value=2.857143 | reward_value_length_adjusted=2.288975 | length_penalty=0.568167 |
Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a wide variety of simulated domains. However, a major obstacle facing deep RL in the real world is their high sample complexity. Batch policy gradient methods offer stable learning, but at the cost of high variance, which often requires large batches. TD-style methods, such as off-policy actor-critic and Q-learning, are more sample-efficient but biased, and often require costly hyperparameter sweeps to stabilize. In this work, we aim to develop methods that combine the stability of policy gradients with the efficiency of off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor expansion of the off-policy critic as a control variate. Q-Prop is both sample efficient and stable, and effectively combines the benefits of on-policy and off-policy methods. We analyze the connection between Q-Prop and existing model-free algorithms, and use control variate theory to derive two variants of Q-Prop with conservative and aggressive adaptation. We show that conservative Q-Prop provides substantial gains in sample efficiency over trust region policy optimization (TRPO) with generalized advantage estimation (GAE), and improves stability over deep deterministic policy gradient (DDPG), the state-of-the-art on-policy and off-policy methods, on OpenAI Gym*s MuJoCo continuous control environments. | The paper proposes using a first-order Taylor expansion as a control variate in policy gradient-style methods. Empirical results in dynamical control tasks suggest that this algorithm reduces the sample complexity, while the theoretical results presented suggest the algorithm is unbiased but of lower variance. The use of control variates is very important and the present paper is an interesting approach in this direction. I am not fully convinced of the approach, because it is one of many possible, and the theoretical analysis relies on an approximation of the variance rather than exact calculations, which makes it less compelling. However, this paper is a step in the right direction so it is worth accepting. In the experiments, a few things need to be discussed further: - What is the running time of the proposed approach? The computation of the extra terms required looks like it could be expensive. running time comparison in addition to sample comparison should be included - The sensitivity to parameter settings of the proposed algorithm needs to be illustrated in separate graphs, since this is one of the main claims in the paper - It would be nice to have a toy example included in which one can actually compute exact values and plot learning curves to compare more directly bias and variance. It would especially be nice to do this with a task that includes rare states, which is the case in which variance of other methods (eg importance sampling) really becomes significant. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? 
Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nQ-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic\nModel-free deep reinforcement learning (RL) methods have been successful in a wide variety of simulated domains. However, a major obstacle facing deep RL in the real world is their high sample complexity. Batch policy gradient methods offer stable learning, but at the cost of high variance, which often requires large batches. TD-style methods, such as off-policy actor-critic and Q-learning, are more sample-efficient but biased, and often require costly hyperparameter sweeps to stabilize. In this work, we aim to develop methods that combine the stability of policy gradients with the efficiency of off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor expansion of the off-policy critic as a control variate. Q-Prop is both sample efficient and stable, and effectively combines the benefits of on-policy and off-policy methods. We analyze the connection between Q-Prop and existing model-free algorithms, and use control variate theory to derive two variants of Q-Prop with conservative and aggressive adaptation. We show that conservative Q-Prop provides substantial gains in sample efficiency over trust region policy optimization (TRPO) with generalized advantage estimation (GAE), and improves stability over deep deterministic policy gradient (DDPG), the state-of-the-art on-policy and off-policy methods, on OpenAI Gym*s MuJoCo continuous control environments.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes using a first-order Taylor expansion as a control variate in policy gradient-style methods. Empirical results in dynamical control tasks suggest that this algorithm reduces the sample complexity, while the theoretical results presented suggest the algorithm is unbiased but of lower variance. The use of control variates is very important and the present paper is an interesting approach in this direction. I am not fully convinced of the approach, because it is one of many possible, and the theoretical analysis relies on an approximation of the variance rather than exact calculations, which makes it less compelling. However, this paper is a step in the right direction so it is worth accepting. In the experiments, a few things need to be discussed further: - What is the running time of the proposed approach? The computation of the extra terms required looks like it could be expensive. running time comparison in addition to sample comparison should be included - The sensitivity to parameter settings of the proposed algorithm needs to be illustrated in separate graphs, since this is one of the main claims in the paper - It would be nice to have a toy example included in which one can actually compute exact values and plot learning curves to compare more directly bias and variance. It would especially be nice to do this with a task that includes rare states, which is the case in which variance of other methods (eg importance sampling) really becomes significant.",
"role": "assistant"
}
]
| [
"The paper proposes using a first-order Taylor expansion as a control variate in policy gradient-style methods.",
"Empirical results in dynamical control tasks suggest that this algorithm reduces the sample complexity, while the theoretical results presented suggest the algorithm is unbiased but of lower variance.",
"The use of control variates is very important and the present paper is an interesting approach in this direction.",
"I am not fully convinced of the approach, because it is one of many possible, and the theoretical analysis relies on an approximation of the variance rather than exact calculations, which makes it less compelling.",
"However, this paper is a step in the right direction so it is worth accepting.",
"In the experiments, a few things need to be discussed further: - What is the running time of the proposed approach?",
"The computation of the extra terms required looks like it could be expensive.",
"running time comparison in addition to sample comparison should be included - The sensitivity to parameter settings of the proposed algorithm needs to be illustrated in separate graphs, since this is one of the main claims in the paper - It would be nice to have a toy example included in which one can actually compute exact values and plot learning curves to compare more directly bias and variance.",
"It would especially be nice to do this with a task that includes rare states, which is the case in which variance of other methods (eg importance sampling) really becomes significant."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 8,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 5,
"suggestion_and_solution": 4,
"total": 9
} | 0.111111 | 0 | 0.111111 | 0.888889 | 0.111111 | 0.222222 | 0.555556 | 0.444444 | 9 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.444444 | 2.191926 | 0.252519 |
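The control-variate trick at the heart of the Q-Prop record above is easy to demonstrate in isolation. The sketch below is not Q-Prop itself: it estimates a plain Monte Carlo expectation and shows how subtracting a correlated quantity with known mean (here g(x) = x^2, whose mean is 1 under a standard normal) shrinks the standard error without changing the estimate's mean. The functions f and g and the sample size are illustrative assumptions; Q-Prop applies the same principle with the Taylor-expanded critic playing the role of g.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Quantity whose expectation we want; chosen only for illustration.
    return np.exp(0.5 * x) * x ** 2

x = rng.normal(size=100_000)
fx = f(x)
gx = x ** 2                               # control variate: E[g] = 1 under N(0, 1)

c = np.cov(fx, gx)[0, 1] / np.var(gx)     # variance-minimizing coefficient
adjusted = fx - c * (gx - 1.0)            # same mean as fx, lower variance

for name, est in [("plain", fx), ("control variate", adjusted)]:
    se = est.std(ddof=1) / np.sqrt(est.size)
    print(f"{name:>16}: mean {est.mean():.4f}, standard error {se:.5f}")
```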
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms of choice for many Deep Learning tasks. These methods operate in a small-batch regime wherein a fraction of the training data, say -- data points, is sampled to compute an approximation to the gradient. It has been observed in practice that when using a larger batch there is a degradation in the quality of the model, as measured by its ability to generalize. We investigate the cause for this generalization drop in the large-batch regime and present numerical evidence that supports the view that large-batch methods tend to converge to sharp minimizers of the training and testing functions---and as is well known, sharp minima lead to poorer generalization. In contrast, small-batch methods consistently converge to flat minimizers, and our experiments support a commonly held view that this is due to the inherent noise in the gradient estimation. We discuss several strategies to attempt to help large-batch methods eliminate this generalization gap. | Interesting paper, definitely provides value to the community by discussing why large batch gradient descent does not work too well | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOn Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima\nThe stochastic gradient descent (SGD) method and its variants are algorithms of choice for many Deep Learning tasks. These methods operate in a small-batch regime wherein a fraction of the training data, say -- data points, is sampled to compute an approximation to the gradient. It has been observed in practice that when using a larger batch there is a degradation in the quality of the model, as measured by its ability to generalize. We investigate the cause for this generalization drop in the large-batch regime and present numerical evidence that supports the view that large-batch methods tend to converge to sharp minimizers of the training and testing functions---and as is well known, sharp minima lead to poorer generalization. In contrast, small-batch methods consistently converge to flat minimizers, and our experiments support a commonly held view that this is due to the inherent noise in the gradient estimation. We discuss several strategies to attempt to help large-batch methods eliminate this generalization gap.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nInteresting paper, definitely provides value to the community by discussing why large batch gradient descent does not work too well",
"role": "assistant"
}
]
| [
"Interesting paper, definitely provides value to the community by discussing why large batch gradient descent does not work too well"
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 1,
"praise": 1,
"presentation_and_reporting": 0,
"results_and_discussion": 0,
"suggestion_and_solution": 0,
"total": 1
} | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 3 | 0.72733 | 2.27267 |
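The sharp-versus-flat contrast in the record above can be made concrete with a toy one-dimensional loss. This is only a cartoon of the paper's sharpness measure, which perturbs parameters inside a small neighbourhood and records the worst-case rise in loss; the particular loss function and the radius 0.5 below are arbitrary choices.

```python
import numpy as np

def loss(x):
    # Toy loss with a flat minimum at x = -2 and a sharp minimum at x = +2.
    return np.minimum(0.05 * (x + 2.0) ** 2, 5.0 * (x - 2.0) ** 2)

for minimum in (-2.0, 2.0):
    eps = np.linspace(-0.5, 0.5, 201)            # perturbations within radius 0.5
    rise = loss(minimum + eps).max() - loss(minimum)
    print(f"minimum at x = {minimum:+.0f}: worst-case loss rise = {rise:.4f}")
```

The sharp minimum's worst-case rise is roughly 80 times larger than the flat one's, which is the kind of gap the paper associates with poorer generalization.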
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data | Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, we demonstrate a generally applicable approach to providing strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE). The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. Because they rely directly on sensitive data, these models are not published, but instead used as **teachers** for a **student** model. The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters. The student's privacy properties can be understood both intuitively (since no single teacher and thus no single dataset dictates the student's training) and formally, in terms of differential privacy. These properties hold even if an adversary can not only query the student but also inspect its internal workings. Compared with previous work, the approach imposes only weak assumptions on how teachers are trained: it applies to any model, including non-convex models like DNNs. We achieve state-of-the-art privacy/utility trade-offs on MNIST and SVHN thanks to an improved privacy analysis and semi-supervised learning. | This paper addresses the problem of achieving differential privacy in a very general scenario where a set of teachers is trained on disjoint subsets of sensitive data and the student performs prediction based on public data labeled by teachers through noisy voting. I found the approach altogether plausible and very clearly explained by the authors. Adding more discussion of the bound (and its tightness) from Theorem 1 itself would be appreciated. A simple idea of adding perturbation error to the counts, known from differentially-private literature, is nicely re-used by the authors and elegantly applied in a much broader (non-convex setting) and practical context than in a number of differentially-private and other related papers. The generality of the approach, clear improvement over predecessors, and clarity of the writing makes the method worth publishing. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. 
Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSemi-supervised Knowledge Transfer for Deep Learning from Private Training Data\nSome machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, we demonstrate a generally applicable approach to providing strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE). The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. Because they rely directly on sensitive data, these models are not published, but instead used as **teachers** for a **student** model. The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters. The student*s privacy properties can be understood both intuitively (since no single teacher and thus no single dataset dictates the student*s training) and formally, in terms of differential privacy. These properties hold even if an adversary can not only query the student but also inspect its internal workings. Compared with previous work, the approach imposes only weak assumptions on how teachers are trained: it applies to any model, including non-convex models like DNNs. We achieve state-of-the-art privacy/utility trade-offs on MNIST and SVHN thanks to an improved privacy analysis and semi-supervised learning.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper addresses the problem of achieving differential privacy in a very general scenario where a set of teachers is trained on disjoint subsets of sensitive data and the student performs prediction based on public data labeled by teachers through noisy voting. I found the approach altogether plausible and very clearly explained by the authors. Adding more discussion of the bound (and its tightness) from Theorem 1 itself would be appreciated. A simple idea of adding perturbation error to the counts, known from differentially-private literature, is nicely re-used by the authors and elegantly applied in a much broader (non-convex setting) and practical context than in a number of differentially-private and other related papers. The generality of the approach, clear improvement over predecessors, and clarity of the writing makes the method worth publishing.",
"role": "assistant"
}
]
| [
"This paper addresses the problem of achieving differential privacy in a very general scenario where a set of teachers is trained on disjoint subsets of sensitive data and the student performs prediction based on public data labeled by teachers through noisy voting.",
"I found the approach altogether plausible and very clearly explained by the authors.",
"Adding more discussion of the bound (and its tightness) from Theorem 1 itself would be appreciated.",
"A simple idea of adding perturbation error to the counts, known from differentially-private literature, is nicely re-used by the authors and elegantly applied in a much broader (non-convex setting) and practical context than in a number of differentially-private and other related papers.",
"The generality of the approach, clear improvement over predecessors, and clarity of the writing makes the method worth publishing."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 3,
"praise": 3,
"presentation_and_reporting": 3,
"results_and_discussion": 1,
"suggestion_and_solution": 2,
"total": 5
} | 0 | 0 | 0.6 | 0.6 | 0.6 | 0.6 | 0.2 | 0.4 | 5 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 3 | 1.989924 | 1.010076 |
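The noisy-voting step in the PATE record above is simple to sketch. The version below follows the paper's general recipe (per-class vote counts plus Laplace noise before the argmax), but the noise scale gamma, the class count, and the vote distribution are placeholder values, and a real deployment would also track the privacy budget spent per query with the paper's accounting.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_label(teacher_votes, num_classes, gamma=0.1):
    """Aggregate teacher predictions: argmax of vote counts + Laplace(1/gamma) noise."""
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(scale=1.0 / gamma, size=num_classes)
    return int(np.argmax(counts))

# 250 teachers label one query; a clear majority picks class 3.
votes = np.concatenate([np.full(180, 3), rng.integers(0, 10, size=70)])
print(noisy_label(votes, num_classes=10))    # almost always 3
```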
Amortised MAP Inference for Image Super-resolution | Image super-resolution (SR) is an underdetermined inverse problem, where a large number of plausible high resolution images can explain the same downsampled image. Most current single image SR methods use empirical risk minimisation, often with a pixel-wise mean squared error (MSE) loss. However, the outputs from such methods tend to be blurry, over-smoothed and generally appear implausible. A more desirable approach would employ Maximum a Posteriori (MAP) inference, preferring solutions that always have a high probability under the image prior, and thus appear more plausible. Direct MAP estimation for SR is non-trivial, as it requires us to build a model for the image prior from samples. Here we introduce new methods for *amortised MAP inference* whereby we calculate the MAP estimate directly using a convolutional neural network. We first introduce a novel neural network architecture that performs a projection to the affine subspace of valid SR solutions ensuring that the high resolution output of the network is always consistent with the low resolution input. We show that, using this architecture, the amortised MAP inference problem reduces to minimising the cross-entropy between two distributions, similar to training generative models. We propose three methods to solve this optimisation problem: (1) Generative Adversarial Networks (GAN) (2) denoiser-guided SR which backpropagates gradient-estimates from denoising to train the network, and (3) a baseline method using a maximum-likelihood-trained image prior. Our experiments show that the GAN based approach performs best on real image data. Lastly, we establish a connection between GANs and amortised variational inference as in e.g. variational autoencoders. | The paper presents an amortised MAP estimation method for SR problems. By learning a neural network which learns to project to an affine subspace of SR solutions which are consistent with the LR input, the method enables finding proper solutions by using a variety of methods: GANs, noise-assisted and density-assisted optimisation. Results are nicely demonstrated on several datasets. I like the paper all in all, though I feel the writing can be polished by quite a bit and presentation should be made clearer. It was hard to follow at times, and considering the subject matter is quite complicated, making it clearer would help. Also, I would love to see some more analysis of the resulting networks - what kind of features do they learn? | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAmortised MAP Inference for Image Super-resolution\nImage super-resolution (SR) is an underdetermined inverse problem, where a large number of plausible high resolution images can explain the same downsampled image. Most current single image SR methods use empirical risk minimisation, often with a pixel-wise mean squared error (MSE) loss. However, the outputs from such methods tend to be blurry, over-smoothed and generally appear implausible. A more desirable approach would employ Maximum a Posteriori (MAP) inference, preferring solutions that always have a high probability under the image prior, and thus appear more plausible. Direct MAP estimation for SR is non-trivial, as it requires us to build a model for the image prior from samples. Here we introduce new methods for emph{amortised MAP inference} whereby we calculate the MAP estimate directly using a convolutional neural network. We first introduce a novel neural network architecture that performs a projection to the affine subspace of valid SR solutions ensuring that the high resolution output of the network is always consistent with the low resolution input. We show that, using this architecture, the amortised MAP inference problem reduces to minimising the cross-entropy between two distributions, similar to training generative models. We propose three methods to solve this optimisation problem: (1) Generative Adversarial Networks (GAN) (2) denoiser-guided SR which backpropagates gradient-estimates from denoising to train the network, and (3) a baseline method using a maximum-likelihood-trained image prior. Our experiments show that the GAN based approach performs best on real image data. Lastly, we establish a connection between GANs and amortised variational inference as in e.g. variational autoencoders.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper presents an amortised MAP estimation method for SR problems. By learning a neural network which learns to project to an affine subspace of SR solutions which are consistent with the LR method the method enables finding propoer solutions with by using a variety of methods: GANs, noise assisted and density assisted optimisation. Results are nicely demonstrated on several datasets. I like the paper all in all, though I feel the writing can be polished by quite a bit and presentation should be made clearer. It was hard to follow at times and considering the subject matter is quite complicated making it clearer would help. Also, I would love to see some more analysis of the resulting the networks - what kind of features to they learn?",
"role": "assistant"
}
]
| [
"The paper presents an amortised MAP estimation method for SR problems.",
"By learning a neural network which learns to project to an affine subspace of SR solutions which are consistent with the LR method the method enables finding propoer solutions with by using a variety of methods: GANs, noise assisted and density assisted optimisation.",
"Results are nicely demonstrated on several datasets.",
"I like the paper all in all, though I feel the writing can be polished by quite a bit and presentation should be made clearer.",
"It was hard to follow at times and considering the subject matter is quite complicated making it clearer would help.",
"Also, I would love to see some more analysis of the resulting the networks - what kind of features to they learn?"
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 0,
"materials_and_methods": 4,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 2,
"suggestion_and_solution": 3,
"total": 6
} | 0.166667 | 0 | 0 | 0.666667 | 0.166667 | 0.333333 | 0.333333 | 0.5 | 6 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 2.166667 | 1.393328 | 0.773339 |
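The consistency projection described in the Amortised MAP record above has a closed form when the downsampling operator A is plain average pooling: A A^T is then a multiple of the identity, so projecting x onto the affine set {x : A x = y} reduces to adding back a nearest-neighbour upsampling of the residual y - A x. The sketch below assumes that specific operator and a factor-of-4 scale; the paper's construction is more general.

```python
import numpy as np

def downsample_avg(x, r):
    """Average-pool an (H, W) image by factor r (H and W must be divisible by r)."""
    h, w = x.shape
    return x.reshape(h // r, r, w // r, r).mean(axis=(1, 3))

def upsample_nn(y, r):
    """Nearest-neighbour upsample by factor r."""
    return np.repeat(np.repeat(y, r, axis=0), r, axis=1)

def project_consistent(x_hr, y_lr, r):
    """Closest image to x_hr (in L2) whose average-pool downsampling equals y_lr."""
    return x_hr + upsample_nn(y_lr - downsample_avg(x_hr, r), r)

rng = np.random.default_rng(0)
y = rng.random((8, 8))                     # observed low-resolution image
x = rng.random((32, 32))                   # arbitrary candidate high-resolution output
x_proj = project_consistent(x, y, r=4)
print(np.allclose(downsample_avg(x_proj, 4), y))   # True: consistency holds exactly
```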
Stochastic Neural Networks for Hierarchical Reinforcement Learning | Deep reinforcement learning has achieved many impressive results in recent years. However, tasks with sparse rewards or long horizons continue to pose significant challenges. To tackle these important problems, we propose a general framework that first learns useful skills in a pre-training environment, and then leverages the acquired skills for learning faster in downstream tasks. Our approach brings together some of the strengths of intrinsic motivation and hierarchical methods: the learning of useful skill is guided by a single proxy reward, the design of which requires very minimal domain knowledge about the downstream tasks. Then a high-level policy is trained on top of these skills, providing a significant improvement of the exploration and allowing to tackle sparse rewards in the downstream tasks. To efficiently pre-train a large span of skills, we use Stochastic Neural Networks combined with an information-theoretic regularizer. Our experiments show that this combination is effective in learning a wide span of interpretable skills in a sample-efficient way, and can significantly boost the learning performance uniformly across a wide range of downstream tasks. | Interesting work on hierarchical control, similar to the work of Heess et al. Experiments are strong and manage to complete benchmarks that previous work could not. Analysis of the experiments is a bit on the weaker side. (1) Like other reviewers, I find the use of the term ‘intrinsic’ motivation somewhat inappropriate (mostly because of its current meaning in RL). Pre-training robots with locomotion by rewarding speed (or rewarding grasping for a manipulating arm) is very geared towards the tasks they will later accomplish. The pre-training tasks from Heess et al., while not identical, are similar. (2) The Mutual Information regularization is elegant and works generally well, but does not seem to help in the more complex mazes 1,2 and 3. The authors note this - is there any interpretation or analysis for this result? (3) The factorization between S_agent and S_rest should be clearly detailed in the paper. Duan et al specify S_agent, but for replicability, S_rest should be clearly specified as well - did I miss it? (4) It would be interesting to provide some analysis of the switching behavior of the agent. More generally, some further analysis of the policies (failure modes, effects of switching time on performance) would have been welcome. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nStochastic Neural Networks for Hierarchical Reinforcement Learning\nDeep reinforcement learning has achieved many impressive results in recent years. However, tasks with sparse rewards or long horizons continue to pose significant challenges. To tackle these important problems, we propose a general framework that first learns useful skills in a pre-training environment, and then leverages the acquired skills for learning faster in downstream tasks. Our approach brings together some of the strengths of intrinsic motivation and hierarchical methods: the learning of useful skill is guided by a single proxy reward, the design of which requires very minimal domain knowledge about the downstream tasks. Then a high-level policy is trained on top of these skills, providing a significant improvement of the exploration and allowing to tackle sparse rewards in the downstream tasks. To efficiently pre-train a large span of skills, we use Stochastic Neural Networks combined with an information-theoretic regularizer. Our experiments show that this combination is effective in learning a wide span of interpretable skills in a sample-efficient way, and can significantly boost the learning performance uniformly across a wide range of downstream tasks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nInteresting work on hierarchical control, similar to the work of Heess et al. Experiments are strong and manage to complete benchmarks that previous work could not. Analysis of the experiments is a bit on the weaker side. (1) Like other reviewers, I find the use of the term ‘intrinsic’ motivation somewhat inappropriate (mostly because of its current meaning in RL). Pre-training robots with locomotion by rewarding speed (or rewarding grasping for a manipulating arm) is very geared towards the tasks they will later accomplish. The pre-training tasks from Heess et al., while not identical, are similar. (2) The Mutual Information regularization is elegant and works generally well, but does not seem to help in the more complex mazes 1,2 and 3. The authors note this - is there any interpretation or analysis for this result? (3) The factorization between S_agent and S_rest should be clearly detailed in the paper. Duan et al specify S_agent, but for replicability, S_rest should be clearly specified as well - did I miss it? (4) It would be interesting to provide some analysis of the switching behavior of the agent. More generally, some further analysis of the policies (failure modes, effects of switching time on performance) would have been welcome.",
"role": "assistant"
}
]
| [
"Interesting work on hierarchical control, similar to the work of Heess et al.",
"Experiments are strong and manage to complete benchmarks that previous work could not.",
"Analysis of the experiments is a bit on the weaker side.",
"(1) Like other reviewers, I find the use of the term ‘intrinsic’ motivation somewhat inappropriate (mostly because of its current meaning in RL).",
"Pre-training robots with locomotion by rewarding speed (or rewarding grasping for a manipulating arm) is very geared towards the tasks they will later accomplish.",
"The pre-training tasks from Heess et al., while not identical, are similar.",
"(2) The Mutual Information regularization is elegant and works generally well, but does not seem to help in the more complex mazes 1,2 and 3.",
"The authors note this - is there any interpretation or analysis for this result?",
"(3) The factorization between S_agent and S_rest should be clearly detailed in the paper.",
"Duan et al specify S_agent, but for replicability, S_rest should be clearly specified as well - did I miss it?",
"(4) It would be interesting to provide some analysis of the switching behavior of the agent.",
"More generally, some further analysis of the policies (failure modes, effects of switching time on performance) would have been welcome."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 11,
"praise": 3,
"presentation_and_reporting": 3,
"results_and_discussion": 2,
"suggestion_and_solution": 4,
"total": 12
} | 0.166667 | 0 | 0.166667 | 0.916667 | 0.25 | 0.25 | 0.166667 | 0.333333 | 12 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.25 | 2.234218 | 0.015782 |
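The two-level structure discussed in the SNN-HRL record above, a high-level policy that commits to one pre-trained skill for a fixed number of steps, can be sketched without any of the SNN or mutual-information machinery. Everything below (the one-dimensional chain environment, the hand-coded skills, the switching interval k) is an invented toy meant only to show where the switching time enters, which is the quantity the review asks to see analysed.

```python
def rollout(env_step, state, skills, pick_skill, horizon, k):
    """High-level policy picks a skill every k steps; the chosen skill acts each step."""
    t, total = 0, 0.0
    while t < horizon:
        z = pick_skill(state)                        # high-level decision
        for _ in range(min(k, horizon - t)):
            state, reward = env_step(state, skills[z](state))
            total += reward
            t += 1
    return total

# Toy 1-D chain with a sparse reward at position 10.
skills = [lambda s: -1, lambda s: +1]                # pre-trained locomotion primitives
env_step = lambda s, a: (s + a, 1.0 if s + a == 10 else 0.0)
print(rollout(env_step, state=0, skills=skills,
              pick_skill=lambda s: 1, horizon=20, k=5))   # reaches the reward once
```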
Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the original accuracy by retraining the networks. | This paper prunes entire groups of filters in CNNs so that they reduce computational cost and at the same time do not result in sparse connectivity. This result is important to speed up and compress neural networks while being able to use standard fully-connected linear algebra routines. The results are a 10% improvement on ResNet-like networks and ImageNet, which may also be achieved with better network design. New networks should also have been compared, but we know this is time-consuming. A good paper with some useful results. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPruning Filters for Efficient ConvNets\nThe success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the original accuracy by retraining the networks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper prunes entire groups of filters in CNN so that they reduce computational cost and at the same time do not result in sparse connectivity. This result is important to speed up and compress neural networks while being able to use standard fully-connected linear algebra routines. The results are a 10% improvements in ResNet-like and ImageNet, which may be also achieved with better design of networks. New networks should have been also compared, but this we know it is time-consuming. A good paper with some useful results.",
"role": "assistant"
}
]
| [
"This paper prunes entire groups of filters in CNN so that they reduce computational cost and at the same time do not result in sparse connectivity.",
"This result is important to speed up and compress neural networks while being able to use standard fully-connected linear algebra routines.",
"The results are a 10% improvements in ResNet-like and ImageNet, which may be also achieved with better design of networks.",
"New networks should have been also compared, but this we know it is time-consuming.",
"A good paper with some useful results."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 2,
"praise": 1,
"presentation_and_reporting": 0,
"results_and_discussion": 4,
"suggestion_and_solution": 3,
"total": 5
} | 0 | 0 | 0.2 | 0.4 | 0.2 | 0 | 0.8 | 0.6 | 5 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.2 | 1.189924 | 1.010076 |
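The criterion in the filter-pruning record above is a one-liner, which is part of the paper's appeal. The sketch below ranks a convolutional layer's filters by L1 norm, the magnitude-based criterion the paper uses, and slices the weight tensor, leaving a smaller but still dense layer. In a full network the matching input channels of the next layer (and the pruned feature maps) must be removed as well; that step is omitted here, and the layer shape is an arbitrary example.

```python
import numpy as np

def prune_filters(weights, frac):
    """Drop the `frac` fraction of filters with the smallest L1 norms.

    weights: (n_filters, in_channels, kh, kw) conv kernel tensor.
    Returns the reduced tensor and the indices of the surviving filters.
    """
    norms = np.abs(weights).sum(axis=(1, 2, 3))        # L1 norm per filter
    n_keep = weights.shape[0] - int(frac * weights.shape[0])
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])   # strongest filters, in order
    return weights[keep], keep

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32, 3, 3))
w_small, kept = prune_filters(w, frac=0.5)
print(w_small.shape)        # (32, 32, 3, 3): still dense, no sparse kernels needed
```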
Multilayer Recurrent Network Models of Primate Retinal Ganglion Cell Responses | Developing accurate predictive models of sensory neurons is vital to understanding sensory processing and brain computations. The current standard approach to modeling neurons is to start with simple models and to incrementally add interpretable features. An alternative approach is to start with a more complex model that captures responses accurately, and then probe the fitted model structure to understand the neural computations. Here, we show that a multitask recurrent neural network (RNN) framework provides the flexibility necessary to model complex computations of neurons that cannot be captured by previous methods. Specifically, multilayer recurrent neural networks that share features across neurons outperform generalized linear models (GLMs) in predicting the spiking responses of parasol ganglion cells in the primate retina to natural images. The networks achieve good predictive performance given surprisingly small amounts of experimental training data. Additionally, we present a novel GLM-RNN hybrid model with separate spatial and temporal processing components which provides insights into the aspects of retinal processing better captured by the recurrent neural networks. | This is a clearly written paper with a nice, if straightforward, result: RNNs can be good predictive models of neuron firing rates in the retina. On the one hand, the primary scientific contribution seems to just be to confirm that this approach works. On this particular stimulus-locked task the gains from using the RNN seemed relatively modest, and it hasn't yet taught us anything new about the biology. On the other hand, this (along with the concurrent work of McIntosh et al.) is introducing neural network modeling to a field that isn't currently using it, and where it should prove very effective. I think it would be very interesting to see the results of applying a framework like this one with LFP and other neurons as input and on a shorter discretization time scale. I suspect followup work building on this proof of concept will be increasingly exciting. Minor comments: Sec 3.2: I didn't understand the role of the 0.833 ms bins. Use 'epoch' throughout, rather than alternating between 'epoch' and 'pass through data'. Fig. 4 would be better with the x-axis on a log scale. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMultilayer Recurrent Network Models of Primate Retinal Ganglion Cell Responses\nDeveloping accurate predictive models of sensory neurons is vital to understanding sensory processing and brain computations. The current standard approach to modeling neurons is to start with simple models and to incrementally add interpretable features. An alternative approach is to start with a more complex model that captures responses accurately, and then probe the fitted model structure to understand the neural computations. Here, we show that a multitask recurrent neural network (RNN) framework provides the flexibility necessary to model complex computations of neurons that cannot be captured by previous methods. Specifically, multilayer recurrent neural networks that share features across neurons outperform generalized linear models (GLMs) in predicting the spiking responses of parasol ganglion cells in the primate retina to natural images. The networks achieve good predictive performance given surprisingly small amounts of experimental training data. Additionally, we present a novel GLM-RNN hybrid model with separate spatial and temporal processing components which provides insights into the aspects of retinal processing better captured by the recurrent neural networks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis is a clearly written paper with a nice, if straightforward, result: RNNs can be good predictive models of neuron firing rates in the retina. On the one hand, the primary scientific contribution seems to just be to confirm that this approach works. On this particular stimulus locked task the gains from using the RNN seemed relatively modest, and it hasn*t yet taught us anything new about the biology. On the other hand, this (along with the concurrent work of McIntosh et al.) is introducing neural network modeling to a field that isn*t currently using it, and where it should prove very effective. I think it would be very interesting to see the results of applying a framework like this one with LFP and other neurons as input and on a shorter discretization time scale. I suspect followup work building on this proof of concept will be increasingly exciting. Minor comments: Sec 3.2: I didn*t understand the role of the 0.833 ms bins. Use *epoch* throughout, rather than alternating between *epoch* and *pass through data*. Fig. 4 would be better with the x-axis on a log scale.",
"role": "assistant"
}
]
| [
"This is a clearly written paper with a nice, if straightforward, result: RNNs can be good predictive models of neuron firing rates in the retina.",
"On the one hand, the primary scientific contribution seems to just be to confirm that this approach works.",
"On this particular stimulus locked task the gains from using the RNN seemed relatively modest, and it hasn*t yet taught us anything new about the biology.",
"On the other hand, this (along with the concurrent work of McIntosh et al.)",
"is introducing neural network modeling to a field that isn*t currently using it, and where it should prove very effective.",
"I think it would be very interesting to see the results of applying a framework like this one with LFP and other neurons as input and on a shorter discretization time scale.",
"I suspect followup work building on this proof of concept will be increasingly exciting.",
"Minor comments: Sec 3.2: I didn*t understand the role of the 0.833 ms bins.",
"Use *epoch* throughout, rather than alternating between *epoch* and *pass through data*.",
"Fig.",
"4 would be better with the x-axis on a log scale."
] | {
"criticism": 2,
"example": 2,
"importance_and_relevance": 4,
"materials_and_methods": 7,
"praise": 2,
"presentation_and_reporting": 2,
"results_and_discussion": 2,
"suggestion_and_solution": 4,
"total": 11
} | 0.181818 | 0.181818 | 0.363636 | 0.636364 | 0.181818 | 0.181818 | 0.181818 | 0.363636 | 11 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.272727 | 2.209598 | 0.06313 |
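The multitask architecture in the retina record above, a recurrent core shared across neurons with a per-cell linear readout, fits in a few lines. The model below is untrained and every size is invented; it shows only the weight-sharing structure, not the fitting procedure or the GLM-RNN hybrid.

```python
import numpy as np

rng = np.random.default_rng(0)
S, H, N = 16, 8, 5                         # stimulus dim, hidden units, neurons (arbitrary)
Wx = rng.normal(scale=0.1, size=(H, S))    # shared input weights
Wh = rng.normal(scale=0.1, size=(H, H))    # shared recurrent weights
Wout = rng.normal(scale=0.1, size=(N, H))  # one readout row per neuron

def predict_rates(stimulus):
    """Map a (T, S) stimulus movie to (T, N) nonnegative firing rates."""
    h = np.zeros(H)
    rates = np.empty((len(stimulus), N))
    for t, x in enumerate(stimulus):
        h = np.tanh(Wx @ x + Wh @ h)
        rates[t] = np.logaddexp(0.0, Wout @ h)   # softplus keeps rates >= 0
    return rates

print(predict_rates(rng.normal(size=(200, S))).shape)   # (200, 5)
```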
Efficient Vector Representation for Documents through Corruption | We present an efficient document representation learning framework, Document Vector through Corruption (Doc2VecC). Doc2VecC represents each document as a simple average of word embeddings. It ensures a representation generated as such captures the semantic meanings of the document during learning. A corruption model is included, which introduces a data-dependent regularization that favors informative or rare words while forcing the embeddings of common and non-discriminative ones to be close to zero. Doc2VecC produces significantly better word embeddings than Word2Vec. We compare Doc2VecC with several state-of-the-art document representation learning algorithms. The simple model architecture introduced by Doc2VecC matches or out-performs the state-of-the-art in generating high-quality document representations for sentiment analysis, document classification as well as semantic relatedness tasks. The simplicity of the model enables training on billions of words per hour on a single machine. At the same time, the model is very efficient in generating representations of unseen documents at test time. | This paper discusses a method for computing vector representations for documents by using a skip-gram style learning mechanism with an added regularizer in the form of a global context vector with various bits of dropout. While none of the individual components proposed in this paper are new, I believe that the combination in this fashion is. Further, I appreciated the detailed analysis of model behaviour in section 3. The main downside to this submission is in its relative weakness on the empirical front. Arguably there are more interesting tasks than sentiment analysis and k-way classification! Likewise, why waste 2/3 of a page on t-SNE projections rather than use that space for further analysis? While I am a bit disappointed by this reduced evaluation and agree with the other reviewers concerning soft baselines, I think this paper should be accepted: it's an interesting algorithm, nicely composed and very efficient, so it's reasonable to assume that other readers might have use for some of the ideas presented here. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEfficient Vector Representation for Documents through Corruption\nWe present an efficient document representation learning framework, Document Vector through Corruption (Doc2VecC). Doc2VecC represents each document as a simple average of word embeddings. It ensures a representation generated as such captures the semantic meanings of the document during learning. A corruption model is included, which introduces a data-dependent regularization that favors informative or rare words while forcing the embeddings of common and non-discriminative ones to be close to zero. Doc2VecC produces significantly better word embeddings than Word2Vec. We compare Doc2VecC with several state-of-the-art document representation learning algorithms. The simple model architecture introduced by Doc2VecC matches or out-performs the state-of-the-art in generating high-quality document representations for sentiment analysis, document classification as well as semantic relatedness tasks. The simplicity of the model enables training on billions of words per hour on a single machine. At the same time, the model is very efficient in generating representations of unseen documents at test time.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper discusses a method for computing vector representations for documents by using a skip-gram style learning mechanism with an added regularizer in the form of a global context vector with various bits of drop out. While none of the individual components proposed in this paper are new, I believe that the combination in this fashion is. Further, I appreciated the detailed analysis of model behaviour in section 3. The main downside to this submission is in its relative weakness on the empirical front. Arguably there are more interesting tasks than sentiment analysis and k-way classification! Likewise, why waste 2/3 of a page on t-sne projections rather than use that space for further analysis? While I am a bit disappointed by this reduced evaluation and agree with the other reviewers concerning soft baselines, I think this paper should be accepted: it*s an interesting algorithm, nicely composed and very efficient, so it*s reasonable to assume that other readers might have use for some of the ideas presented here.",
"role": "assistant"
}
]
| [
"This paper discusses a method for computing vector representations for documents by using a skip-gram style learning mechanism with an added regularizer in the form of a global context vector with various bits of drop out.",
"While none of the individual components proposed in this paper are new, I believe that the combination in this fashion is.",
"Further, I appreciated the detailed analysis of model behaviour in section 3.",
"The main downside to this submission is in its relative weakness on the empirical front.",
"Arguably there are more interesting tasks than sentiment analysis and k-way classification!",
"Likewise, why waste 2/3 of a page on t-sne projections rather than use that space for further analysis?",
"While I am a bit disappointed by this reduced evaluation and agree with the other reviewers concerning soft baselines, I think this paper should be accepted: it*s an interesting algorithm, nicely composed and very efficient, so it*s reasonable to assume that other readers might have use for some of the ideas presented here."
] | {
"criticism": 1,
"example": 1,
"importance_and_relevance": 2,
"materials_and_methods": 6,
"praise": 3,
"presentation_and_reporting": 1,
"results_and_discussion": 0,
"suggestion_and_solution": 2,
"total": 7
} | 0.142857 | 0.142857 | 0.285714 | 0.857143 | 0.428571 | 0.142857 | 0 | 0.285714 | 7 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 2.285714 | 1.717547 | 0.568167 |
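Both reviews of this record concern the same mechanism, so a minimal sketch may help: the Doc2VecC document vector is just the average of its word embeddings, and during training a random fraction of the words is removed (the *corruption*). The snippet below is an illustrative reconstruction from the abstract, not the authors' implementation; the embedding matrix `W`, the corruption rate `q`, and all sizes are assumed placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 10_000, 100                      # assumed vocabulary size and embedding dimension
W = rng.normal(scale=0.1, size=(V, d))  # word embeddings (learned jointly in the real model)
q = 0.5                                 # assumed corruption (word-dropout) rate

def doc_vector(word_ids, corrupt=False):
    """Document representation: a (possibly corrupted) average of word embeddings."""
    vecs = W[word_ids]                           # (n_words, d)
    if corrupt:
        keep = rng.random(len(word_ids)) >= q    # drop each word with probability q
        vecs = vecs * keep[:, None] / (1.0 - q)  # rescale survivors so the mean is unbiased
    return vecs.mean(axis=0)

doc = rng.integers(0, V, size=50)                # a toy document of 50 word ids
print(doc_vector(doc, corrupt=True).shape)       # (100,)
```

At test time the corruption is switched off and the representation is a plain average, which is the cheap-inference property the abstract highlights.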
Efficient Vector Representation for Documents through Corruption | We present an efficient document representation learning framework, Document Vector through Corruption (Doc2VecC). Doc2VecC represents each document as a simple average of word embeddings. It ensures a representation generated as such captures the semantic meanings of the document during learning. A corruption model is included, which introduces a data-dependent regularization that favors informative or rare words while forcing the embeddings of common and non-discriminative ones to be close to zero. Doc2VecC produces significantly better word embeddings than Word2Vec. We compare Doc2VecC with several state-of-the-art document representation learning algorithms. The simple model architecture introduced by Doc2VecC matches or out-performs the state-of-the-art in generating high-quality document representations for sentiment analysis, document classification as well as semantic relatedness tasks. The simplicity of the model enables training on billions of words per hour on a single machine. At the same time, the model is very efficient in generating representations of unseen documents at test time. | This paper proposes learning document embeddings as a sum of the constituent word embeddings, which are jointly learned and randomly dropped out (*corrupted*) during training. While none of the pieces of this model are particularly novel, the result is an efficient learning algorithm for document representation with good empirical performance. Joint training of word and document embeddings is not a new idea, nor is the idea of enforcing the document to be represented by the sum of its word embeddings (see, e.g. *“The Sum of Its Parts”: Joint Learning of Word and Phrase Representations with Autoencoders* by Lebret and Collobert). Furthermore, the corruption mechanism is nothing other than traditional dropout on the input layer. Coupled with the word2vec-style loss and training methods, this paper offers little on the novelty front. On the other hand, it is very efficient at generation time, requiring only an average of the word embeddings rather than a complicated inference step as in Doc2Vec. Moreover, by construction, the embedding captures salient global information about the document -- it captures specifically that information that aids in local-context prediction. For such a simple model, the performance on sentiment analysis and document classification is quite encouraging. Overall, despite the lack of novelty, the simplicity, efficiency, and performance of this model make it worthy of wider readership and study, and I recommend acceptance. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEfficient Vector Representation for Documents through Corruption\nWe present an efficient document representation learning framework, Document Vector through Corruption (Doc2VecC). Doc2VecC represents each document as a simple average of word embeddings. It ensures a representation generated as such captures the semantic meanings of the document during learning. A corruption model is included, which introduces a data-dependent regularization that favors informative or rare words while forcing the embeddings of common and non-discriminative ones to be close to zero. Doc2VecC produces significantly better word embeddings than Word2Vec. We compare Doc2VecC with several state-of-the-art document representation learning algorithms. The simple model architecture introduced by Doc2VecC matches or out-performs the state-of-the-art in generating high-quality document representations for sentiment analysis, document classification as well as semantic relatedness tasks. The simplicity of the model enables training on billions of words per hour on a single machine. At the same time, the model is very efficient in generating representations of unseen documents at test time.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes learning document embeddings as a sum of the constituent word embeddings, which are jointly learned and randomly dropped out (*corrupted*) during training. While none of the pieces of this model are particularly novel, the result is an efficient learning algorithm for document representation with good empirical performance. Joint training of word and document embeddings is not a new idea, nor is the idea of enforcing the document to be represented by the sum of its word embeddings (see, e.g. *“The Sum of Its Parts”: Joint Learning of Word and Phrase Representations with Autoencoders* by Lebret and Collobert). Furthermore, the corruption mechanism is nothing other than traditional dropout on the input layer. Coupled with the word2vec-style loss and training methods, this paper offers little on the novelty front. On the other hand, it is very efficient at generation time, requiring only an average of the word embeddings rather than a complicated inference step as in Doc2Vec. Moreover, by construction, the embedding captures salient global information about the document -- it captures specifically that information that aids in local-context prediction. For such a simple model, the performance on sentiment analysis and document classification is quite encouraging. Overall, despite the lack of novelty, the simplicity, efficiency, and performance of this model make it worthy of wider readership and study, and I recommend acceptance.",
"role": "assistant"
}
]
| [
"This paper proposes learning document embeddings as a sum of the constituent word embeddings, which are jointly learned and randomly dropped out (*corrupted*) during training.",
"While none of the pieces of this model are particularly novel, the result is an efficient learning algorithm for document representation with good empirical performance.",
"Joint training of word and document embeddings is not a new idea, nor is the idea of enforcing the document to be represented by the sum of its word embeddings (see, e.g.",
"*“The Sum of Its Parts”: Joint Learning of Word and Phrase Representations with Autoencoders* by Lebret and Collobert).",
"Furthermore, the corruption mechanism is nothing other than traditional dropout on the input layer.",
"Coupled with the word2vec-style loss and training methods, this paper offers little on the novelty front.",
"On the other hand, it is very efficient at generation time, requiring only an average of the word embeddings rather than a complicated inference step as in Doc2Vec.",
"Moreover, by construction, the embedding captures salient global information about the document -- it captures specifically that information that aids in local-context prediction.",
"For such a simple model, the performance on sentiment analysis and document classification is quite encouraging.",
"Overall, despite the lack of novelty, the simplicity, efficiency, and performance of this model make it worthy of wider readership and study, and I recommend acceptance."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 6,
"praise": 3,
"presentation_and_reporting": 5,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 10
} | 0.3 | 0 | 0.3 | 0.6 | 0.3 | 0.5 | 0.1 | 0.1 | 10 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.2 | 2.057958 | 0.142042 |
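The review's remark that the corruption is nothing other than traditional dropout on the input layer can be made concrete: zeroing entries of the document's bag-of-words vector before the embedding lookup is, up to masking word types rather than individual tokens, the same operation as dropping words. A toy illustration under that simplification; every name in it is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
V, d, q = 1_000, 16, 0.5
W = rng.normal(size=(V, d))              # assumed embedding matrix

bow = np.zeros(V)
bow[[3, 7, 42]] = [1, 1, 2]              # toy bag-of-words counts for one document
mask = (rng.random(V) >= q) / (1.0 - q)  # inverted-dropout mask on the input layer
doc_vec = (bow * mask) @ W / bow.sum()   # corrupted average of the word embeddings

print(doc_vec.shape)                     # (16,)
```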
Transfer of View-manifold Learning to Similarity Perception of Novel Objects | We develop a model of perceptual similarity judgment based on re-training a deep convolution neural network (DCNN) that learns to associate different views of each 3D object to capture the notion of object persistence and continuity in our visual experience. The re-training process effectively performs distance metric learning under the object persistency constraints, to modify the view-manifold of object representations. It reduces the effective distance between the representations of different views of the same object without compromising the distance between those of the views of different objects, resulting in the untangling of the view-manifolds between individual objects within the same category and across categories. This untangling enables the model to discriminate and recognize objects within the same category, independent of viewpoints. We found that this ability is not limited to the trained objects, but transfers to novel objects in both trained and untrained categories, as well as to a variety of completely novel artificial synthetic objects. This transfer in learning suggests the modification of distance metrics in view- manifolds is more general and abstract, likely at the levels of parts, and independent of the specific objects or categories experienced during training. Interestingly, the resulting transformation of feature representation in the deep networks is found to significantly better match human perceptual similarity judgment than AlexNet, suggesting that object persistence could be an important constraint in the development of perceptual similarity judgment in biological neural networks. | On one hand this paper is fairly standard in that it uses deep metric learning with a Siamese architecture. On the other, the connection to human perception involving persistence is quite interesting. I'm not an expert in human vision, but the comparison in general and the induced hierarchical groupings in particular seem like something that should interest people in this community. The experimental suite is ok but I was disappointed that it is 100% synthetic. The authors could have used a minimally viable real dataset such as ALOI http://aloi.science.uva.nl . In summary, the mechanics of the proposed approach are not new, but the findings about the transfer of similarity judgement to novel object classes are interesting. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. 
I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTransfer of View-manifold Learning to Similarity Perception of Novel Objects\nWe develop a model of perceptual similarity judgment based on re-training a deep convolution neural network (DCNN) that learns to associate different views of each 3D object to capture the notion of object persistence and continuity in our visual experience. The re-training process effectively performs distance metric learning under the object persistency constraints, to modify the view-manifold of object representations. It reduces the effective distance between the representations of different views of the same object without compromising the distance between those of the views of different objects, resulting in the untangling of the view-manifolds between individual objects within the same category and across categories. This untangling enables the model to discriminate and recognize objects within the same category, independent of viewpoints. We found that this ability is not limited to the trained objects, but transfers to novel objects in both trained and untrained categories, as well as to a variety of completely novel artificial synthetic objects. This transfer in learning suggests the modification of distance metrics in view- manifolds is more general and abstract, likely at the levels of parts, and independent of the specific objects or categories experienced during training. Interestingly, the resulting transformation of feature representation in the deep networks is found to significantly better match human perceptual similarity judgment than AlexNet, suggesting that object persistence could be an important constraint in the development of perceptual similarity judgment in biological neural networks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nOn one hand this paper is fairly standard in that it uses deep metric learning with a Siamese architecture. On the other, the connections to human perception involving persistence is quite interesting. I*m not an expert in human vision, but the comparison in general and the induced hierarchical groupings in particular seem like something that should interest people in this community. The experimental suite is ok but I was disappointed that it is 100% synthetic. The authors could have used a minimally viable real dataset such as ALOI http://aloi.science.uva.nl . In summary, the mechanics of the proposed approach are not new, but the findings about the transfer of similarity judgement to novel object classes are interesting.",
"role": "assistant"
}
]
| [
"On one hand this paper is fairly standard in that it uses deep metric learning with a Siamese architecture.",
"On the other, the connections to human perception involving persistence is quite interesting.",
"I*m not an expert in human vision, but the comparison in general and the induced hierarchical groupings in particular seem like something that should interest people in this community.",
"The experimental suite is ok but I was disappointed that it is 100% synthetic.",
"The authors could have used a minimally viable real dataset such as ALOI http://aloi.science.uva.nl .",
"In summary, the mechanics of the proposed approach are not new, but the findings about the transfer of similarity judgement to novel object classes are interesting."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 5,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 6
} | 0.166667 | 0 | 0.5 | 0.833333 | 0.333333 | 0 | 0.166667 | 0.166667 | 6 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.166667 | 1.393328 | 0.773339 |
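The deep metric learning with a Siamese architecture that the review calls fairly standard is typically trained with a contrastive loss: embeddings of two views of the same object are pulled together, while views of different objects are pushed apart up to a margin, which is one way to encode the object-persistence constraint from the abstract. Below is a generic sketch, not the paper's exact network or loss; the margin value and the toy embeddings are made up.

```python
import numpy as np

def contrastive_loss(z1, z2, same_object, margin=1.0):
    """Pull embeddings of two views together if they share an object, else push apart."""
    dist = np.linalg.norm(z1 - z2)
    if same_object:
        return 0.5 * dist ** 2                 # object persistence: shrink view distance
    return 0.5 * max(0.0, margin - dist) ** 2  # keep distinct objects at least `margin` apart

z_a = np.array([0.2, 0.9, -0.4])               # toy embeddings of two views
z_b = np.array([0.3, 0.8, -0.5])
print(contrastive_loss(z_a, z_b, same_object=True))
```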
What does it take to generate natural textures? | Natural image generation is currently one of the most actively explored fields in Deep Learning. Many approaches, e.g. for state-of-the-art artistic style transfer or natural texture synthesis, rely on the statistics of hierarchical representations in supervisedly trained deep neural networks. It is, however, unclear what aspects of this feature representation are crucial for natural image generation: is it the depth, the pooling or the training of the features on natural images? We here address this question for the task of natural texture synthesis and show that none of the above aspects are indispensable. Instead, we demonstrate that natural textures of high perceptual quality can be generated from networks with only a single layer, no pooling and random filters. | This paper provides an interesting analysis of the conditions which enable generation of natural looking textures. The results are quite surprising, and the analysis is quite thorough. I do think the evaluation methods require more work, but as other reviewers mentioned this could be an interesting line of work moving forwards and does not detract too much from this current paper which, I think, should be accepted. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nWhat does it take to generate natural textures?\nNatural image generation is currently one of the most actively explored fields in Deep Learning. Many approaches, e.g. for state-of-the-art artistic style transfer or natural texture synthesis, rely on the statistics of hierarchical representations in supervisedly trained deep neural networks. It is, however, unclear what aspects of this feature representation are crucial for natural image generation: is it the depth, the pooling or the training of the features on natural images? We here address this question for the task of natural texture synthesis and show that none of the above aspects are indispensable. Instead, we demonstrate that natural textures of high perceptual quality can be generated from networks with only a single layer, no pooling and random filters.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper provides an interesting analysis of the conditions which enable generation of natural looking textures. The results is quite surprising, and analysis is quite thorough. I do think the evaluation methods require more work, but as other reviewers mentioned this could be an interesting line of work moving forwards and does not take too much from this current paper which, I think, should be accepted.",
"role": "assistant"
}
]
| [
"This paper provides an interesting analysis of the conditions which enable generation of natural looking textures.",
"The results is quite surprising, and analysis is quite thorough.",
"I do think the evaluation methods require more work, but as other reviewers mentioned this could be an interesting line of work moving forwards and does not take too much from this current paper which, I think, should be accepted."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 3,
"praise": 3,
"presentation_and_reporting": 0,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 3
} | 0.333333 | 0 | 0.666667 | 1 | 1 | 0 | 0.333333 | 0.333333 | 3 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 3.666667 | 2.088424 | 1.578243 |
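The texture descriptor behind this paper's claim is the Gram matrix of feature responses; with a single layer of random filters and no pooling, as the abstract proposes, it reduces to a few lines. This is a sketch under assumed shapes and filter counts, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_features(img, filters):
    """Valid 2-D convolution of a grayscale image with a filter bank, followed by ReLU."""
    k = filters.shape[-1]
    patches = np.lib.stride_tricks.sliding_window_view(img, (k, k)).reshape(-1, k * k)
    return np.maximum(patches @ filters.reshape(len(filters), -1).T, 0.0)  # (positions, C)

def gram(feats):
    """Empirical Gram matrix: the correlation statistics used as the texture descriptor."""
    return feats.T @ feats / len(feats)                                    # (C, C)

img = rng.random((64, 64))                      # stand-in texture image
filters = rng.normal(size=(32, 11, 11))         # one layer, random filters, no pooling
print(gram(conv_features(img, filters)).shape)  # (32, 32)
```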
What does it take to generate natural textures? | Natural image generation is currently one of the most actively explored fields in Deep Learning. Many approaches, e.g. for state-of-the-art artistic style transfer or natural texture synthesis, rely on the statistics of hierarchical representations in supervisedly trained deep neural networks. It is, however, unclear what aspects of this feature representation are crucial for natural image generation: is it the depth, the pooling or the training of the features on natural images? We here address this question for the task of natural texture synthesis and show that none of the above aspects are indispensable. Instead, we demonstrate that natural textures of high perceptual quality can be generated from networks with only a single layer, no pooling and random filters. | The framework of Gatys et al. demonstrated that correlation statistics (empirical Gram matrices) of deep feature responses provide an excellent characterisation of visual textures. This paper investigates in detail which kind of deep or shallow networks may work well in this framework. One of the main findings is that very shallow nets, consisting of a single filter bank with random weights, work surprisingly well, and for simple and regular textures may produce results which are visually superior to complex data-adapted filters such as the ones in networks like VGG-19. More broadly, the paper contains an interesting and informative discussion of the strengths and limitations of such methods for texture synthesis. Figure 4 shows that the optimisation of images with respect to shallow filter banks may result in texture images that have a lower VGG-19 loss than optimising the VGG-19 objective directly. This is imputed to the difficulty of optimising the highly non-linear VGG-19 cost function, which is a reasonable explanation. In the new supplementary material, the authors show that better optimisation results can be obtained by initialising the VGG-19-based optimisation with the shallow network optimisation results, which is a useful complement to the original experiments. The main limitation of the paper is that it does not systematically compare different methods against a quantifiable objective. It is trivial to define image statistics that would allow one to simply generate an exact copy of any reference texture, hence with very good visual quality. Such trivial statistics would also be very shallow. The aim is instead to capture a texture distribution, and measuring how well a method meets this challenge remains an open problem. Hence, while the empirical results seem to confirm the intuition that simple statistics are good enough for texture synthesis both in terms of quality and diversity (when compared to more complex statistics), it is difficult to conclusively confirm that this is the case. The authors indicate that diversity could be measured in terms of entropy. This is reasonable, but, as they acknowledge in their answers to questions, difficult to do in practice. Furthermore, this would still not account for the other aspect of the problem, namely visual quality. They also suggest performing a psychophysical assessment, which may be the only practical way of addressing this problem, but deem that to be material for future work. Overall, since evaluation of image generation is such a hard problem, I think the paper still has sufficient strengths to warrant publication in ICLR. 
Still, some form of psychophysical assessment would be useful to confirm the intuitions that, at present, can only be obtained by inspecting the figures in the paper and in the supplementary material. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nWhat does it take to generate natural textures?\nNatural image generation is currently one of the most actively explored fields in Deep Learning. Many approaches, e.g. for state-of-the-art artistic style transfer or natural texture synthesis, rely on the statistics of hierarchical representations in supervisedly trained deep neural networks. It is, however, unclear what aspects of this feature representation are crucial for natural image generation: is it the depth, the pooling or the training of the features on natural images? We here address this question for the task of natural texture synthesis and show that none of the above aspects are indispensable. Instead, we demonstrate that natural textures of high perceptual quality can be generated from networks with only a single layer, no pooling and random filters.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe framework of Gatys et al. demonstrated that correlation statistics (empirical Gram matrices) of deep feature responses provide an excellent characterisation of visual textures. This paper investigates in detail which kind of deep or shallow networks may work well in this framework. One of the main findings is that that very shallow nets, consisting of a single filter bank with random weights, work surprisingly well, and for simple and regular textures may produce results which are visually superior to complex data-adapted filters such as the ones in networks like VGG-19. More broadly, the paper contains an interesting and informative discussion on the strength and limitations on such methods for texture synthesis. Figure 4 shows that the optimisation of images with respect to shallow filter banks may result in texture images that have a lower VGG-19 loss than optimising the VGG-19 objective directly. This is imputed to the difficulty of optimising the highly non-linear VGG-19 cost function, which is a reasonable explanation. In the new supplementary material, the authors show that better optimisation results can be obtained by initialising the VGG-19-based optimisation with the shallow network optimisation results, which is a useful complement to the original experiments. The main limitation of the paper is that it does not systematically compare different methods against a quantifiable objective. It is trivial to define image statistics that would allow to simply generate an exact copy of any reference texture, hence with very good visual quality. Such trivial statistics would also be very shallow. The aim is instead to capture a texture distribution, and measuring how well a method meets this challenge remains an open problem. Hence, while the empirical results seem to confirm the intuition that simple statistics are good enough for texture synthesis both in terms of quality and diversity (when compared to more complex statistics), it is difficult to conclusively confirm that this is the case. The authors indicate that diversity could be measured in terms of entropy. 
This is reasonable, but, as they acknowledge in their answers to questions, difficult to do in practice. Furthermore, this would still not account for the other aspect of the problem, namely visual quality. They also suggest performing a psychophysical assessment, which may be the only practical way of addressing this problem, but deem that to be material for future work. Overall, since evaluation of image generation is such a hard problem, I think the paper still has sufficient strengths to warrant publication in ICLR. Still, some form of psychophysical assessment would be useful to confirm the intuitions that, at present, can only be obtained by inspecting the figures in the paper and in the supplementary material.",
"role": "assistant"
}
]
| [
"The framework of Gatys et al.",
"demonstrated that correlation statistics (empirical Gram matrices) of deep feature responses provide an excellent characterisation of visual textures.",
"This paper investigates in detail which kind of deep or shallow networks may work well in this framework.",
"One of the main findings is that that very shallow nets, consisting of a single filter bank with random weights, work surprisingly well, and for simple and regular textures may produce results which are visually superior to complex data-adapted filters such as the ones in networks like VGG-19.",
"More broadly, the paper contains an interesting and informative discussion on the strength and limitations on such methods for texture synthesis.",
"Figure 4 shows that the optimisation of images with respect to shallow filter banks may result in texture images that have a lower VGG-19 loss than optimising the VGG-19 objective directly.",
"This is imputed to the difficulty of optimising the highly non-linear VGG-19 cost function, which is a reasonable explanation.",
"In the new supplementary material, the authors show that better optimisation results can be obtained by initialising the VGG-19-based optimisation with the shallow network optimisation results, which is a useful complement to the original experiments.",
"The main limitation of the paper is that it does not systematically compare different methods against a quantifiable objective.",
"It is trivial to define image statistics that would allow to simply generate an exact copy of any reference texture, hence with very good visual quality.",
"Such trivial statistics would also be very shallow.",
"The aim is instead to capture a texture distribution, and measuring how well a method meets this challenge remains an open problem.",
"Hence, while the empirical results seem to confirm the intuition that simple statistics are good enough for texture synthesis both in terms of quality and diversity (when compared to more complex statistics), it is difficult to conclusively confirm that this is the case.",
"The authors indicate that diversity could be measured in terms of entropy.",
"This is reasonable, but, as they acknowledge in their answers to questions, difficult to do in practice.",
"Furthermore, this would still not account for the other aspect of the problem, namely visual quality.",
"They also suggest to perform a psychophysical assessment, which may be the only practical way of addressing this problem, but deem that to be material for future work.",
"Overall, since evaluation of image generation is such an hard problem, I think the paper still has sufficient strengths to warrant publication in ICLR.",
"Still, some form of psychophysical assessment would be useful to confirm the intuitions that, at present, can only be obtained by inspecting the figure sin the paper and in the supplementary material."
] | {
"criticism": 4,
"example": 1,
"importance_and_relevance": 6,
"materials_and_methods": 16,
"praise": 4,
"presentation_and_reporting": 1,
"results_and_discussion": 7,
"suggestion_and_solution": 4,
"total": 19
} | 0.210526 | 0.052632 | 0.315789 | 0.842105 | 0.210526 | 0.052632 | 0.368421 | 0.210526 | 19 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.263158 | 1.69499 | 0.568167 |
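The synthesis step that this second review evaluates (and whose optimisation difficulty it discusses for VGG-19) is gradient descent on the image against a Gram-matrix loss. The sketch below uses a single linear random filter bank so the gradient can be derived by hand; real implementations optimise deep ReLU features, typically with L-BFGS, and the step size, filter size, and iteration count here are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
k, C = 7, 16
K = rng.normal(size=(C, k * k))          # random linear filters: one layer, no pooling

def patches(img):
    return np.lib.stride_tricks.sliding_window_view(img, (k, k)).reshape(-1, k * k)

def gram(img):
    F = patches(img) @ K.T               # feature responses, (positions, C)
    return F.T @ F / len(F)              # texture descriptor, (C, C)

def synthesize(target, steps=50, lr=0.01):
    x = rng.random(target.shape)         # start from noise
    G_star = gram(target)
    hp, wp = target.shape[0] - k + 1, target.shape[1] - k + 1
    for _ in range(steps):
        P = patches(x)
        F = P @ K.T
        dF = 4.0 / len(F) * F @ (F.T @ F / len(F) - G_star)  # d||G - G*||^2 / dF
        dP = dF @ K                                          # back through the filter bank
        dx = np.zeros_like(x)
        for i in range(hp):                                  # scatter patch grads to pixels
            for j in range(wp):
                dx[i:i + k, j:j + k] += dP[i * wp + j].reshape(k, k)
        x -= lr * dx
    return x

tex = rng.random((32, 32))                                   # stand-in "texture"
out = synthesize(tex)
print(float(np.abs(gram(out) - gram(tex)).mean()))           # Gram mismatch after descent
```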
Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards that of higher concentration than that of the data generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data generating distribution during the early phases of training, thus providing a unified solution to the missing modes problem. | This paper does a good job of clearly articulating a problem in contemporary training of GANs, coming up with an intuitive solution via regularizers in addition to optimizing only the discriminator score, and conducting clever experiments to show that the regularizers have the intended effect. There are recent related and improved GAN variants (ALI, VAEGAN, potentially others), which are included in qualitative comparisons, but not quantitative. It would be interesting to see whether these other types of modified GANs already make some progress in addressing the missing modes problem. If code is available for those methods, the paper could be strengthened a lot by running the mode-missing benchmarks on them (even if it turns out that a *competing* method can get a better result in some cases). The experiments on digits and faces are good for validating the proposed regularizers. However, if the authors can show better results on CIFAR-10, ImageNet, MS-COCO or some other more diverse and challenging dataset, I would be more convinced of the value of the proposed method. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMode Regularized Generative Adversarial Networks\nAlthough Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards that of higher concentration than that of the data generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data generating distribution during the early phases of training, thus providing a unified solution to the missing modes problem.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper does a good job of clearly articulating a problem in contemporary training of GANs, coming up with an intuitive solution via regularizers in addition to optimizing only the discriminator score, and conducting clever experiments to show that the regularizers have the intended effect. There are recent related and improved GAN variants (ALI, VAEGAN, potentially others), which are included in qualitative comparisons, but not quantitative. It would be interesting to see whether these other types of modified GANs already make some progress in addressing the missing modes problem. If code is available for those methods, the paper could be strengthened a lot by running the mode-missing benchmarks on them (even if it turns out that a *competing* method can get a better result in some cases). The experiments on digits and faces are good for validating the proposed regularizers. However, if the authors can show better results on CIFAR-10, ImageNet, MS-COCO or some other more diverse and challenging dataset, I would be more convinced of the value of the proposed method.",
"role": "assistant"
}
]
| [
"This paper does a good job of clearly articulating a problem in contemporary training of GANs, coming up with an intuitive solution via regularizers in addition to optimizing only the discriminator score, and conducting clever experiments to show that the regularizers have the intended effect.",
"There are recent related and improved GAN variants (ALI, VAEGAN, potentially others), which are included in qualitative comparisons, but not quantitative.",
"It would be interesting to see whether these other types of modified GANs already make some progress in addressing the missing modes problem.",
"If code is available for those methods, the paper could be strengthened a lot by running the mode-missing benchmarks on them (even if it turns out that a *competing* method can get a better result in some cases).",
"The experiments on digits and faces are good for validating the proposed regularizers.",
"However, if the authors can show better results on CIFAR-10, ImageNet, MS-COCO or some other more diverse and challenging dataset, I would be more convinced of the value of the proposed method."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 0,
"materials_and_methods": 6,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 3,
"suggestion_and_solution": 3,
"total": 6
} | 0 | 0 | 0 | 1 | 0.333333 | 0.166667 | 0.5 | 0.5 | 6 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 2.5 | 1.726661 | 0.773339 |
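The abstract names its regularizers only abstractly, so the following is a plausible reconstruction rather than the paper's exact objective: pair the generator with an encoder and add reconstruction-based penalties, so that every data mode has a latent preimage and the generator is not rewarded for collapsing onto a single mode. The toy 1-D models `G`, `E`, `D` and the weights `lam1`, `lam2` are all stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable stand-ins on 1-D data; real models are neural networks.
G = lambda z: 2.0 * z                    # generator
E = lambda x: 0.5 * x                    # encoder mapping data back to latent space
D = lambda x: 1.0 / (1.0 + np.exp(-x))   # discriminator's probability of "real"

def generator_loss(x_real, z, lam1=0.1, lam2=0.1):
    """GAN objective plus mode-regularizing terms (one plausible form)."""
    adv = -np.log(D(G(z)) + 1e-8).mean()                # standard: fool the discriminator
    recon = ((x_real - G(E(x_real))) ** 2).mean()       # every real mode must be reachable
    adv_rec = -np.log(D(G(E(x_real))) + 1e-8).mean()    # reconstructions should look real too
    return adv + lam1 * recon + lam2 * adv_rec

x = rng.normal(size=64)                  # a batch of real samples
z = rng.normal(size=64)                  # a batch of latent draws
print(generator_loss(x, z))
```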
Highway and Residual Networks learn Unrolled Iterative Estimation | The past year saw the introduction of new architectures such as Highway networks and Residual networks which, for the first time, enabled the training of feedforward networks with dozens to hundreds of layers using simple gradient descent. While depth of representation has been posited as a primary reason for their success, there are indications that these architectures defy a popular view of deep learning as a hierarchical computation of increasingly abstract features at each layer. In this report, we argue that this view is incomplete and does not adequately explain several recent findings. We propose an alternative viewpoint based on unrolled iterative estimation---a group of successive layers iteratively refine their estimates of the same features instead of computing an entirely new representation. We demonstrate that this viewpoint directly leads to the construction of highway and residual networks. Finally we provide preliminary experiments to discuss the similarities and differences between the two architectures. | The paper describes an alternative view on hierarchical feature representations in deep neural networks. The viewpoint of refining representations is well motivated and is in agreement with the success of recent model structures like ResNets. Pros: - Good motivation for the effectiveness of ResNets and Highway networks - Convincing analysis and evaluation Cons: - The effect of this finding on the interpretation of batch-normalization is only captured briefly but seems to be significant - Explanation of findings in (Zeiler & Fergus (2014)) using UIE viewpoint missing Remarks: - Missing word in line 223: *that it *is* valid* | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nHighway and Residual Networks learn Unrolled Iterative Estimation\nThe past year saw the introduction of new architectures such as Highway networks and Residual networks which, for the first time, enabled the training of feedforward networks with dozens to hundreds of layers using simple gradient descent. While depth of representation has been posited as a primary reason for their success, there are indications that these architectures defy a popular view of deep learning as a hierarchical computation of increasingly abstract features at each layer. In this report, we argue that this view is incomplete and does not adequately explain several recent findings. We propose an alternative viewpoint based on unrolled iterative estimation---a group of successive layers iteratively refine their estimates of the same features instead of computing an entirely new representation. We demonstrate that this viewpoint directly leads to the construction of highway and residual networks. Finally we provide preliminary experiments to discuss the similarities and differences between the two architectures.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper describes an alternative view on hierarchical feature representations in deep neural networks. The viewpoint of refining representations is well motivated and is in agreement with the success of recent model structures like ResNets. Pros: - Good motivation for the effectiveness of ResNets and Highway networks - Convincing analysis and evaluation Cons: - The effect of this finding of the interpretation of batch-normalization is only captured briefly but seems to be significant - Explanation of findings in (Zeiler & Fergus (2014)) using UIE viewpoint missing Remarks: - Missing word in line 223: *that it *is* valid*",
"role": "assistant"
}
]
| [
"The paper describes an alternative view on hierarchical feature representations in deep neural networks.",
"The viewpoint of refining representations is well motivated and is in agreement with the success of recent model structures like ResNets.",
"Pros: - Good motivation for the effectiveness of ResNets and Highway networks - Convincing analysis and evaluation Cons: - The effect of this finding of the interpretation of batch-normalization is only captured briefly but seems to be significant - Explanation of findings in (Zeiler & Fergus (2014)) using UIE viewpoint missing Remarks: - Missing word in line 223: *that it *is* valid*"
] | {
"criticism": 0,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 2,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 0,
"total": 3
} | 0 | 0.333333 | 0.333333 | 0.666667 | 0.666667 | 0.333333 | 0.333333 | 0 | 3 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 2.666667 | 1.088424 | 1.578243 |
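Editorial note: the trailing numeric columns of each record in this file can be reproduced from the per-category annotation counts. Below is a minimal Python sketch of that derivation. The column semantics (fraction = count/total, flag = count > 0, reward = sum of fractions, adjusted reward = reward minus penalty) are assumptions inferred from and checked against the record above; the length-penalty formula is not recoverable from this section, so it is taken as an input.

```python
# Sketch (assumed semantics, verified against the record above): reproduce
# the derived columns of a record from its eight category counts.
CATEGORIES = [
    "criticism", "example", "importance_and_relevance",
    "materials_and_methods", "praise", "presentation_and_reporting",
    "results_and_discussion", "suggestion_and_solution",
]

def derive_columns(counts, total, length_penalty):
    fractions = {c: counts[c] / total for c in CATEGORIES}
    flags = {f"has_{c}": int(counts[c] > 0) for c in CATEGORIES}
    reward = sum(fractions.values())          # reward_value column
    return {
        **fractions,
        **flags,
        "reward_value": reward,
        "reward_value_length_adjusted": reward - length_penalty,
        "length_penalty": length_penalty,     # formula unknown; passed in
    }

# Check against the record above: counts 0,1,1,2,2,1,1,0 over 3 sentences.
cols = derive_columns(
    dict(zip(CATEGORIES, [0, 1, 1, 2, 2, 1, 1, 0])),
    total=3,
    length_penalty=1.578243,
)
assert abs(cols["reward_value"] - 2.666667) < 1e-5
assert abs(cols["reward_value_length_adjusted"] - 1.088424) < 1e-5
```

The same arithmetic matches every other record in this section (for instance, 28/13 = 2.153846 in the next record), which is what motivates the assumed semantics.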
Highway and Residual Networks learn Unrolled Iterative Estimation | The past year saw the introduction of new architectures such as Highway networks and Residual networks which, for the first time, enabled the training of feedforward networks with dozens to hundreds of layers using simple gradient descent. While depth of representation has been posited as a primary reason for their success, there are indications that these architectures defy a popular view of deep learning as a hierarchical computation of increasingly abstract features at each layer. In this report, we argue that this view is incomplete and does not adequately explain several recent findings. We propose an alternative viewpoint based on unrolled iterative estimation---a group of successive layers iteratively refine their estimates of the same features instead of computing an entirely new representation. We demonstrate that this viewpoint directly leads to the construction of highway and residual networks. Finally we provide preliminary experiments to discuss the similarities and differences between the two architectures. | This paper provides a new perspective on understanding ResNet and Highway networks. The new perspective assumes that the blocks inside networks with residual or skip connections are groups of successive layers with the same hidden size, which iteratively refine their estimates of the same features instead of generating new representations. Under this perspective, some contradictions with the traditional representation view induced by ResNet, Highway network and other papers can be well explained. The pros of the paper are: 1. A novel perspective for understanding the recent progress of neural networks is proposed. 2. The paper provides quantitative experiments to compare ResNet and Highway net, and shows results that contradict several claims from previous work. The authors also give discussions and explanations of the contradictions, which provide good insight into the disadvantages and advantages of these two kinds of networks. The main con of the paper is that the experiments are not sufficient. For example, since the main contribution of the paper is to propose *unrolled iterative estimation*, stage 4 of Figure 3 seems not to follow the assumption of *unrolled iterative estimation*, and the authors say: *We note that stage four (with three blocks) appears to be underestimating the representation values, indicating a probable weak link in the architecture.* Thus, it would be much better to do experiments showing that, under some conditions, the performance of stage 4 can follow the assumption. Moreover, the paper should provide more experiments showing evidence of *unrolled iterative estimation*, rather than comparing ResNet with Highway Net. The lack of experiments on this point is my main concern. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nHighway and Residual Networks learn Unrolled Iterative Estimation\nThe past year saw the introduction of new architectures such as Highway networks and Residual networks which, for the first time, enabled the training of feedforward networks with dozens to hundreds of layers using simple gradient descent. While depth of representation has been posited as a primary reason for their success, there are indications that these architectures defy a popular view of deep learning as a hierarchical computation of increasingly abstract features at each layer. In this report, we argue that this view is incomplete and does not adequately explain several recent findings. We propose an alternative viewpoint based on unrolled iterative estimation---a group of successive layers iteratively refine their estimates of the same features instead of computing an entirely new representation. We demonstrate that this viewpoint directly leads to the construction of highway and residual networks. Finally we provide preliminary experiments to discuss the similarities and differences between the two architectures.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper provides a new perspective to understanding the ResNet and Highway net. The new perspective assumes that the blocks inside the networks with residual or skip-connection are groups of successive layers with the same hidden size, which performs to iteratively refine their estimates of the same feature instead of generate new representations. Under this perspective, some contradictories with the traditional representation view induced by ResNet and Highway network and other paper can be well explained. The pros of the paper are: 1. A novel perspective to understand the recent progress of neural network is proposed. 2. The paper provides a quantitatively experimentals to compare ResNet and Highway net, and shows contradict results with several claims from previous work. The authors also give discussions and explanations about the contradictories, which provides a good insight of the disadvantages and advantages between these two kind of networks. The main cons of the paper is that the experiments are not sufficient. For example, since the main contribution of the paper is to propose the “unrolled iterative estimation* and the stage 4 of Figure 3 seems not follow the assumption of *unrolled iterative estimation* and the authors says: *We note that stage four (with three blocks) appears to be underestimating the representation values, indicating a probable weak link in the architecture.*. Thus, it would be much better to do experiments to show that under some condition, the performance of stage 4 can follow the assumption. Moreover, the paper should provide more experiments to show the evidence of *unrolled iterative estimation*, not comparing ResNet with Highway Net. The lack of experiments on this point is the main concern from myself.",
"role": "assistant"
}
]
| [
"This paper provides a new perspective to understanding the ResNet and Highway net.",
"The new perspective assumes that the blocks inside the networks with residual or skip-connection are groups of successive layers with the same hidden size, which performs to iteratively refine their estimates of the same feature instead of generate new representations.",
"Under this perspective, some contradictories with the traditional representation view induced by ResNet and Highway network and other paper can be well explained.",
"The pros of the paper are: 1.",
"A novel perspective to understand the recent progress of neural network is proposed.",
"2.",
"The paper provides a quantitatively experimentals to compare ResNet and Highway net, and shows contradict results with several claims from previous work.",
"The authors also give discussions and explanations about the contradictories, which provides a good insight of the disadvantages and advantages between these two kind of networks.",
"The main cons of the paper is that the experiments are not sufficient.",
"For example, since the main contribution of the paper is to propose the “unrolled iterative estimation* and the stage 4 of Figure 3 seems not follow the assumption of *unrolled iterative estimation* and the authors says: *We note that stage four (with three blocks) appears to be underestimating the representation values, indicating a probable weak link in the architecture.*.",
"Thus, it would be much better to do experiments to show that under some condition, the performance of stage 4 can follow the assumption.",
"Moreover, the paper should provide more experiments to show the evidence of *unrolled iterative estimation*, not comparing ResNet with Highway Net.",
"The lack of experiments on this point is the main concern from myself."
] | {
"criticism": 5,
"example": 1,
"importance_and_relevance": 4,
"materials_and_methods": 7,
"praise": 3,
"presentation_and_reporting": 1,
"results_and_discussion": 4,
"suggestion_and_solution": 3,
"total": 13
} | 0.384615 | 0.076923 | 0.307692 | 0.538462 | 0.230769 | 0.076923 | 0.307692 | 0.230769 | 13 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.153846 | 2.153846 | 0 |
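Editorial note: both records above summarize the paper's central claim that a stage of residual (or highway) blocks iteratively refines one feature estimate rather than computing a new representation at each layer. A minimal NumPy sketch of that reading follows; the two-layer block, the random weights, and the 0.1 scale are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
W1 = rng.normal(0.0, 0.1, (d, d))
W2 = rng.normal(0.0, 0.1, (d, d))

def residual_branch(x):
    # Toy stand-in for a block's residual branch F; a highway block would
    # instead gate the update: x <- T(x) * H(x) + (1 - T(x)) * x.
    return np.maximum(x @ W1, 0.0) @ W2

x = rng.normal(size=d)
estimate = x.copy()
for step in range(5):
    correction = residual_branch(estimate)
    estimate = estimate + correction       # x_{l+1} = x_l + F(x_l)
    # Under unrolled iterative estimation, successive blocks make small
    # corrective updates to the same estimate rather than replacing it.
    print(step, np.linalg.norm(correction) / np.linalg.norm(estimate))
```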
Improving Neural Language Models with a Continuous Cache | We propose an extension to neural network language models to adapt their prediction to the recent history. Our model is a simplified version of memory augmented networks, which stores past hidden activations as memory and accesses them through a dot product with the current hidden activation. This mechanism is very efficient and scales to very large memory sizes. We also draw a link between the use of external memory in neural network and cache models used with count based language models. We demonstrate on several language model datasets that our approach performs significantly better than recent memory augmented networks. | The authors present a simple method to affix a cache to neural language models, which provides in effect a copying mechanism from recently used words. Unlike much related work in neural networks with copying mechanisms, this mechanism need not be trained with long-term backpropagation, which makes it efficient and scalable to much larger cache sizes. They demonstrate good improvements on language modeling by adding this cache to RNN baselines. The main contribution of this paper is the observation that simply using the hidden states h_i as keys for words x_i, and h_t as the query vector, naturally gives a lookup mechanism that works fine without tuning by backprop. This is a simple observation and might already exist as folk knowledge among some people, but it has nice implications for scalability and the experiments are convincing. The basic idea of repurposing locally-learned representations for large-scale attention where backprop would normally be prohibitively expensive is an interesting one, and could probably be used to improve other types of memory networks. My main criticism of this work is its simplicity and incrementality when compared to previously existing literature. As a simple modification of existing NLP models, but with good empirical success, simplicity and practicality, it is probably more suitable for an NLP-specific conference. However, I think that approaches that distill recent work into a simple, efficient, applicable form should be rewarded and that this tool will be useful to a large enough portion of the ICLR community to recommend its publication. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nImproving Neural Language Models with a Continuous Cache\nWe propose an extension to neural network language models to adapt their prediction to the recent history. Our model is a simplified version of memory augmented networks, which stores past hidden activations as memory and accesses them through a dot product with the current hidden activation. This mechanism is very efficient and scales to very large memory sizes. We also draw a link between the use of external memory in neural network and cache models used with count based language models. We demonstrate on several language model datasets that our approach performs significantly better than recent memory augmented networks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors present a simple method to affix a cache to neural language models, which provides in effect a copying mechanism from recently used words. Unlike much related work in neural networks with copying mechanisms, this mechanism need not be trained with long-term backpropagation, which makes it efficient and scalable to much larger cache sizes. They demonstrate good improvements on language modeling by adding this cache to RNN baselines. The main contribution of this paper is the observation that simply using the hidden states h_i as keys for words x_i, and h_t as the query vector, naturally gives a lookup mechanism that works fine without tuning by backprop. This is a simple observation and might already exist as folk knowledge among some people, but it has nice implications for scalability and the experiments are convincing. The basic idea of repurposing locally-learned representations for large-scale attention where backprop would normally be prohibitively expensive is an interesting one, and could probably be used to improve other types of memory networks. My main criticism of this work is its simplicity and incrementality when compared to previously existing literature. As a simple modification of existing NLP models, but with good empirical success, simplicity and practicality, it is probably more suitable for an NLP-specific conference. However, I think that approaches that distill recent work into a simple, efficient, applicable form should be rewarded and that this tool will be useful to a large enough portion of the ICLR community to recommend its publication.",
"role": "assistant"
}
]
| [
"The authors present a simple method to affix a cache to neural language models, which provides in effect a copying mechanism from recently used words.",
"Unlike much related work in neural networks with copying mechanisms, this mechanism need not be trained with long-term backpropagation, which makes it efficient and scalable to much larger cache sizes.",
"They demonstrate good improvements on language modeling by adding this cache to RNN baselines.",
"The main contribution of this paper is the observation that simply using the hidden states h_i as keys for words x_i, and h_t as the query vector, naturally gives a lookup mechanism that works fine without tuning by backprop.",
"This is a simple observation and might already exist as folk knowledge among some people, but it has nice implications for scalability and the experiments are convincing.",
"The basic idea of repurposing locally-learned representations for large-scale attention where backprop would normally be prohibitively expensive is an interesting one, and could probably be used to improve other types of memory networks.",
"My main criticism of this work is its simplicity and incrementality when compared to previously existing literature.",
"As a simple modification of existing NLP models, but with good empirical success, simplicity and practicality, it is probably more suitable for an NLP-specific conference.",
"However, I think that approaches that distill recent work into a simple, efficient, applicable form should be rewarded and that this tool will be useful to a large enough portion of the ICLR community to recommend its publication."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 6,
"materials_and_methods": 6,
"praise": 5,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 2,
"total": 9
} | 0.111111 | 0 | 0.666667 | 0.666667 | 0.555556 | 0.111111 | 0.111111 | 0.222222 | 9 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.444444 | 2.191926 | 0.252519 |
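Editorial note: the review above describes the cache mechanism concretely: past hidden states h_i serve as keys for the words x_i, and the current state h_t is the query. A minimal NumPy sketch of that lookup follows. The exponential scoring with a flatness parameter theta and the linear interpolation weight lam are assumptions chosen for illustration, since the record itself only specifies a dot-product match between stored and current activations.

```python
import numpy as np

def cache_probs(h_t, past_h, past_x, vocab_size, theta=0.3):
    # Score each stored (h_i, x_i) pair by exp(theta * h_t . h_i) and
    # accumulate the mass on the word x_i observed at that step.
    scores = np.exp(theta * past_h @ h_t)
    p = np.zeros(vocab_size)
    for w, s in zip(past_x, scores):
        p[w] += s
    return p / p.sum()

def mix(p_model, p_cache, lam=0.1):
    # Interpolate the parametric model with the cache distribution.
    return (1.0 - lam) * p_model + lam * p_cache

rng = np.random.default_rng(1)
V, d, N = 50, 8, 20
past_h = rng.normal(size=(N, d))             # stored hidden states (keys)
past_x = rng.integers(0, V, size=N)          # words emitted at those steps
h_t = past_h[3] + 0.01 * rng.normal(size=d)  # query close to one stored key
p = mix(np.full(V, 1.0 / V), cache_probs(h_t, past_h, past_x, V))
print(p[past_x[3]] > 1.0 / V)  # the matching stored word should get boosted
```

Because the memory is just an array of past states, a query costs one matrix-vector product over the stored entries and nothing is backpropagated through the cache, which is consistent with the review's point about scaling to large memories.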
Improving Neural Language Models with a Continuous Cache | We propose an extension to neural network language models to adapt their prediction to the recent history. Our model is a simplified version of memory augmented networks, which stores past hidden activations as memory and accesses them through a dot product with the current hidden activation. This mechanism is very efficient and scales to very large memory sizes. We also draw a link between the use of external memory in neural network and cache models used with count based language models. We demonstrate on several language model datasets that our approach performs significantly better than recent memory augmented networks. | This paper not only shows that a cache model on top of a pre-trained RNN can improve language modeling, but also illustrates a shortcoming of standard RNN models in that they are unable to capture this information themselves. Regardless of whether this is due to the small BPTT window (35 is standard) or an issue with the capability of the RNN itself, this is a useful insight. This technique is an interesting variation of memory augmented neural networks with a number of advantages to many of the standard memory augmented architectures. They illustrate the neural cache model on not just the Penn Treebank but also WikiText-2 and WikiText-103, two datasets specifically tailored to illustrating long term dependencies with a more realistic vocabulary size. I have not seen the ability to refer up to 2000 words back previously. I recommend this paper be accepted. There is additionally extensive analysis of the hyperparameters on these datasets, providing further insight. I recommend this interesting and well analyzed paper be accepted. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nImproving Neural Language Models with a Continuous Cache\nWe propose an extension to neural network language models to adapt their prediction to the recent history. Our model is a simplified version of memory augmented networks, which stores past hidden activations as memory and accesses them through a dot product with the current hidden activation. This mechanism is very efficient and scales to very large memory sizes. We also draw a link between the use of external memory in neural network and cache models used with count based language models. We demonstrate on several language model datasets that our approach performs significantly better than recent memory augmented networks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper not only shows that a cache model on top of a pre-trained RNN can improve language modeling, but also illustrates a shortcoming of standard RNN models in that they are unable to capture this information themselves. Regardless of whether this is due to the small BPTT window (35 is standard) or an issue with the capability of the RNN itself, this is a useful insight. This technique is an interesting variation of memory augmented neural networks with a number of advantages to many of the standard memory augmented architectures. They illustrate the neural cache model on not just the Penn Treebank but also WikiText-2 and WikiText-103, two datasets specifically tailored to illustrating long term dependencies with a more realistic vocabulary size. I have not seen the ability to refer up to 2000 words back previously. I recommend this paper be accepted. There is additionally extensive analysis of the hyperparameters on these datasets, providing further insight. I recommend this interesting and well analyzed paper be accepted.",
"role": "assistant"
}
]
| [
"This paper not only shows that a cache model on top of a pre-trained RNN can improve language modeling, but also illustrates a shortcoming of standard RNN models in that they are unable to capture this information themselves.",
"Regardless of whether this is due to the small BPTT window (35 is standard) or an issue with the capability of the RNN itself, this is a useful insight.",
"This technique is an interesting variation of memory augmented neural networks with a number of advantages to many of the standard memory augmented architectures.",
"They illustrate the neural cache model on not just the Penn Treebank but also WikiText-2 and WikiText-103, two datasets specifically tailored to illustrating long term dependencies with a more realistic vocabulary size.",
"I have not seen the ability to refer up to 2000 words back previously.",
"I recommend this paper be accepted.",
"There is additionally extensive analysis of the hyperparameters on these datasets, providing further insight.",
"I recommend this interesting and well analyzed paper be accepted."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 6,
"praise": 5,
"presentation_and_reporting": 3,
"results_and_discussion": 0,
"suggestion_and_solution": 2,
"total": 8
} | 0.25 | 0 | 0.25 | 0.75 | 0.625 | 0.375 | 0 | 0.25 | 8 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 2.5 | 2.105439 | 0.394561 |
Program Synthesis for Character Level Language Modeling | We propose a statistical model applicable to character level language modeling and show that it is a good fit for both, program source code and English text. The model is parameterized by a program from a domain-specific language (DSL) that allows expressing non-trivial data dependencies. Learning is done in two phases: (i) we synthesize a program from the DSL, essentially learning a good representation for the data, and (ii) we learn parameters from the training data - the process is done via counting, as in simple language models such as n-gram. Our experiments show that the precision of our model is comparable to that of neural networks while sharing a number of advantages with n-gram models such as fast query time and the capability to quickly add and remove training data samples. Further, the model is parameterized by a program that can be manually inspected, understood and updated, addressing a major problem of neural networks. | The authors propose a method for language modeling by first generating a program from a DSL, then learning the count-based parameters of that program. Pros include: The proposed method is innovative and highly different from standard LSTM-based approaches of late. The model should also be much quicker to apply at query time. Strong empirical results are obtained on modeling code, though there is some gap between the synthesis method and neural methods on the Hutter task. A detailed description of the language syntax is provided. Cons/suggestions: - The synthesis procedure using MCMC is left very vague, even though being able to make this procedure efficient is one of the key questions. - The work builds on work from the PL literature; surely the related work could also be expanded and this work better put in context. - More compact/convincing examples of human interpretability would be helpful. Other comments - Training time evaluation in Table 1 should give basic information such as whether training was done on GPU/CPU, CPU specs, etc. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nProgram Synthesis for Character Level Language Modeling\nWe propose a statistical model applicable to character level language modeling and show that it is a good fit for both, program source code and English text. The model is parameterized by a program from a domain-specific language (DSL) that allows expressing non-trivial data dependencies. Learning is done in two phases: (i) we synthesize a program from the DSL, essentially learning a good representation for the data, and (ii) we learn parameters from the training data - the process is done via counting, as in simple language models such as n-gram. Our experiments show that the precision of our model is comparable to that of neural networks while sharing a number of advantages with n-gram models such as fast query time and the capability to quickly add and remove training data samples. Further, the model is parameterized by a program that can be manually inspected, understood and updated, addressing a major problem of neural networks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors propose a method for language modeling by first generating a program from a DSL, then learning the count-based parameters of that program. Pros include: The proposed method is innovative and highly different from standard LSTM-based approaches of late. The model should also be much quicker to apply at query time. Strong empirical results are obtained on modeling code, though there is some gap between the synthesis method and neural methods on the Hutter task. A detailed description of the language syntax is provided. Cons/suggestions: - The synthesis procedure using MCMC is left very vague, even though being able to make this procedure efficient is one of the key questions. - The work builds on work from the PL literature; surely the related work could also be expanded and this work better put in context. - More compact/convincing examples of human interpretability would be helpful. Other comments - Training time evaluation in Table 1 should give basic information such as whether training was done on GPU/CPU, CPU specs, etc.",
"role": "assistant"
}
]
| [
"The authors propose a method for language modeling by first generating a program from a DSL, then learning the count-based parameters of that program.",
"Pros include: The proposed method is innovative and highly different from standard LSTM-based approaches of late.",
"The model should also be much quicker to apply at query time.",
"Strong empirical results are obtained on modeling code, though there is some gap between the synthesis method and neural methods on the Hutter task.",
"A detailed description of the language syntax is provided.",
"Cons/suggestions: - The synthesis procedure using MCMC is left very vague, even though being able to make this procedure efficient is one of the key questions.",
"- The work builds on work from the PL literature; surely the related work could also be expanded and this work better put in context.",
"- More compact/convincing examples of human interpretability would be helpful.",
"Other comments - Training time evaluation in Table 1 should give basic information such as whether training was done on GPU/CPU, CPU specs, etc."
] | {
"criticism": 1,
"example": 1,
"importance_and_relevance": 2,
"materials_and_methods": 6,
"praise": 1,
"presentation_and_reporting": 4,
"results_and_discussion": 1,
"suggestion_and_solution": 5,
"total": 9
} | 0.111111 | 0.111111 | 0.222222 | 0.666667 | 0.111111 | 0.444444 | 0.111111 | 0.555556 | 9 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.333333 | 2.080814 | 0.252519 |
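Editorial note: the record above describes a two-phase scheme: synthesize a program that maps a history to a conditioning context, then estimate next-character probabilities by counting, as in an n-gram model. The sketch below illustrates phase (ii) with a hand-written stand-in for phase (i); `context_fn` is a hypothetical example (a real synthesized DSL program could encode much richer dependencies), and the add-one smoothing is an assumption.

```python
from collections import Counter, defaultdict

def context_fn(history):
    # Hypothetical stand-in for a synthesized DSL program; here it simply
    # keeps the last two characters, like a trigram model's context.
    return history[-2:]

def train(text):
    counts = defaultdict(Counter)
    for i in range(len(text)):
        counts[context_fn(text[:i])][text[i]] += 1   # learning by counting
    return counts

def prob(counts, history, ch, alphabet):
    c = counts[context_fn(history)]
    # Add-one smoothing (an assumption; the record does not specify it).
    return (c[ch] + 1) / (sum(c.values()) + len(alphabet))

counts = train("abracadabra abracadabra")
print(prob(counts, "abracadab", "r", alphabet="abcdr "))   # 0.5 here
```

Because the learned parameters are plain counts, adding or removing a training sample is a constant-time counter update, which matches the abstract's claim about quickly adding and removing data.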
Episodic Exploration for Deep Deterministic Policies for StarCraft Micromanagement | We consider scenarios from the real-time strategy game StarCraft as benchmarks for reinforcement learning algorithms. We focus on micromanagement, that is, the short-term, low-level control of team members during a battle. We propose several scenarios that are challenging for reinforcement learning algorithms because the state-action space is very large, and there is no obvious feature representation for the value functions. We describe our approach to tackle the micromanagement scenarios with deep neural network controllers from raw state features given by the game engine. We also present a heuristic reinforcement learning algorithm which combines direct exploration in the policy space and backpropagation. This algorithm collects traces for learning using deterministic policies, which appears much more efficient than, e.g., ε-greedy exploration. Experiments show that this algorithm allows to successfully learn non-trivial strategies for scenarios with armies of up to 15 agents, where both Q-learning and REINFORCE struggle. | This work introduces some StarCraft micro-management tasks (controlling individual units during a battle). These tasks are difficult for recent DeepRL methods due to high-dimensional, variable action spaces (the action space is the task of each unit, the number of units may vary). In such large action spaces, simple exploration strategies (such as epsilon-greedy) perform poorly. They introduce a novel algorithm ZO to tackle this problem. This algorithm combines ideas from policy gradient, deep networks trained with backpropagation for state embedding and gradient free optimization. The algorithm is well explained and is compared to some existing baselines. Due to the gradient free optimization providing for much better structured exploration, it performs far better. This is a well-written paper and a novel algorithm which is applied to a very relevant problem. After the success of DeepRL approaches at learning in large state spaces such as visual environment, there is significant interest in applying RL to more structured state and action spaces. The tasks introduced here are interesting environments for these sorts of problems. It would be helpful if the authors were able to share the source code / specifications for their tasks, to allow other groups to compare against this work. I found section 5 (the details of the raw inputs and feature encodings) somewhat difficult to understand. In addition to clarifying, the authors might wish to consider whether they could provide the source code to their algorithm or at least the encoder to allow careful comparisons by other work. Although discussed, there is no baseline comparison with value-based approaches which attempt to do better exploration by modeling uncertainty (such as Bootstrapped DQN). It would be useful to understand how such approaches, which also promise better exploration, compare. It would also be interesting to discuss whether action embedding models such as energy-based approaches (e.g. https://arxiv.org/pdf/1512.07679v2.pdf, http://www.jmlr.org/papers/volume5/sallans04a/sallans04a.pdf ) or continuous action embeddings (https://arxiv.org/pdf/1512.07679v2.pdf ) would provide an alternative approach for structured exploration in these action spaces. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEpisodic Exploration for Deep Deterministic Policies for StarCraft Micromanagement\nWe consider scenarios from the real-time strategy game StarCraft as benchmarks for reinforcement learning algorithms. We focus on micromanagement, that is, the short-term, low-level control of team members during a battle. We propose several scenarios that are challenging for reinforcement learning algorithms because the state- action space is very large, and there is no obvious feature representation for the value functions. We describe our approach to tackle the micromanagement scenarios with deep neural network controllers from raw state features given by the game engine. We also present a heuristic reinforcement learning algorithm which combines direct exploration in the policy space and backpropagation. This algorithm collects traces for learning using deterministic policies, which appears much more efficient than, e.g., ?-greedy exploration. Experiments show that this algorithm allows to successfully learn non-trivial strategies for scenarios with armies of up to 15 agents, where both Q-learning and REINFORCE struggle.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis work introduces some StarCraft micro-management tasks (controlling individual units during a battle). These tasks are difficult for recent DeepRL methods due to high-dimensional, variable action spaces (the action space is the task of each unit, the number of units may vary). In such large action spaces, simple exploration strategies (such as epsilon-greedy) perform poorly. They introduce a novel algorithm ZO to tackle this problem. This algorithm combines ideas from policy gradient, deep networks trained with backpropagation for state embedding and gradient free optimization. The algorithm is well explained and is compared to some existing baselines. Due to the gradient free optimization providing for much better structured exploration, it performs far better. This is a well-written paper and a novel algorithm which is applied to a very relevant problem. After the success of DeepRL approaches at learning in large state spaces such as visual environment, there is significant interest in applying RL to more structured state and action spaces. The tasks introduced here are interesting environments for these sorts of problems. It would be helpful if the authors were able to share the source code / specifications for their tasks, to allow other groups to compare against this work. I found section 5 (the details of the raw inputs and feature encodings) somewhat difficult to understand. In addition to clarifying, the authors might wish to consider whether they could provide the source code to their algorithm or at least the encoder to allow careful comparisons by other work. Although discussed, there is no baseline comparison with valued based approaches with attempt to do better exploration by modeling uncertainty (such as Bootstrapped DQN). It would useful to understand how such approaches, which also promise better exploration, compare. It would also be interesting to discuss whether action embedding models such as energy-based approaches (e.g. 
https://arxiv.org/pdf/1512.07679v2.pdf, http://www.jmlr.org/papers/volume5/sallans04a/sallans04a.pdf ) or continuous action embeddings (https://arxiv.org/pdf/1512.07679v2.pdf ) would provide an alternative approach for structured exploration in these action spaces.",
"role": "assistant"
}
]
| [
"This work introduces some StarCraft micro-management tasks (controlling individual units during a battle).",
"These tasks are difficult for recent DeepRL methods due to high-dimensional, variable action spaces (the action space is the task of each unit, the number of units may vary).",
"In such large action spaces, simple exploration strategies (such as epsilon-greedy) perform poorly.",
"They introduce a novel algorithm ZO to tackle this problem.",
"This algorithm combines ideas from policy gradient, deep networks trained with backpropagation for state embedding and gradient free optimization.",
"The algorithm is well explained and is compared to some existing baselines.",
"Due to the gradient free optimization providing for much better structured exploration, it performs far better.",
"This is a well-written paper and a novel algorithm which is applied to a very relevant problem.",
"After the success of DeepRL approaches at learning in large state spaces such as visual environment, there is significant interest in applying RL to more structured state and action spaces.",
"The tasks introduced here are interesting environments for these sorts of problems.",
"It would be helpful if the authors were able to share the source code / specifications for their tasks, to allow other groups to compare against this work.",
"I found section 5 (the details of the raw inputs and feature encodings) somewhat difficult to understand.",
"In addition to clarifying, the authors might wish to consider whether they could provide the source code to their algorithm or at least the encoder to allow careful comparisons by other work.",
"Although discussed, there is no baseline comparison with valued based approaches with attempt to do better exploration by modeling uncertainty (such as Bootstrapped DQN).",
"It would useful to understand how such approaches, which also promise better exploration, compare.",
"It would also be interesting to discuss whether action embedding models such as energy-based approaches (e.g.",
"https://arxiv.org/pdf/1512.07679v2.pdf, http://www.jmlr.org/papers/volume5/sallans04a/sallans04a.pdf ) or continuous action embeddings (https://arxiv.org/pdf/1512.07679v2.pdf ) would provide an alternative approach for structured exploration in these action spaces."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 6,
"materials_and_methods": 15,
"praise": 4,
"presentation_and_reporting": 4,
"results_and_discussion": 2,
"suggestion_and_solution": 4,
"total": 17
} | 0.176471 | 0 | 0.352941 | 0.882353 | 0.235294 | 0.235294 | 0.117647 | 0.235294 | 17 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.235294 | 1.982775 | 0.252519 |
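Editorial note: the review above attributes the method's gains to combining backpropagation (for the state embedding) with gradient-free search over policy parameters, explored episode by episode. The sketch below shows a generic zeroth-order (finite-difference) update on an episodic return; it is a textbook estimator meant to illustrate the idea, not the paper's exact ZO algorithm, and `delta`, `lr`, and the toy return function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def zeroth_order_step(w, episode_return, delta=0.1, lr=0.01):
    # Structured exploration: sample one random direction u, run a full
    # deterministic episode with the perturbed parameters, and move w along
    # u in proportion to the finite-difference return estimate.
    u = rng.normal(size=w.shape)
    u /= np.linalg.norm(u)
    g_hat = (episode_return(w + delta * u) - episode_return(w)) / delta
    return w + lr * g_hat * u

# Toy objective standing in for an episode's return (e.g., a battle score);
# in the described setup, w would sit on top of a backprop-trained embedding.
target = np.ones(5)
episode_return = lambda w: -np.sum((w - target) ** 2)

w = np.zeros(5)
for _ in range(3000):
    w = zeroth_order_step(w, episode_return)
print(np.round(w, 2))  # should approach the target under this toy objective
```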
Soft Weight-Sharing for Neural Network Compression | The success of deep learning in numerous application domains created the desire to run and train them on mobile devices. This however, conflicts with their computationally, memory and energy intense nature, leading to a growing interest in compression. Recent work by Han et al. (2016) propose a pipeline that involves retraining, pruning and quantization of neural network weights, obtaining state-of-the-art compression rates. In this paper, we show that competitive compression rates can be achieved by using a version of *soft weight-sharing* (Nowlan & Hinton, 1991). Our method achieves both quantization and pruning in one simple (re-)training procedure. This point of view also exposes the relation between compression and the minimum description length (MDL) principle. | This paper proposes to use an empirical Bayesian approach to learn the parameters of a neural network, and their priors. A mixture model prior over the weights leads to a clustering effect in the weight posterior distributions (which are approximated with delta peaks). This clustering effect can be exploited for parameter quantisation and compression of the network parameters. The authors show that this leads to compression rates and predictive accuracy comparable to related approaches. Earlier work [Han et al. 2015] is based on a three-stage process of pruning small magnitude weights, clustering the remaining ones, and updating the cluster centres to optimise performance. The current work provides a more principled approach that does not have such an ad-hoc multi-stage structure, but a single iterative optimisation process. A first experiment, described in section 6.1, shows that an empirical Bayes’ approach, without the use of hyper priors, already leads to a pronounced clustering effect and to setting many weights to zero. In particular a compression rate of 64.2 is obtained on the LeNet300-100 model. In section 6.1 the text refers to figure C; I suppose this should be figure 1. Section 6.2 describes an experiment where hyper-priors are used, and the parameters of these distributions, as well as other hyper-parameters such as the learning rates, are being optimised using Spearmint (Snoek et al., 2012). Figure 2 shows the performance of the different points in the hyper-parameter space that have been evaluated (each trained network gives an accuracy-compression-rate point in the graph). The text claims that the best results lie on a line; this seems a somewhat opportunistic interpretation given the limited data. Moreover, it would be useful to add a small discussion on whether such a linear relationship would be expected or not. Currently the results of this experiment lack interpretation. Section 6.3 describes results obtained for both CNN models and compares results to the recent results of (Han et al., 2015) and (Guo et al., 2016). Comparable results are obtained in terms of compression rate and accuracy. The authors state that their current algorithm is too slow to be useful for larger models such as VGG-19, but they do briefly report some results obtained for this model (but do not compare to related work). It would be useful here to explain what slows the training down with respect to standard training without the weight clustering approach, and how the proposed algorithm scales in terms of the relevant quantities of the data and the model. The contribution of this paper is mostly experimental, leveraging fairly standard ideas from empirical Bayesian learning to introduce weight clustering effects in CNN training. This being said, it is an interesting result that such a relatively straightforward approach leads to results that are on par with state-of-the-art, but more ad-hoc, network compression techniques. The paper could be improved by clearly describing the algorithm used for training, and how it scales to large networks and datasets. Another point that would deserve further discussion is how the hyper-parameter search is performed (not using test data, I assume), and how the compared methods dealt with the search over hyper-parameters to determine the accuracy-compression tradeoff. Ideally, I think, methods should be evaluated across different points on this trade-off. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSoft Weight-Sharing for Neural Network Compression\nThe success of deep learning in numerous application domains created the desire to run and train them on mobile devices. This however, conflicts with their computationally, memory and energy intense nature, leading to a growing interest in compression. Recent work by Han et al. (2016) propose a pipeline that involves retraining, pruning and quantization of neural network weights, obtaining state-of-the-art compression rates. In this paper, we show that competitive compression rates can be achieved by using a version of *soft weight-sharing* (Nowlan & Hinton, 1991). Our method achieves both quantization and pruning in one simple (re-)training procedure. This point of view also exposes the relation between compression and the minimum description length (MDL) principle.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes to use an empirical Bayesian approach to learn the parameters of a neural network, and their priors. A mixture model prior over the weights leads to a clustering effect in the weight posterior distributions (which are approximated with delta peaks). This clustering effect can exploited for parameter quantisation and compression of the network parameters. The authors show that this leads to compression rates and predictive accuracy comparable to related approaches. Earlier work [Han et al. 2015] is based on a three-stage process of pruning small magnitude weights, clustering the remaining ones, and updating the cluster centres to optimise performance. The current work provides a more principled approach that does not have such an ad-hoc multi-stage structure, but a single iterative optimisation process. A first experiment, described in section 6.1 shows that an empirical Bayes’ approach, without the use of hyper priors, already leads to a pronounced clustering effect and to setting many weights to zero. In particular a compression rate of 64.2 is obtained on the LeNet300-100 model. In section 6.1 the text refers to figure C, I suppose this should be figure 1. Section 6.2 describes an experiment where hyper-priors are used, and the parameters of these distributions, as well as other hyper-parameters such as the learning rates, are being optimised using Spearmint (Snoek et al., 2012). Figure 2 shows the performance of the different points in the hyper-parameter space that have been evaluated (each trained network gives an accuracy-compressionrate point in the graph). The text claims that best results lie on a line, this seems a little opportunistic interpretation given the limited data. Moreover, it would be useful to add a small discussion on whether such a linear relationship would be expected or not. Currently the results of this experiment lack interpretation. Section 6.3 describes results obtained for both CNN models and compares results to the recent results of (Han et al., 2015) and (Guo et al., 2016). Comparable results are obtained in terms of compression rate and accuracy. 
The authors state that their current algorithm is too slow to be useful for larger models such as VGG-19, but they do briefly report some results obtained for this model (but do not compare to related work). It would be useful here to explain what slows the training down with respect to standard training without the weight clustering approach, and how the proposed algorithm scales in terms of the relevant quantities of the data and the model. The contribution of this paper is mostly experimental, leveraging fairly standard ideas from empirical Bayesian learning to introduce weight clustering effects in CNN training. This being said, it is an interesting result that such a relatively straightforward approach leads to results that are on par with state-of-the-art, but more ad-hoc, network compression techniques. The paper could be improved by clearly describing the algorithm used for training, and how it scales to large networks and datasets. Another point that would deserve further discussion is how the hyper-parameter search is performed (not using test data, I assume), and how the compared methods dealt with the search over hyper-parameters to determine the accuracy-compression trade-off. Ideally, I think, methods should be evaluated across different points on this trade-off.",
"role": "assistant"
}
]
| [
"This paper proposes to use an empirical Bayesian approach to learn the parameters of a neural network, and their priors.",
"A mixture model prior over the weights leads to a clustering effect in the weight posterior distributions (which are approximated with delta peaks).",
"This clustering effect can exploited for parameter quantisation and compression of the network parameters.",
"The authors show that this leads to compression rates and predictive accuracy comparable to related approaches.",
"Earlier work [Han et al.",
"2015] is based on a three-stage process of pruning small magnitude weights, clustering the remaining ones, and updating the cluster centres to optimise performance.",
"The current work provides a more principled approach that does not have such an ad-hoc multi-stage structure, but a single iterative optimisation process.",
"A first experiment, described in section 6.1 shows that an empirical Bayes’ approach, without the use of hyper priors, already leads to a pronounced clustering effect and to setting many weights to zero.",
"In particular a compression rate of 64.2 is obtained on the LeNet300-100 model.",
"In section 6.1 the text refers to figure C, I suppose this should be figure 1.",
"Section 6.2 describes an experiment where hyper-priors are used, and the parameters of these distributions, as well as other hyper-parameters such as the learning rates, are being optimised using Spearmint (Snoek et al., 2012).",
"Figure 2 shows the performance of the different points in the hyper-parameter space that have been evaluated (each trained network gives an accuracy-compressionrate point in the graph).",
"The text claims that best results lie on a line, this seems a little opportunistic interpretation given the limited data.",
"Moreover, it would be useful to add a small discussion on whether such a linear relationship would be expected or not.",
"Currently the results of this experiment lack interpretation.",
"Section 6.3 describes results obtained for both CNN models and compares results to the recent results of (Han et al., 2015) and (Guo et al., 2016).",
"Comparable results are obtained in terms of compression rate and accuracy.",
"The authors state that their current algorithm is too slow to be useful for larger models such as VGG-19, but they do briefly report some results obtained for this model (but do not compare to related work).",
"It would be useful here to explain what slows the training down with respect to standard training without the weight clustering approach, and how the proposed algorithm scales in terms of the relevant quantities of the data and the model.",
"The contribution of this paper is mostly experimental, leveraging fairly standard ideas from empirical Bayesian learning to introduce weight clustering effects in CNN training.",
"This being said, it is an interesting result that such a relatively straightforward approach leads to results that are on par with state-of-the-art, but more ad-hoc, network compression techniques.",
"The paper could be improved by clearly describing the algorithm used for training, and how it scales to large networks and datasets.",
"Another point that would deserve further discussion is how the hyper-parameter search is performed ( not using test data I assume), and how the compared methods dealt with the search over hyper-parameters to determine the accuracy-compression tradeoff.",
"Ideally, I think, methods should be evaluated across different points on this trade-off."
] | {
"criticism": 3,
"example": 2,
"importance_and_relevance": 4,
"materials_and_methods": 19,
"praise": 2,
"presentation_and_reporting": 4,
"results_and_discussion": 15,
"suggestion_and_solution": 6,
"total": 24
} | 0.125 | 0.083333 | 0.166667 | 0.791667 | 0.083333 | 0.166667 | 0.625 | 0.25 | 24 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.291667 | 0.381993 | 1.909674 |
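For context on the technique discussed in the review above: soft weight-sharing places a mixture prior over all network weights and trains the mixture parameters jointly with the weights, so clustering (quantization) and zero-pinning (pruning) emerge from a single objective. The sketch below is a minimal, assumption-laden illustration of such a penalty in PyTorch; the function name, the log-std parameterization, and the weighting of the term against the task loss are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of a soft weight-sharing penalty: a K-component Gaussian
# mixture prior over the flattened weights, whose parameters (mixing logits,
# means, log-stds) are trained jointly with the weights.
import math
import torch

def mixture_prior_nll(weights, mix_logits, means, log_stds):
    """Mean negative log-likelihood of the weights under the mixture.
    Adding tau * this term to the task loss pulls weights toward a small
    set of shared values; pinning one component mean at zero encourages
    pruning."""
    w = weights.reshape(-1, 1)                       # (N, 1)
    log_pi = torch.log_softmax(mix_logits, dim=0)    # (K,)
    z = (w - means) / log_stds.exp()                 # (N, K) via broadcasting
    log_prob = -0.5 * z**2 - log_stds - 0.5 * math.log(2 * math.pi)
    return -torch.logsumexp(log_pi + log_prob, dim=1).mean()
```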
Soft Weight-Sharing for Neural Network Compression | The success of deep learning in numerous application domains created the desire to run and train them on mobile devices. This however, conflicts with their computationally, memory and energy intense nature, leading to a growing interest in compression. Recent work by Han et al. (2016) propose a pipeline that involves retraining, pruning and quantization of neural network weights, obtaining state-of-the-art compression rates. In this paper, we show that competitive compression rates can be achieved by using a version of *soft weight-sharing* (Nowlan & Hinton, 1991). Our method achieves both quantization and pruning in one simple (re-)training procedure. This point of view also exposes the relation between compression and the minimum description length (MDL) principle. | This paper revives a classic idea involving regularization for purposes of compression for modern CNN models on resource-constrained devices. Model compression is hot and we're in the midst of lots of people rediscovering old ideas in this area, so it is nice to have a paper that explicitly draws upon classic approaches from the early 90s to obtain competitive results on standard benchmarks. There's not too much to say here: this study is an instance of a simple idea applied effectively to an important problem, written up in an illuminating manner with appropriate references to classic approaches. The addition of the filter visualizations enhances the contribution. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSoft Weight-Sharing for Neural Network Compression\nThe success of deep learning in numerous application domains created the desire to run and train them on mobile devices. This however, conflicts with their computationally, memory and energy intense nature, leading to a growing interest in compression. Recent work by Han et al. (2016) propose a pipeline that involves retraining, pruning and quantization of neural network weights, obtaining state-of-the-art compression rates. In this paper, we show that competitive compression rates can be achieved by using a version of *soft weight-sharing* (Nowlan & Hinton, 1991). Our method achieves both quantization and pruning in one simple (re-)training procedure. This point of view also exposes the relation between compression and the minimum description length (MDL) principle.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper revives a classic idea involving regularization for purposes of compression for modern CNN models on resource constrained devices. Model compression is hot and we*re in the midst of lots of people rediscovering old ideas in this area so it is nice to have a paper that explicitly draws upon classic approaches from the early 90s to obtain competitive results on standard benchmarks. There*s not too much to say here: this study is an instance of a simple idea applied effectively to an important problem, written up in an illuminating manner with appropriate references to classic approaches. The addition of the filter visualizations enhances the contribution.",
"role": "assistant"
}
]
| [
"This paper revives a classic idea involving regularization for purposes of compression for modern CNN models on resource constrained devices.",
"Model compression is hot and we*re in the midst of lots of people rediscovering old ideas in this area so it is nice to have a paper that explicitly draws upon classic approaches from the early 90s to obtain competitive results on standard benchmarks.",
"There*s not too much to say here: this study is an instance of a simple idea applied effectively to an important problem, written up in an illuminating manner with appropriate references to classic approaches.",
"The addition of the filter visualizations enhances the contribution."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 3,
"praise": 1,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 4
} | 0.25 | 0 | 0.75 | 0.75 | 0.25 | 0.25 | 0.25 | 0.25 | 4 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.75 | 1.471623 | 1.278377 |
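Both reviews of this paper touch on how the learned mixture yields quantization. The step below shows, as a hypothetical sketch only, how trained weights would be collapsed onto component centres afterwards, so the network needs just K centres plus per-weight indices; the responsibility computation and tie-breaking are assumptions, not the paper's procedure.

```python
# Minimal sketch of the post-training quantization step implied by a
# mixture prior: assign each weight to its most responsible component
# and replace it with that component's mean.
import torch

def quantize_to_components(weights, means, log_stds, log_pi):
    w = weights.reshape(-1, 1)
    log_resp = (log_pi - log_stds
                - 0.5 * ((w - means) / log_stds.exp()) ** 2)  # unnormalized
    idx = log_resp.argmax(dim=1)                              # (N,) component ids
    return means[idx].reshape(weights.shape), idx
```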
Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is equipped with a dynamic long-term memory which allows it to maintain and update a representation of the state of the world as it receives new data. For language understanding tasks, it can reason on-the-fly as it reads text, not just when it is required to answer a question or respond as is the case for a Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed size memory and can learn to perform location and content-based read and write operations. However, unlike those models it has a simple parallel architecture in which several memory locations can be updated simultaneously. The EntNet sets a new state-of-the-art on the bAbI tasks, and is the first method to solve all the tasks in the 10k training examples setting. We also demonstrate that it can solve a reasoning task which requires a large number of supporting facts, which other methods are not able to solve, and can generalize past its training horizon. It can also be practically used on large scale datasets such as Children’s Book Test, where it obtains competitive performance, reading the story in a single pass. | The paper proposes a multi-memory mechanism that memorizes different pieces of information in different components/entities. It could be considered a mixture model within an RNN. This is a very interesting model, and the results are convincing. A limitation is that we do not know how to generalize to unseen entities or how to visualize what entities the model has learned. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTracking the World State with Recurrent Entity Networks\nWe introduce a new model, the Recurrent Entity Network (EntNet). It is equipped with a dynamic long-term memory which allows it to maintain and update a rep- resentation of the state of the world as it receives new data. For language under- standing tasks, it can reason on-the-fly as it reads text, not just when it is required to answer a question or respond as is the case for a Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed size memory and can learn to perform location and content-based read and write operations. However, unlike those models it has a simple parallel architecture in which several memory loca- tions can be updated simultaneously. The EntNet sets a new state-of-the-art on the bAbI tasks, and is the first method to solve all the tasks in the 10k training examples setting. We also demonstrate that it can solve a reasoning task which requires a large number of supporting facts, which other methods are not able to solve, and can generalize past its training horizon. It can also be practically used on large scale datasets such as Children’s Book Test, where it obtains competitive performance, reading the story in a single pass.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposed a multi-memory mechanism that memorizes different information into different components/entities. It could be considered as a mixture model in RNN. This is a very interesting model and result is convincing. A limitation is that we do not know how to generalize to some unseen entities and how to visualize what entities the model learned.",
"role": "assistant"
}
]
| [
"The paper proposed a multi-memory mechanism that memorizes different information into different components/entities.",
"It could be considered as a mixture model in RNN.",
"This is a very interesting model and result is convincing.",
"A limitation is that we do not know how to generalize to some unseen entities and how to visualize what entities the model learned."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 4,
"praise": 1,
"presentation_and_reporting": 0,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 4
} | 0.25 | 0 | 0.25 | 1 | 0.25 | 0 | 0.25 | 0.25 | 4 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.25 | 0.971623 | 1.278377 |
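The "simple parallel architecture" in the abstract, and the multi-memory mechanism the review summarizes, come down to a gated write applied to all memory slots at once. The sketch below follows the published EntNet equations at a high level (a content-plus-location gate, a candidate vector, an additive write, renormalization as forgetting), but the shapes, the tanh nonlinearity, and the epsilon term are assumptions for illustration, not the authors' code.

```python
# Minimal sketch of one EntNet-style parallel memory update. h holds M memory
# slots, w the corresponding slot keys, s the encoded input; U, V, W are
# (d, d) matrices shared across slots.
import torch

def entnet_update(h, w, s, U, V, W, eps=1e-8):
    """h, w: (M, d); s: (d,). Returns the updated (M, d) memory."""
    gate = torch.sigmoid(h @ s + w @ s)              # (M,) content + location gate
    cand = torch.tanh(h @ U.T + w @ V.T + s @ W.T)   # (M, d) candidate write
    h = h + gate.unsqueeze(1) * cand                 # all slots updated at once
    return h / (h.norm(dim=1, keepdim=True) + eps)   # normalization as forgetting
```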
Deep Predictive Coding Networks for Video Prediction and Unsupervised Learning | While great strides have been made in using deep learning algorithms to solve supervised learning tasks, the problem of unsupervised learning - leveraging unlabeled examples to learn about the structure of a domain - remains a difficult unsolved challenge. Here, we explore prediction of future frames in a video sequence as an unsupervised learning rule for learning about the structure of the visual world. We describe a predictive neural network (*PredNet*) architecture that is inspired by the concept of *predictive coding* from the neuroscience literature. These networks learn to predict future frames in a video sequence, with each layer in the network making local predictions and only forwarding deviations from those predictions to subsequent network layers. We show that these networks are able to robustly learn to predict the movement of synthetic (rendered) objects, and that in doing so, the networks learn internal representations that are useful for decoding latent object parameters (e.g. pose) that support object recognition with fewer training views. We also show that these networks can scale to complex natural image streams (car-mounted camera videos), capturing key aspects of both egocentric movement and the movement of objects in the visual scene, and the representation learned in this setting is useful for estimating the steering angle. These results suggest that prediction represents a powerful framework for unsupervised learning, allowing for implicit learning of object and scene structure. | Learning about the physical structure and semantics of the world from video (without supervision) is a very hot area in computer vision and machine learning. In this paper, the authors investigate how the prediction of future image frames (inherently unsupervised) can help to deduce an object's structure and its properties (in this case single-object pose, category, and steering angle, after a supervised linear readout step). I enjoyed reading this paper; it is clear, interesting, and proposes an original network architecture (PredNet) for video frame prediction that has produced promising results on both synthetic and natural images. Moreover, the extensive experimental evaluation and analysis the authors provide puts it on solid ground to which others can compare. The weaknesses: - the link to predictive coding should be better explained in the paper if it is to be used as a motivation for the PredNet model. - any idea that the proposed method is learning an implicit 'model' of the 'objects' that make up the 'scene' is vague and far-fetched, but it sounds great. Minor comment: Next to the number of labeled training examples (Fig. 5), it would be interesting to see how much unsupervised training data was used to train your representations. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results.
Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDeep Predictive Coding Networks for Video Prediction and Unsupervised Learning\nWhile great strides have been made in using deep learning algorithms to solve supervised learning tasks, the problem of unsupervised learning - leveraging unlabeled examples to learn about the structure of a domain - remains a difficult unsolved challenge. Here, we explore prediction of future frames in a video sequence as an unsupervised learning rule for learning about the structure of the visual world. We describe a predictive neural network (*PredNet*) architecture that is inspired by the concept of *predictive coding* from the neuroscience literature. These networks learn to predict future frames in a video sequence, with each layer in the network making local predictions and only forwarding deviations from those predictions to subsequent network layers. We show that these networks are able to robustly learn to predict the movement of synthetic (rendered) objects, and that in doing so, the networks learn internal representations that are useful for decoding latent object parameters (e.g. pose) that support object recognition with fewer training views. We also show that these networks can scale to complex natural image streams (car-mounted camera videos), capturing key aspects of both egocentric movement and the movement of objects in the visual scene, and the representation learned in this setting is useful for estimating the steering angle. These results suggest that prediction represents a powerful framework for unsupervised learning, allowing for implicit learning of object and scene structure.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nLearning about the physical structure and semantics of the world from video (without supervision) is a very hot area in computer vision and machine learning. In this paper, the authors investigate how the prediction of future image frames (inherently unsupervised) can help to deduce object/s structure and it*s properties (in this case single object pose, category, and steering angle, (after a supervised linear readout step)) I enjoyed reading this paper, it is clear, interesting and proposes an original network architecture (PredNet) for video frame prediction that has produced promising results on both synthetic and natural images. Moreover, the extensive experimental evaluation and analysis the authors provide puts it on solid ground to which others can compare. The weaknesses: - the link to predictive coding should be better explained in the paper if it is to be used as a motivation for the prednet model. - any idea that the proposed method is learning an implicit `model* of the `objects* that make up the `scene* is vague and far fetched, but it sounds great. Minor comment: Next to the number of labeled training examples (Fig.5), it would be interesting to see how much unsupervised training data was used to train your representations.",
"role": "assistant"
}
]
| [
"Learning about the physical structure and semantics of the world from video (without supervision) is a very hot area in computer vision and machine learning.",
"In this paper, the authors investigate how the prediction of future image frames (inherently unsupervised) can help to deduce object/s structure and it*s properties (in this case single object pose, category, and steering angle, (after a supervised linear readout step)) I enjoyed reading this paper, it is clear, interesting and proposes an original network architecture (PredNet) for video frame prediction that has produced promising results on both synthetic and natural images.",
"Moreover, the extensive experimental evaluation and analysis the authors provide puts it on solid ground to which others can compare.",
"The weaknesses: - the link to predictive coding should be better explained in the paper if it is to be used as a motivation for the prednet model.",
"- any idea that the proposed method is learning an implicit `model* of the `objects* that make up the `scene* is vague and far fetched, but it sounds great.",
"Minor comment: Next to the number of labeled training examples (Fig.5), it would be interesting to see how much unsupervised training data was used to train your representations."
] | {
"criticism": 1,
"example": 1,
"importance_and_relevance": 2,
"materials_and_methods": 5,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 1,
"suggestion_and_solution": 2,
"total": 6
} | 0.166667 | 0.166667 | 0.333333 | 0.833333 | 0.166667 | 0.333333 | 0.166667 | 0.333333 | 6 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.5 | 1.726661 | 0.773339 |
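The core mechanism the abstract and review describe — each layer forwarding only deviations from its own predictions — reduces to a small error unit. Below is a minimal sketch assuming 4-D image tensors and channel-wise concatenation; the ConvLSTM that produces the prediction and the pooling between layers in the full architecture are omitted.

```python
# Minimal sketch of the PredNet error unit: the layer output is the
# concatenation of rectified positive and negative deviations between the
# target activation a and the layer's prediction a_hat.
import torch
import torch.nn.functional as F

def prednet_error(a, a_hat):
    """a, a_hat: (batch, C, H, W) -> (batch, 2*C, H, W) error tensor."""
    return torch.cat([F.relu(a - a_hat), F.relu(a_hat - a)], dim=1)
```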
Diet Networks: Thin Parameters for Fat Genomics | Learning tasks such as those involving genomic data often poses a serious challenge: the number of input features can be orders of magnitude larger than the number of training examples, making it difficult to avoid overfitting, even when using the known regularization techniques. We focus here on tasks in which the input is a description of the genetic variation specific to a patient, the single nucleotide polymorphisms (SNPs), yielding millions of ternary inputs. Improving the ability of deep learning to handle such datasets could have an important impact in medical research, more specifically in precision medicine, where high-dimensional data regarding a particular patient is used to make predictions of interest. Even though the amount of data for such tasks is increasing, this mismatch between the number of examples and the number of inputs remains a concern. Naive implementations of classifier neural networks involve a huge number of free parameters in their first layer (number of input features times number of hidden units): each input feature is associated with as many parameters as there are hidden units. We propose a novel neural network parametrization which considerably reduces the number of free parameters. It is based on the idea that we can first learn or provide a distributed representation for each input feature (e.g. for each position in the genome where variations are observed in data), and then learn (with another neural network called the parameter prediction network) how to map a feature's distributed representation (based on the feature's identity not its value) to the vector of parameters specific to that feature in the classifier neural network (the weights which link the value of the feature to each of the hidden units). This approach views the problem of producing the parameters associated with each feature as a multi-task learning problem. We show experimentally on a population stratification task of interest to medical studies that the proposed approach can significantly reduce both the number of parameters and the error rate of the classifier. | The paper presents an application of deep learning to genomic SNP data with a comparison of possible approaches for dealing with the very high data dimensionality. The approach looks very interesting but the experiments are too limited to draw firm conclusions about the strengths of different approaches. The presentation would benefit from more precise math. Quality: The basic idea of the paper is interesting and the applied deep learning methodology appears reasonable. The experimental evaluation is rather weak as it only covers a single data set and a very limited number of cross validation folds. Given the significant variation in the performances of all the methods, it seems the differences between the better-performing methods are probably not statistically significant. More comprehensive empirical validation could clearly strengthen the paper. Clarity: The writing is generally good both in terms of the biology and ML, but more mathematical rigour would make it easier to understand precisely what was done. The different architectures are explained on an intuitive level and might benefit from a clear mathematical definition. I was ultimately left unsure of what the *raw end2end* model is - given so few parameters it cannot work on raw 300k dimensional input but I could not figure out what kind of embedding was used. The results in Fig.
3 might be clearer if scaled so that maximum for each class is 1 to avoid confounding from different numbers of subjects in different classes. In the text, please use the standard italics math font for all symbols such as N, N_d, ... Originality: The application and the approach appear quite novel. Significance: There is clearly strong interest for deep learning in the genomics area and the paper seeks to address some of the major bottlenecks here. It is too early to tell whether the specific techniques proposed in the paper will be the ultimate solution, but at the very least the paper provides interesting new ideas for others to work on. Other comments: I think releasing the code as promised would be a must. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDiet Networks: Thin Parameters for Fat Genomics\nLearning tasks such as those involving genomic data often poses a serious challenge: the number of input features can be orders of magnitude larger than the number of training examples, making it difficult to avoid overfitting, even when using the known regularization techniques. We focus here on tasks in which the input is a description of the genetic variation specific to a patient, the single nucleotide polymorphisms (SNPs), yielding millions of ternary inputs. Improving the ability of deep learning to handle such datasets could have an important impact in medical research, more specifically in precision medicine, where high-dimensional data regarding a particular patient is used to make predictions of interest. Even though the amount of data for such tasks is increasing, this mismatch between the number of examples and the number of inputs remains a concern. Naive implementations of classifier neural networks involve a huge number of free parameters in their first layer (number of input features times number of hidden units): each input feature is associated with as many parameters as there are hidden units. We propose a novel neural network parametrization which considerably reduces the number of free parameters. It is based on the idea that we can first learn or provide a distributed representation for each input feature (e.g. for each position in the genome where variations are observed in data), and then learn (with another neural network called the parameter prediction network) how to map a feature*s distributed representation (based on the feature*s identity not its value) to the vector of parameters specific to that feature in the classifier neural network (the weights which link the value of the feature to each of the hidden units). This approach views the problem of producing the parameters associated with each feature as a multi-task learning problem. We show experimentally on a population stratification task of interest to medical studies that the proposed approach can significantly reduce both the number of parameters and the error rate of the classifier.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper presents an application of deep learning to genomic SNP data with a comparison of possible approaches for dealing with the very high data dimensionality. The approach looks very interesting but the experiments are too limited to draw firm conclusions about the strengths of different approaches. The presentation would benefit from more precise math. Quality: The basic idea of the paper is interesting and the applied deep learning methodology appears reasonable. The experimental evaluation is rather weak as it only covers a single data set and a very limited number of cross validation folds. Given the significant variation in the performances of all the methods, it seems the differences between the better-performing methods are probably not statistically significant. More comprehensive empirical validation could clearly strengthen the paper. Clarity: The writing is generally good both in terms of the biology and ML, but more mathematical rigour would make it easier to understand precisely what was done. The different architectures are explained on an intuitive level and might benefit from a clear mathematical definition. I was ultimately left unsure of what the *raw end2end* model is - given so few parameters it cannot work on raw 300k dimensional input but I could not figure out what kind of embedding was used. The results in Fig. 3 might be clearer if scaled so that maximum for each class is 1 to avoid confounding from different numbers of subjects in different classes. In the text, please use the standard italics math font for all symbols such as N, N_d, ... Originality: The application and the approach appear quite novel. Significance: There is clearly strong interest for deep learning in the genomics area and the paper seeks to address some of the major bottlenecks here. It is too early to tell whether the specific techniques proposed in the paper will be the ultimate solution, but at the very least the paper provides interesting new ideas for others to work on. Other comments: I think releasing the code as promised would be a must.",
"role": "assistant"
}
]
| [
"The paper presents an application of deep learning to genomic SNP data with a comparison of possible approaches for dealing with the very high data dimensionality.",
"The approach looks very interesting but the experiments are too limited to draw firm conclusions about the strengths of different approaches.",
"The presentation would benefit from more precise math.",
"Quality: The basic idea of the paper is interesting and the applied deep learning methodology appears reasonable.",
"The experimental evaluation is rather weak as it only covers a single data set and a very limited number of cross validation folds.",
"Given the significant variation in the performances of all the methods, it seems the differences between the better-performing methods are probably not statistically significant.",
"More comprehensive empirical validation could clearly strengthen the paper.",
"Clarity: The writing is generally good both in terms of the biology and ML, but more mathematical rigour would make it easier to understand precisely what was done.",
"The different architectures are explained on an intuitive level and might benefit from a clear mathematical definition.",
"I was ultimately left unsure of what the *raw end2end* model is - given so few parameters it cannot work on raw 300k dimensional input but I could not figure out what kind of embedding was used.",
"The results in Fig.",
"3 might be clearer if scaled so that maximum for each class is 1 to avoid confounding from different numbers of subjects in different classes.",
"In the text, please use the standard italics math font for all symbols such as N, N_d, ...",
"Originality: The application and the approach appear quite novel.",
"Significance: There is clearly strong interest for deep learning in the genomics area and the paper seeks to address some of the major bottlenecks here.",
"It is too early to tell whether the specific techniques proposed in the paper will be the ultimate solution, but at the very least the paper provides interesting new ideas for others to work on.",
"Other comments: I think releasing the code as promised would be a must."
] | {
"criticism": 5,
"example": 0,
"importance_and_relevance": 5,
"materials_and_methods": 12,
"praise": 7,
"presentation_and_reporting": 5,
"results_and_discussion": 3,
"suggestion_and_solution": 7,
"total": 17
} | 0.294118 | 0 | 0.294118 | 0.705882 | 0.411765 | 0.294118 | 0.176471 | 0.411765 | 17 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.588235 | 2.335716 | 0.252519 |
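The parameter-prediction idea in the abstract can be sketched compactly: instead of storing an n_features x n_hidden weight matrix as free parameters, a small network emits each feature's fan-in weights from a per-feature embedding. Everything below — the embedding source, layer sizes, and activations — is a hypothetical minimal rendering, not the paper's full model (which the abstract indicates also uses learned or provided feature representations and a multi-task view).

```python
# Minimal sketch of a Diet-Networks-style layer: a parameter-prediction MLP
# maps a fixed per-feature embedding (n_features, k) to that feature's
# fan-in weights, so the huge first-layer matrix is generated, not stored.
import torch
import torch.nn as nn

class DietLayer(nn.Module):
    def __init__(self, feat_emb, n_hidden, width=128):
        super().__init__()
        self.register_buffer("feat_emb", feat_emb)    # (n_features, k), precomputed
        self.pred = nn.Sequential(                    # parameter prediction network
            nn.Linear(feat_emb.shape[1], width), nn.ReLU(),
            nn.Linear(width, n_hidden))

    def forward(self, x):                             # x: (batch, n_features)
        W = self.pred(self.feat_emb)                  # (n_features, n_hidden)
        return torch.relu(x @ W)
```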
Diet Networks: Thin Parameters for Fat Genomics | Learning tasks such as those involving genomic data often poses a serious challenge: the number of input features can be orders of magnitude larger than the number of training examples, making it difficult to avoid overfitting, even when using the known regularization techniques. We focus here on tasks in which the input is a description of the genetic variation specific to a patient, the single nucleotide polymorphisms (SNPs), yielding millions of ternary inputs. Improving the ability of deep learning to handle such datasets could have an important impact in medical research, more specifically in precision medicine, where high-dimensional data regarding a particular patient is used to make predictions of interest. Even though the amount of data for such tasks is increasing, this mismatch between the number of examples and the number of inputs remains a concern. Naive implementations of classifier neural networks involve a huge number of free parameters in their first layer (number of input features times number of hidden units): each input feature is associated with as many parameters as there are hidden units. We propose a novel neural network parametrization which considerably reduces the number of free parameters. It is based on the idea that we can first learn or provide a distributed representation for each input feature (e.g. for each position in the genome where variations are observed in data), and then learn (with another neural network called the parameter prediction network) how to map a feature's distributed representation (based on the feature's identity not its value) to the vector of parameters specific to that feature in the classifier neural network (the weights which link the value of the feature to each of the hidden units). This approach views the problem of producing the parameters associated with each feature as a multi-task learning problem. We show experimentally on a population stratification task of interest to medical studies that the proposed approach can significantly reduce both the number of parameters and the error rate of the classifier. | The paper addresses the important problem (d>>n) in deep learning. The proposed approach, based on lower-dimensional feature embeddings, is reasonable and makes applying deep learning methods to data with large d possible. The paper is well written and the results show improvements over reasonable baselines. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important.
I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDiet Networks: Thin Parameters for Fat Genomics\nLearning tasks such as those involving genomic data often poses a serious challenge: the number of input features can be orders of magnitude larger than the number of training examples, making it difficult to avoid overfitting, even when using the known regularization techniques. We focus here on tasks in which the input is a description of the genetic variation specific to a patient, the single nucleotide polymorphisms (SNPs), yielding millions of ternary inputs. Improving the ability of deep learning to handle such datasets could have an important impact in medical research, more specifically in precision medicine, where high-dimensional data regarding a particular patient is used to make predictions of interest. Even though the amount of data for such tasks is increasing, this mismatch between the number of examples and the number of inputs remains a concern. Naive implementations of classifier neural networks involve a huge number of free parameters in their first layer (number of input features times number of hidden units): each input feature is associated with as many parameters as there are hidden units. We propose a novel neural network parametrization which considerably reduces the number of free parameters. It is based on the idea that we can first learn or provide a distributed representation for each input feature (e.g. for each position in the genome where variations are observed in data), and then learn (with another neural network called the parameter prediction network) how to map a feature*s distributed representation (based on the feature*s identity not its value) to the vector of parameters specific to that feature in the classifier neural network (the weights which link the value of the feature to each of the hidden units). This approach views the problem of producing the parameters associated with each feature as a multi-task learning problem. We show experimentally on a population stratification task of interest to medical studies that the proposed approach can significantly reduce both the number of parameters and the error rate of the classifier.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper addresses the important problem (d>>n) in deep learning. The proposed approach, based on lower-dimensional feature embeddings, is reasonable and makes applying deep learning methods to data with large d possible. The paper is well written and the results show improvements over reasonable baselines.",
"role": "assistant"
}
]
| [
"The paper addresses the important problem (d>>n) in deep learning.",
"The proposed approach, based on lower-dimensional feature embeddings, is reasonable and makes applying deep learning methods to data with large d possible.",
"The paper is well written and the results show improvements over reasonable baselines."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 2,
"praise": 3,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 0,
"total": 3
} | 0 | 0 | 0.333333 | 0.666667 | 1 | 0.333333 | 0.333333 | 0 | 3 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 2.666667 | 1.088424 | 1.578243 |
Deep Biaffine Attention for Neural Dependency Parsing | This paper builds off recent work from Kiperwasser & Goldberg (2016) using neural attention in a simple graph-based dependency parser. We use a larger but more thoroughly regularized parser than other recent BiLSTM-based approaches, with biaffine classifiers to predict arcs and labels. Our parser gets state of the art or near state of the art performance on standard treebanks for six different languages, achieving 95.7% UAS and 94.1% LAS on the most popular English PTB dataset. This makes it the highest-performing graph-based parser on this benchmark—outperforming Kiperwasser & Goldberg (2016) by 1.8% and 2.2%—and comparable to the highest performing transition-based parser (Kuncoro et al., 2016), which achieves 95.8% UAS and 94.6% LAS. We also show which hyperparameter choices had a significant effect on parsing accuracy, allowing us to achieve large gains over other graph-based approaches. | The paper proposes a new function for computing the arc score between two words in a sentence for dependency parsing. The proposed function is biaffine in the sense that it's a combination of a bilinear score function and a bias term playing the role of a prior. The paper reports new state-of-the-art dependency parsing performance on both the English PTB and the Chinese TB. The paper is very well written, with impressive experimental results and analysis. However, the idea is hardly novel with regard to the theme of the conference: the framework that the paper uses is from Kiperwasser & Goldberg (2016), and the use of a bilinear score function for attention is from Luong et al. (2015). Projecting BiLSTM outputs into different spaces using MLPs is a trivial step to make the model *deeper*, whereas adding linear bias terms isn't confirmed to work in the experiments (table 2 shows that diag bilinear performs close to biaffine). I think that this paper is better suited to NLP conferences. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDeep Biaffine Attention for Neural Dependency Parsing\nThis paper builds off recent work from Kiperwasser & Goldberg (2016) using neural attention in a simple graph-based dependency parser. We use a larger but more thoroughly regularized parser than other recent BiLSTM-based approaches, with biaffine classifiers to predict arcs and labels. Our parser gets state of the art or near state of the art performance on standard treebanks for six different languages, achieving 95.7% UAS and 94.1% LAS on the most popular English PTB dataset. This makes it the highest-performing graph-based parser on this benchmark—outperforming Kiperwasser & Goldberg (2016) by 1.8% and 2.2%—and comparable to the highest performing transition-based parser (Kuncoro et al., 2016), which achieves 95.8% UAS and 94.6% LAS. We also show which hyperparameter choices had a significant effect on parsing accuracy, allowing us to achieve large gains over other graph-based approaches.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes a new function for computing arc score between two words in a sentence for dependency parsing. The proposed function is biaffine in the sense that it*s a combination of a bilinear score function and a bias term playing a role as prior. The paper reports new state-of-the-art dependency parsing performances on both English PTB and Chinese TB. The paper is very well written with impressive experimental results and analysis. However, the idea is hardly novel regarding to the theme of the conference: the framework that the paper uses is from Kiperwasser & Goldberg (2016), the use of bilinear score function for attention is from Luong et al (2015). Projecting BiLSTM outputs into different spaces using MLPs is a trivial step to make the model *deeper*, whereas adding linear bias terms isn*t confirmed to work in the experiments (table 2 shows that diag bilinear has a close performance to biaffine). I think that this paper is more proper for NLP conferences.",
"role": "assistant"
}
]
| [
"The paper proposes a new function for computing arc score between two words in a sentence for dependency parsing.",
"The proposed function is biaffine in the sense that it*s a combination of a bilinear score function and a bias term playing a role as prior.",
"The paper reports new state-of-the-art dependency parsing performances on both English PTB and Chinese TB.",
"The paper is very well written with impressive experimental results and analysis.",
"However, the idea is hardly novel regarding to the theme of the conference: the framework that the paper uses is from Kiperwasser & Goldberg (2016), the use of bilinear score function for attention is from Luong et al (2015).",
"Projecting BiLSTM outputs into different spaces using MLPs is a trivial step to make the model *deeper*, whereas adding linear bias terms isn*t confirmed to work in the experiments (table 2 shows that diag bilinear has a close performance to biaffine).",
"I think that this paper is more proper for NLP conferences."
] | {
"criticism": 2,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 5,
"praise": 1,
"presentation_and_reporting": 3,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 7
} | 0.285714 | 0.142857 | 0.142857 | 0.714286 | 0.142857 | 0.428571 | 0.142857 | 0.142857 | 7 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.142857 | 1.57469 | 0.568167 |
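To make the 'bilinear score plus bias' formulation described in the review above concrete, here is a minimal sketch of a biaffine arc scorer in the spirit of the paper. The function name, the dimensions, and the use of plain NumPy are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def biaffine_arc_score(h_dep, h_head, U, u, b):
    """Biaffine score for one (dependent, head) pair: a bilinear term
    h_dep^T U h_head plus a linear 'prior' on the head, u^T h_head + b."""
    return h_dep @ U @ h_head + u @ h_head + b

# Toy usage: random vectors stand in for the MLP-projected BiLSTM states.
d = 8
rng = np.random.default_rng(0)
h_dep, h_head = rng.normal(size=d), rng.normal(size=d)
U, u, b = rng.normal(size=(d, d)), rng.normal(size=d), 0.0
print(biaffine_arc_score(h_dep, h_head, U, u, b))
```

In a full parser this score would be computed for every word pair in the sentence, and each word's head would then be chosen (or decoded with a tree constraint) from the resulting score matrix.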
PixelVAE: A Latent Variable Model for Natural Images | Natural image modeling is a landmark challenge of unsupervised learning. Variational Autoencoders (VAEs) learn a useful latent representation and model global structure well but have difficulty capturing small details. PixelCNN models details very well, but lacks a latent code and is difficult to scale for capturing large structures. We present PixelVAE, a VAE model with an autoregressive decoder based on PixelCNN. Our model requires very few expensive autoregressive layers compared to PixelCNN and learns latent codes that are more compressed than a standard VAE while still capturing most non-trivial structure. Finally, we extend our model to a hierarchy of latent variables at different scales. Our model achieves state-of-the-art performance on binarized MNIST, competitive performance on 64 × 64 ImageNet, and high-quality samples on the LSUN bedrooms dataset. | The paper combines a hierarchical Variational Autoencoder with PixelCNNs to model the distribution of natural images. They report good (although not state of the art) likelihoods on natural images and briefly start to explore what information is encoded by the latent representations in the hierarchical VAE. I believe that combining the PixelCNN with a VAE, as was already suggested in the PixelCNN paper, is an important and interesting contribution. The encoding of high-, mid- and low-level variations at the different latent stages is interesting but seems not terribly surprising, since the image regions that the latent variables model are themselves at the corresponding scales. Showing that the PixelCNN improves the latent representation of the VAE with regard to some interesting task would be a much stronger result. Also, while the paper claims that combining the PixelCNN with the VAE reduces the number of computationally expensive autoregressive layers, it remains unclear how much more efficient their whole model is than a PixelCNN with comparable likelihood. In general, I find the clarity of the presentation wanting. For example, I agree with reviewer1 that the exact structure of their model remains unclear from the paper and would be difficult to reproduce. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPixelVAE: A Latent Variable Model for Natural Images\nNatural image modeling is a landmark challenge of unsupervised learning. Variational Autoencoders (VAEs) learn a useful latent representation and model global structure well but have difficulty capturing small details. PixelCNN models details very well, but lacks a latent code and is difficult to scale for capturing large structures. We present PixelVAE, a VAE model with an autoregressive decoder based on PixelCNN. Our model requires very few expensive autoregressive layers compared to PixelCNN and learns latent codes that are more compressed than a standard VAE while still capturing most non-trivial structure. Finally, we extend our model to a hierarchy of latent variables at different scales. Our model achieves state-of-the-art performance on binarized MNIST, competitive performance on 64 × 64 ImageNet, and high-quality samples on the LSUN bedrooms dataset.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper combines a hierarchical Variational Autoencoder with PixelCNNs to model the distribution of natural images. They report good (although not state of the art) likelihoods on natural images and briefly start to explore what information is encoded by the latent representations in the hierarchical VAE. I believe that combining the PixelCNN with a VAE, as was already suggested in the PixelCNN paper, is an important and interesting contribution. The encoding of high-, mid- and low-level variations at the different latent stages is interesting but seems not terribly surprising, since the size of the image regions the latent variables model is also at the corresponding scale. Showing that the PixelCNN improves the latent representation of the VAE with regard to some interesting task would be a much stronger result. Also, while the paper claims, that combining the PixelCNN with the VAE reduces the number of computationally expensive autoregressive layers, it remains unclear how much more efficient their whole model is than an PixelCNN with comparable likelihood. In general, I find the clarity of the presentation wanting. For example, I agree with reviewer1 that the exact structure of their model remains unclear from the paper and would be difficult to reproduce.",
"role": "assistant"
}
]
| [
"The paper combines a hierarchical Variational Autoencoder with PixelCNNs to model the distribution of natural images.",
"They report good (although not state of the art) likelihoods on natural images and briefly start to explore what information is encoded by the latent representations in the hierarchical VAE.",
"I believe that combining the PixelCNN with a VAE, as was already suggested in the PixelCNN paper, is an important and interesting contribution.",
"The encoding of high-, mid- and low-level variations at the different latent stages is interesting but seems not terribly surprising, since the size of the image regions the latent variables model is also at the corresponding scale.",
"Showing that the PixelCNN improves the latent representation of the VAE with regard to some interesting task would be a much stronger result.",
"Also, while the paper claims, that combining the PixelCNN with the VAE reduces the number of computationally expensive autoregressive layers, it remains unclear how much more efficient their whole model is than an PixelCNN with comparable likelihood.",
"In general, I find the clarity of the presentation wanting.",
"For example, I agree with reviewer1 that the exact structure of their model remains unclear from the paper and would be difficult to reproduce."
] | {
"criticism": 3,
"example": 1,
"importance_and_relevance": 3,
"materials_and_methods": 4,
"praise": 4,
"presentation_and_reporting": 1,
"results_and_discussion": 2,
"suggestion_and_solution": 1,
"total": 8
} | 0.375 | 0.125 | 0.375 | 0.5 | 0.5 | 0.125 | 0.25 | 0.125 | 8 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.375 | 1.980439 | 0.394561 |
Snapshot Ensembles: Train 1, Get M for Free | Ensembles of neural networks are known to be much more robust and accurate than individual networks. However, training multiple deep networks for model averaging is computationally expensive. In this paper, we propose a method to obtain the seemingly contradictory goal of ensembling multiple neural networks at no additional training cost. We achieve this goal by training a single neural network, converging to several local minima along its optimization path and saving the model parameters. To obtain repeated rapid convergence, we leverage recent work on cyclic learning rate schedules. The resulting technique, which we refer to as Snapshot Ensembling, is simple, yet surprisingly effective. We show in a series of experiments that our approach is compatible with diverse network architectures and learning tasks. It consistently yields lower error rates than state-of-the-art single models at no additional training cost, and compares favorably with traditional network ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain error rates of 3.4% and 17.4% respectively. | I don't have much to add to my pre-review questions. The main thing I'd like to see that would strengthen my review further is a larger-scale evaluation, more discussion of the hyperparameters, etc. Where test errors are reported for snapshot ensembles, it would be useful to report statistics about the performance of individual ensemble members for comparison (mean and standard deviation, maybe the best single member's error rate). | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSnapshot Ensembles: Train 1, Get M for Free\nEnsembles of neural networks are known to be much more robust and accurate than individual networks. However, training multiple deep networks for model averaging is computationally expensive. In this paper, we propose a method to obtain the seemingly contradictory goal of ensembling multiple neural networks at no additional training cost. We achieve this goal by training a single neural network, converging to several local minima along its optimization path and saving the model parameters. To obtain repeated rapid convergence, we leverage recent work on cyclic learning rate schedules. The resulting technique, which we refer to as Snapshot Ensembling, is simple, yet surprisingly effective. We show in a series of experiments that our approach is compatible with diverse network architectures and learning tasks. It consistently yields lower error rates than state-of-the-art single models at no additional training cost, and compares favorably with traditional network ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain error rates of 3.4% and 17.4% respectively.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nI don*t have much to add to my pre-review questions. The main thing I*d like to see that would strengthen my review further is a larger scale evaluation, more discussion of the hyperparameters, etc. Where test error are reported for snapshot ensembles it would be useful to report statistics about the performance of individual ensemble members for comparison (mean and standard deviation, maybe best single member*s error rate).",
"role": "assistant"
}
]
| [
"I don*t have much to add to my pre-review questions.",
"The main thing I*d like to see that would strengthen my review further is a larger scale evaluation, more discussion of the hyperparameters, etc.",
"Where test error are reported for snapshot ensembles it would be useful to report statistics about the performance of individual ensemble members for comparison (mean and standard deviation, maybe best single member*s error rate)."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 0,
"materials_and_methods": 2,
"praise": 0,
"presentation_and_reporting": 1,
"results_and_discussion": 2,
"suggestion_and_solution": 2,
"total": 3
} | 0.333333 | 0 | 0 | 0.666667 | 0 | 0.333333 | 0.666667 | 0.666667 | 3 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 2.666667 | 1.088424 | 1.578243 |
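The abstract above leans on cyclic learning rate schedules to reach several local minima in one training run. A minimal sketch of the usual shifted-cosine cycle is given below; the function name and the exact restart bookkeeping are assumptions for illustration, not the paper's verbatim schedule.

```python
import math

def snapshot_lr(t, T, M, lr0):
    """Cosine-annealed cyclic learning rate: restarts to lr0 at the start of
    each of M cycles over T total iterations and decays toward 0, so a model
    snapshot can be saved at each cycle's end (near a local minimum)."""
    cycle_len = math.ceil(T / M)
    x = (t % cycle_len) / cycle_len  # position within the current cycle, in [0, 1)
    return lr0 / 2.0 * (math.cos(math.pi * x) + 1.0)

# Example: 6 snapshots over 60k iterations, starting at learning rate 0.1.
for t in (0, 5000, 9999, 10000):
    print(t, round(snapshot_lr(t, 60000, 6, 0.1), 4))
```

At t = 9999 the rate is near zero (end of the first cycle, where a snapshot would be saved), and at t = 10000 it jumps back to 0.1.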
Training Agent for First-Person Shooter Game with Actor-Critic Curriculum Learning | In this paper, we propose a novel framework for training vision-based agent for First-Person Shooter (FPS) Game, in particular Doom. Our framework combines the state-of-the-art reinforcement learning approach (Asynchronous Advantage Actor-Critic (A3C) model) with curriculum learning. Our model is simple in design and only uses game states from the AI side, rather than using opponents' information. On a known map, our agent won 10 out of the 11 attended games and the champion of Track1 in ViZDoom AI Competition 2016 by a large margin, 35\% higher score than the second place. | This paper basically applies A3C to 3D spatial navigation tasks. - This is not the first time A3C has been applied to 3D navigation. In fact, the original paper reported these experiments. Although the experimental results are great, I am not sure if this paper has any additional insights to warrant acceptance as a conference paper. It might make more sense as a workshop paper - Are the graphs in Fig 5 constructed using a single hyper-parameter sweep? I think the authors should report results with many random initializations to make the comparisons more robust - Overall the two main ideas in this paper -- A3C and curricula -- are not really novel, but the authors do make use of them in a real system. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTraining Agent for First-Person Shooter Game with Actor-Critic Curriculum Learning\nIn this paper, we propose a novel framework for training vision-based agent for First-Person Shooter (FPS) Game, in particular Doom. Our framework combines the state-of-the-art reinforcement learning approach (Asynchronous Advantage Actor-Critic (A3C) model) with curriculum learning. Our model is simple in design and only uses game states from the AI side, rather than using opponents* information. On a known map, our agent won 10 out of the 11 attended games and the champion of Track1 in ViZDoom AI Competition 2016 by a large margin, 35\\% higher score than the second place.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper basically applies A3C to 3D spatial navigation tasks. - This is not the first time A3C has been applied to 3D navigation. In fact the original paper reported these experiments. Although the experimental results are great, I am not sure if this paper has any additional insights to warrant itself as a conference paper. It might make more sense as a workshop paper - Are the graphs in Fig 5 constructed using a single hyper-parameter sweep? I think the authors should report results with many random initializations to make the comparisons more robust - Overall the two main ideas in this paper -- A3C and curriculums -- are not really novel but the authors do make use of them in a real system.",
"role": "assistant"
}
]
| [
"This paper basically applies A3C to 3D spatial navigation tasks.",
"- This is not the first time A3C has been applied to 3D navigation.",
"In fact the original paper reported these experiments.",
"Although the experimental results are great, I am not sure if this paper has any additional insights to warrant itself as a conference paper.",
"It might make more sense as a workshop paper - Are the graphs in Fig 5 constructed using a single hyper-parameter sweep?",
"I think the authors should report results with many random initializations to make the comparisons more robust - Overall the two main ideas in this paper -- A3C and curriculums -- are not really novel but the authors do make use of them in a real system."
] | {
"criticism": 2,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 5,
"praise": 1,
"presentation_and_reporting": 0,
"results_and_discussion": 2,
"suggestion_and_solution": 2,
"total": 6
} | 0.333333 | 0.166667 | 0.166667 | 0.833333 | 0.166667 | 0 | 0.333333 | 0.333333 | 6 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 2.333333 | 1.559994 | 0.773339 |
Decomposing Motion and Content for Natural Video Sequence Prediction | We propose a deep neural network for the prediction of future frames in natural video sequences. To effectively handle complex evolution of pixels in videos, we propose to decompose the motion and content, two key components generating dynamics in videos. Our model is built upon the Encoder-Decoder Convolutional Neural Network and Convolutional LSTM for pixel-level prediction, which independently capture the spatial layout of an image and the corresponding temporal dynamics. By independently modeling motion and content, predicting the next frame reduces to converting the extracted content features into the next frame content by the identified motion features, which simplifies the task of prediction. Our model is end-to-end trainable over multiple time steps, and naturally learns to decompose motion and content without separate training. We evaluate the proposed network architecture on human activity videos using KTH, Weizmann action, and UCF-101 datasets. We show state-of-the-art performance in comparison to recent approaches. To the best of our knowledge, this is the first end-to-end trainable network architecture with motion and content separation to model the spatio-temporal dynamics for pixel-level future prediction in natural videos. | 1) Summary This paper investigates the usefulness of decoupling appearance and motion information for the problem of future frame prediction in natural videos. The method introduces a novel two-stream encoder-decoder architecture, MCNet, consisting of two separate encoders -- a convnet on single frames and a convnet+LSTM on sequences of temporal differences -- followed by combination layers (stacking + convolutions) and a deconvolutional network decoder also leveraging residual connections from the two encoders. The architecture is trained end-to-end using the objective and adversarial training strategy of Mathieu et al. 2) Contributions + The architecture seems novel and is well motivated. It is also somewhat related to the two-stream networks of Simonyan & Zisserman, which are very effective for real-world action recognition. + The qualitative results are numerous, insightful, and very convincing (including quantitatively) on KTH & Weizmann, showing the benefits of decoupling content and motion for simple scenes with periodic motions, as well as the need for residual connections. 3) Suggestions for improvement Static dataset bias: In response to the pre-review concerns about the observed static nature of the qualitative results, the authors added a simple baseline consisting of copying the pixels of the last observed frame. On the one hand, the updated experiments on KTH confirm the good results of the method in these conditions. On the other hand, the fact that this baseline is better than all other methods (not just the authors') on UCF101 casts some doubt on whether reporting average statistics on UCF101 is insightful enough. Although the authors provide some qualitative analysis pertaining to the quantity of motion, further quantitative analysis seems necessary to validate the performance of this and other methods on future frame prediction. At least, the results on UCF101 should be disambiguated with respect to the type of scene, for instance by measuring the overall quantity of motion (e.g., l2 norm of time differences) and reporting PSNR and SSIM per quartile / decile. Ideally, other realistic datasets besides UCF101 should be considered as a complement. 
For instance, the Hollywood 2 dataset of Marszalek et al. would be a good candidate, as it focuses on movies and often contains complex actor, camera, and background motions that would make the 'pixel-copying' baseline very poor. Experiments on video datasets beyond actions, like the KITTI tracking benchmark, would also greatly improve the paper. Additional recognition experiments: As mentioned in pre-review questions, further UCF-101 experiments on action recognition tasks by fine-tuning would also greatly improve the paper. Classifying videos indeed requires learning both appearance and motion features, and the two-stream encoder + combination layers of the MCNet+Res architecture seem particularly well suited, if they indeed allowed for unsupervised pre-training of content and motion representations, as postulated by the authors. These experiments would also contribute to dispelling the aforementioned concerns about the static nature of the learned representations. 4) Conclusion Overall, this paper proposes an interesting architecture for an important problem, but requires additional experiments to substantiate the claims made by the authors. If the authors carry out the aforementioned additional experiments and the results are convincing, then this paper would be clearly relevant for ICLR. 5) Post-rebuttal final decision The authors did a significant amount of additional work, following the suggestions made by the reviewers, and providing additional compelling experimental evidence. This makes it one of the most experimentally thorough papers on this problem. I therefore increase my rating and suggest accepting this paper. Good job! | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDecomposing Motion and Content for Natural Video Sequence Prediction\nWe propose a deep neural network for the prediction of future frames in natural video sequences. To effectively handle complex evolution of pixels in videos, we propose to decompose the motion and content, two key components generating dynamics in videos. Our model is built upon the Encoder-Decoder Convolutional Neural Network and Convolutional LSTM for pixel-level prediction, which independently capture the spatial layout of an image and the corresponding temporal dynamics. By independently modeling motion and content, predicting the next frame reduces to converting the extracted content features into the next frame content by the identified motion features, which simplifies the task of prediction. Our model is end-to-end trainable over multiple time steps, and naturally learns to decompose motion and content without separate training. We evaluate the pro- posed network architecture on human activity videos using KTH, Weizmann action, and UCF-101 datasets. We show state-of-the-art performance in comparison to recent approaches. To the best of our knowledge, this is the first end-to-end trainable network architecture with motion and content separation to model the spatio-temporal dynamics for pixel-level future prediction in natural videos.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n1) Summary This paper investigates the usefulness of decoupling appearance and motion information for the problem of future frame prediction in natural videos. The method introduces a novel two-stream encoder-decoder architecture, MCNet, consisting of two separate encoders -- a convnet on single frames and a convnet+LSTM on sequences of temporal differences -- followed by combination layers (stacking + convolutions) and a deconvolutional network decoder leveraging also residual connections from the two encoders. The architecture is trained end-to-end using the objective and adversarial training strategy of Mathieu et al. 2) Contributions + The architecture seems novel and is well motivated. It is also somewhat related to the two-stream networks of Simonyan & Zisserman, which are very effective for real-world action recognition. + The qualitative results are numerous, insightful, and very convincing (including quantitatively) on KTH & Weizmann, showing the benefits of decoupling content and motion for simple scenes with periodic motions, as well as the need for residual connections. 3) Suggestions for improvement Static dataset bias: In response to the pre-review concerns about the observed static nature of the qualitative results, the authors added a simple baseline consisting in copying the pixels of the last observed frame. On the one hand, the updated experiments on KTH confirm the good results of the method in these conditions. On the other hand, the fact that this baseline is better than all other methods (not just the authors*s) on UCF101 casts some doubts on whether reporting average statistics on UCF101 is insightful enough. Although the authors provide some qualitative analysis pertaining to the quantity of motion, further quantitative analysis seems necessary to validate the performance of this and other methods on future frame prediction. At least, the results on UCF101 should be disambiguated with respect to the type of scene, for instance by measuring the overall quantity of motion (e.g., l2 norm of time differences) and reporting PSNR and SSIM per quartile / decile. 
Ideally, other realistic datasets than UCF101 should be considered in complement. For instance, the Hollywood 2 dataset of Marszalek et al would be a good candidate, as it focuses on movies and often contains complex actor, camera, and background motions that would make the *pixel-copying* baseline very poor. Experiments on video datasets beyond actions, like the KITTI tracking benchmark, would also greatly improve the paper. Additional recognition experiments: As mentioned in pre-review questions, further UCF-101 experiments on action recognition tasks by fine-tuning would also greatly improve the paper. Classifying videos indeed requires learning both appearance and motion features, and the two-stream encoder + combination layers of the MCNet+Res architecture seem particularly adapted, if they indeed allowed for unsupervised pre-trainining of content and motion representations, as postulated by the authors. These experiments would also contribute to dispelling the aforementioned concerns about the static nature of the learned representations. 4) Conclusion Overall, this paper proposes an interesting architecture for an important problem, but requires additional experiments to substantiate the claims made by the authors. If the authors make the aforementioned additional experiments and the results are convincing, then this paper would be clearly relevant for ICLR. 5) Post-rebuttal final decision The authors did a significant amount of additional work, following the suggestions made by the reviewers, and providing additional compelling experimental evidence. This makes this one of the most experimentally thorough ones for this problem. I, therefore, increase my rating, and suggest to accept this paper. Good job!",
"role": "assistant"
}
]
| [
"1) Summary This paper investigates the usefulness of decoupling appearance and motion information for the problem of future frame prediction in natural videos.",
"The method introduces a novel two-stream encoder-decoder architecture, MCNet, consisting of two separate encoders -- a convnet on single frames and a convnet+LSTM on sequences of temporal differences -- followed by combination layers (stacking + convolutions) and a deconvolutional network decoder leveraging also residual connections from the two encoders.",
"The architecture is trained end-to-end using the objective and adversarial training strategy of Mathieu et al.",
"2) Contributions + The architecture seems novel and is well motivated.",
"It is also somewhat related to the two-stream networks of Simonyan & Zisserman, which are very effective for real-world action recognition.",
"+ The qualitative results are numerous, insightful, and very convincing (including quantitatively) on KTH & Weizmann, showing the benefits of decoupling content and motion for simple scenes with periodic motions, as well as the need for residual connections.",
"3) Suggestions for improvement Static dataset bias: In response to the pre-review concerns about the observed static nature of the qualitative results, the authors added a simple baseline consisting in copying the pixels of the last observed frame.",
"On the one hand, the updated experiments on KTH confirm the good results of the method in these conditions.",
"On the other hand, the fact that this baseline is better than all other methods (not just the authors*s) on UCF101 casts some doubts on whether reporting average statistics on UCF101 is insightful enough.",
"Although the authors provide some qualitative analysis pertaining to the quantity of motion, further quantitative analysis seems necessary to validate the performance of this and other methods on future frame prediction.",
"At least, the results on UCF101 should be disambiguated with respect to the type of scene, for instance by measuring the overall quantity of motion (e.g., l2 norm of time differences) and reporting PSNR and SSIM per quartile / decile.",
"Ideally, other realistic datasets than UCF101 should be considered in complement.",
"For instance, the Hollywood 2 dataset of Marszalek et al would be a good candidate, as it focuses on movies and often contains complex actor, camera, and background motions that would make the *pixel-copying* baseline very poor.",
"Experiments on video datasets beyond actions, like the KITTI tracking benchmark, would also greatly improve the paper.",
"Additional recognition experiments: As mentioned in pre-review questions, further UCF-101 experiments on action recognition tasks by fine-tuning would also greatly improve the paper.",
"Classifying videos indeed requires learning both appearance and motion features, and the two-stream encoder + combination layers of the MCNet+Res architecture seem particularly adapted, if they indeed allowed for unsupervised pre-trainining of content and motion representations, as postulated by the authors.",
"These experiments would also contribute to dispelling the aforementioned concerns about the static nature of the learned representations.",
"4) Conclusion Overall, this paper proposes an interesting architecture for an important problem, but requires additional experiments to substantiate the claims made by the authors.",
"If the authors make the aforementioned additional experiments and the results are convincing, then this paper would be clearly relevant for ICLR.",
"5) Post-rebuttal final decision The authors did a significant amount of additional work, following the suggestions made by the reviewers, and providing additional compelling experimental evidence.",
"This makes this one of the most experimentally thorough ones for this problem.",
"I, therefore, increase my rating, and suggest to accept this paper.",
"Good job!"
] | {
"criticism": 1,
"example": 1,
"importance_and_relevance": 3,
"materials_and_methods": 18,
"praise": 8,
"presentation_and_reporting": 2,
"results_and_discussion": 8,
"suggestion_and_solution": 9,
"total": 23
} | 0.043478 | 0.043478 | 0.130435 | 0.782609 | 0.347826 | 0.086957 | 0.347826 | 0.391304 | 23 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.173913 | 0.59567 | 1.578243 |
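The review above asks for UCF101 results broken down by quantity of motion (l2 norm of temporal differences) with PSNR and SSIM reported per quartile. A minimal sketch of that analysis for PSNR is given below, assuming test clips arrive as NumPy arrays; the function names and binning details are illustrative, not from the paper.

```python
import numpy as np

def psnr(pred, target, max_val=255.0):
    """Peak signal-to-noise ratio between a predicted and a ground-truth frame."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def psnr_by_motion_quartile(preds, targets, prev_frames):
    """Mean PSNR per motion quartile, where motion is the l2 norm of the
    temporal difference between each target frame and the frame before it."""
    motion = np.array([np.linalg.norm(t.astype(np.float64) - p.astype(np.float64))
                       for t, p in zip(targets, prev_frames)])
    scores = np.array([psnr(pr, tg) for pr, tg in zip(preds, targets)])
    edges = np.quantile(motion, [0.25, 0.5, 0.75])
    bins = np.digitize(motion, edges)  # quartile index 0..3 per clip
    return {q: scores[bins == q].mean() for q in range(4) if np.any(bins == q)}
```

The same grouping applied to SSIM (and to the pixel-copying baseline) would show whether a model's average advantage comes from low-motion or high-motion scenes.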
Towards a Neural Statistician | An efficient learner is one who reuses what they already know to tackle a new problem. For a machine learner, this means understanding the similarities amongst datasets. In order to do this, one must take seriously the idea of working with datasets, rather than datapoints, as the key objects to model. Towards this goal, we demonstrate an extension of a variational autoencoder that can learn a method for computing representations, or statistics, of datasets in an unsupervised fashion. The network is trained to produce statistics that encapsulate a generative model for each dataset. Hence the network enables efficient learning from new datasets for both unsupervised and supervised tasks. We show that we are able to learn statistics that can be used for: clustering datasets, transferring generative models to new datasets, selecting representative samples of datasets and classifying previously unseen classes. We refer to our model as a neural statistician, and by this we mean a neural network that can learn to compute summary statistics of datasets without supervision. | The authors introduce a variant of the variational autoencoder (VAE) that models dataset-level latent variables. The idea is clearly motivated and well described. In my mind the greatest contribution of this paper is the movement beyond the relatively simple graphical model structure of the traditional VAEs and the introduction of more interesting structures to the deep learning community. Comments: - It's not clear to me why this should be called a 'statistician'. Learning an approximate posterior over summary statistics is not the only imaginable way to summarize a dataset with a neural network. One could consider a maximum likelihood approach, etc. In general it felt like the paper could be clearer if it avoided coining new terms like 'statistic network' and stuck to the more accurate 'approximate posterior'. - The experiments are nice, and I appreciate the response to my question regarding 'one shot generation'. I still think that language needs to be clarified, specifically at the end of page 6. My understanding of Figure 5 is the following: Take an input set, compute the approximate posterior over the context vector, then generate from the forward model given samples from the approximate posterior. I would like clarification on the following: (a) Are the data-point-dependent vectors z generated from the forward model or taken from the approximate posterior? (b) I agree that the samples are of high quality, but that is not a quantified statement. The advantage of VAEs over GANs is that we have natural ways of computing log-probabilities. To that end, one 'proper' way of computing the 'one shot generation' performance is to report log p(x | c) (where c is sampled from the approximate posterior) or log p(x) for held-out datasets. I suspect that log probability performance of these networks relative to a vanilla VAE without the context latent variable will be impressive. I still don't see a reason not to include that. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. 
Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTowards a Neural Statistician\nAn efficient learner is one who reuses what they already know to tackle a new problem. For a machine learner, this means understanding the similarities amongst datasets. In order to do this, one must take seriously the idea of working with datasets, rather than datapoints, as the key objects to model. Towards this goal, we demonstrate an extension of a variational autoencoder that can learn a method for computing representations, or statistics, of datasets in an unsupervised fashion. The network is trained to produce statistics that encapsulate a generative model for each dataset. Hence the network enables efficient learning from new datasets for both unsupervised and supervised tasks. We show that we are able to learn statistics that can be used for: clustering datasets, transferring generative models to new datasets, selecting representative samples of datasets and classifying previously unseen classes. We refer to our model as a neural statistician, and by this we mean a neural network that can learn to compute summary statistics of datasets without supervision.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors introduce a variant of the variational autoencoder (VAE) that models dataset-level latent variables. The idea is clearly motivated and well described. In my mind the greatest contribution of this paper is the movement beyond the relatively simple graphical model structure of the traditional VAEs and the introduction of more interesting structures to the deep learning community. Comments: - It*s not clear to me why this should be called a *statistician*. Learning an approximate posterior over summary statistics is not the only imaginable way to summarize a dataset with a neural network. One could consider a maximum likelihood approach, etc. In general it felt like the paper could be more clear, if it avoided coining new terms like *statistic network* and stuck to the more accurate *approximate posterior*. - The experiments are nice, and I appreciate the response to my question regarding *one shot generation*. I still think that language needs to be clarified, specifically at the end of page 6. My understanding of Figure 5 is the following: Take an input set, compute the approximate posterior over the context vector, then generate from the forward model given samples from the approximate posterior. I would like clarification on the following: (a) Are the data point dependent vectors z generated from the forward model or taken from the approximate posterior? (b) I agree that the samples are of high-quality, but that is not a quantified statement. The advantage of VAEs over GANs is that we have natural ways of computing log-probabilities. To that end, one *proper* way of computing the *one shot generation* performance is to report log p(x | c) (where c is sampled from the approximate posterior) or log p(x) for held-out datasets. I suspect that log probability performance of these networks relative to a vanilla VAE without the context latent variable will be impressive. I still don*t see a reason not to include that.",
"role": "assistant"
}
]
| [
"The authors introduce a variant of the variational autoencoder (VAE) that models dataset-level latent variables.",
"The idea is clearly motivated and well described.",
"In my mind the greatest contribution of this paper is the movement beyond the relatively simple graphical model structure of the traditional VAEs and the introduction of more interesting structures to the deep learning community.",
"Comments: - It*s not clear to me why this should be called a *statistician*.",
"Learning an approximate posterior over summary statistics is not the only imaginable way to summarize a dataset with a neural network.",
"One could consider a maximum likelihood approach, etc.",
"In general it felt like the paper could be more clear, if it avoided coining new terms like *statistic network* and stuck to the more accurate *approximate posterior*.",
"- The experiments are nice, and I appreciate the response to my question regarding *one shot generation*.",
"I still think that language needs to be clarified, specifically at the end of page 6.",
"My understanding of Figure 5 is the following: Take an input set, compute the approximate posterior over the context vector, then generate from the forward model given samples from the approximate posterior.",
"I would like clarification on the following: (a) Are the data point dependent vectors z generated from the forward model or taken from the approximate posterior?",
"(b) I agree that the samples are of high-quality, but that is not a quantified statement.",
"The advantage of VAEs over GANs is that we have natural ways of computing log-probabilities.",
"To that end, one *proper* way of computing the *one shot generation* performance is to report log p(x | c) (where c is sampled from the approximate posterior) or log p(x) for held-out datasets.",
"I suspect that log probability performance of these networks relative to a vanilla VAE without the context latent variable will be impressive.",
"I still don*t see a reason not to include that."
] | {
"criticism": 3,
"example": 2,
"importance_and_relevance": 1,
"materials_and_methods": 13,
"praise": 3,
"presentation_and_reporting": 7,
"results_and_discussion": 7,
"suggestion_and_solution": 6,
"total": 16
} | 0.1875 | 0.125 | 0.0625 | 0.8125 | 0.1875 | 0.4375 | 0.4375 | 0.375 | 16 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.625 | 2.482958 | 0.142042 |
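The review above argues that a 'proper' evaluation of one-shot generation is to report log p(x | c) with c drawn from the approximate posterior. A standard importance-sampling estimator for such a quantity, given per-sample log-densities, might look as sketched below; the function and argument names are assumptions for illustration, not the paper's code.

```python
import numpy as np

def log_px_given_c(log_p_x_zc, log_p_z_c, log_q_z_xc):
    """Importance-sampled estimate of log p(x | c) from K posterior samples
    z_k ~ q(z | x, c). Each argument is a length-K array of log-densities:
    log p(x | z_k, c), log p(z_k | c), and log q(z_k | x, c). Returns
    log (1/K) sum_k exp(log p(x|z_k,c) + log p(z_k|c) - log q(z_k|x,c)),
    computed with the log-sum-exp trick for numerical stability."""
    log_w = np.asarray(log_p_x_zc) + np.asarray(log_p_z_c) - np.asarray(log_q_z_xc)
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))
```

With K = 1 this reduces to a single-sample ELBO estimate; larger K gives a tighter, less biased estimate, which is what makes the comparison against a vanilla VAE meaningful.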
Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy | We propose a method to optimize the representation and distinguishability of samples from two probability distributions, by maximizing the estimated power of a statistical test based on the maximum mean discrepancy (MMD). This optimized MMD is applied to the setting of unsupervised learning by generative adversarial networks (GAN), in which a model attempts to generate realistic samples, and a discriminator attempts to tell these apart from data samples. In this context, the MMD may be used in two roles: first, as a discriminator, either directly on the samples, or on features of the samples. Second, the MMD can be used to evaluate the performance of a generative model, by testing the model’s samples against a reference data set. In the latter role, the optimized MMD is particularly helpful, as it gives an interpretable indication of how the model and data distributions differ, even in cases where individual model samples are not easily distinguished either by eye or by classifier. | This is an interesting paper containing three contributions: 1) An expression for the variance of the quadratic-time MMD estimate, which can be efficiently minimized for kernel selection. 2) Advanced computational optimizations for permutation tests for the quadratic MMD statistic. 3) Crystal-clear examples of the importance of reducing the variance in two-sample testing (Figure 2). Regarding my criticisms, 1) The conceptual advances in this submission are modest: the use of MMD to train generative models, as well as the importance of variance-reduction in MMD were already known. 2) The quadratic-time MMD test may lead to a discriminator that is “too good” in practical applications. Since MMD quickly picks up on pixel-level artifacts, I wonder whether it could be used to train generators properly on realistic (non-binarized) data. This of course could be addressed by regularizing (smoothing) the kernel bandwidths, and for sure raises an interesting question/trade-off in generative modeling. Overall, the submission is technically sound and well-written: I recommend it for publication in ICLR 2017. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nGenerative Models and Model Criticism via Optimized Maximum Mean Discrepancy\nWe propose a method to optimize the representation and distinguishability of samples from two probability distributions, by maximizing the estimated power of a statistical test based on the maximum mean discrepancy (MMD). This optimized MMD is applied to the setting of unsupervised learning by generative adversarial networks (GAN), in which a model attempts to generate realistic samples, and a discriminator attempts to tell these apart from data samples. In this context, the MMD may be used in two roles: first, as a discriminator, either directly on the samples, or on features of the samples. Second, the MMD can be used to evaluate the performance of a generative model, by testing the model’s samples against a reference data set. In the latter role, the optimized MMD is particularly helpful, as it gives an interpretable indication of how the model and data distributions differ, even in cases where individual model samples are not easily distinguished either by eye or by classifier.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis is an interesting paper containing three contributions: 1) An expression for the variance of the quadratic-time MMD estimate, which can be efficiently minimized for kernel selection. 2) Advanced computational optimizations for permutation tests for the quadratic MMD statistic. 3) Crystal-clear examples on the importance of reducing the variance in two-sample testing (Figure 2) Regarding my criticisms, 1) The conceptual advances in this submission are modest: the use of MMD to train generative models, as well as the importance of variance-reduction in MMD were already known. 2) The quadratic-time MMD test may lead to a discriminator that is *too good* in practical applications. Since MMD quickly picks up on pixel-level artifacts, I wonder if its use would be possible to train generators properly on realistic (non-binarized) data. This of course could be addressed by regularizing (smoothing) the kernel bandwidths, and for sure raises an interesting question/trade-off in generative modeling. Overall, the submission is technically sound and well-written: I recommend it for publication in ICLR 2017.",
"role": "assistant"
}
]
| [
"This is an interesting paper containing three contributions: 1) An expression for the variance of the quadratic-time MMD estimate, which can be efficiently minimized for kernel selection.",
"2) Advanced computational optimizations for permutation tests for the quadratic MMD statistic.",
"3) Crystal-clear examples on the importance of reducing the variance in two-sample testing (Figure 2) Regarding my criticisms, 1) The conceptual advances in this submission are modest: the use of MMD to train generative models, as well as the importance of variance-reduction in MMD were already known.",
"2) The quadratic-time MMD test may lead to a discriminator that is *too good* in practical applications.",
"Since MMD quickly picks up on pixel-level artifacts, I wonder if its use would be possible to train generators properly on realistic (non-binarized) data.",
"This of course could be addressed by regularizing (smoothing) the kernel bandwidths, and for sure raises an interesting question/trade-off in generative modeling.",
"Overall, the submission is technically sound and well-written: I recommend it for publication in ICLR 2017."
] | {
"criticism": 0,
"example": 1,
"importance_and_relevance": 2,
"materials_and_methods": 6,
"praise": 3,
"presentation_and_reporting": 1,
"results_and_discussion": 2,
"suggestion_and_solution": 2,
"total": 7
} | 0 | 0.142857 | 0.285714 | 0.857143 | 0.428571 | 0.142857 | 0.285714 | 0.285714 | 7 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.428571 | 1.860404 | 0.568167 |
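Since the review turns on the quadratic-time MMD statistic, here is a minimal numpy sketch of the standard unbiased MMD^2 estimator with a Gaussian kernel, written from the textbook definition rather than the paper's code; the paper's contribution is then a variance expression for this statistic that lets the kernel parameters be tuned to maximize test power instead of being set heuristically.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2)) for rows of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2_unbiased(x, y, sigma=1.0):
    """Unbiased quadratic-time estimate of MMD^2 between samples x ~ P, y ~ Q."""
    m, n = len(x), len(y)
    kxx = gaussian_kernel(x, x, sigma)
    kyy = gaussian_kernel(y, y, sigma)
    kxy = gaussian_kernel(x, y, sigma)
    # Diagonal (i == j) terms are excluded to remove the estimator's bias.
    term_xx = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    term_yy = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_xx + term_yy - 2.0 * kxy.mean()
```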
Generalizing Skills with Semi-Supervised Reinforcement Learning | Deep reinforcement learning (RL) can acquire complex behaviors from low-level inputs, such as images. However, real-world applications of such methods require generalizing to the vast variability of the real world. Deep networks are known to achieve remarkable generalization when provided with massive amounts of labeled data, but can we provide this breadth of experience to an RL agent, such as a robot? The robot might continuously learn as it explores the world around it, even while it is deployed and performing useful tasks. However, this learning requires access to a reward function, to tell the agent whether it is succeeding or failing at its task. Such reward functions are often hard to measure in the real world, especially in domains such as robotics and dialog systems, where the reward could depend on the unknown positions of objects or the emotional state of the user. On the other hand, it is often quite practical to provide the agent with reward functions in a limited set of situations, such as when a human supervisor is present, or in a controlled laboratory setting. Can we make use of this limited supervision, and still benefit from the breadth of experience an agent might collect in the unstructured real world? In this paper, we formalize this problem setting as semi-supervised reinforcement learning (SSRL), where the reward function can only be evaluated in a set of “labeled” MDPs, and the agent must generalize its behavior to the wide range of states it might encounter in a set of “unlabeled” MDPs, by using experience from both settings. Our proposed method infers the task objective in the unlabeled MDPs through an algorithm that resembles inverse RL, using the agent’s own prior experience in the labeled MDPs as a kind of demonstration of optimal behavior. We evaluate our method on challenging, continuous control tasks that require control directly from images, and show that our approach can improve the generalization of a learned deep neural network policy by using experience for which no reward function is available. We also show that our method outperforms direct supervised learning of the reward. | The paper proposes to study the problem of semi-supervised RL where one has to distinguish between labelled MDPs that provide rewards, and unlabelled MDPs that are not associated with any reward signal. The underlying idea is very simple since it aims at simultaneously learning a policy based on the REINFORCE+entropy regularization technique, and also a model of the reward that will be used (as in inverse reinforcement learning) as feedback over unlabelled MDPs. The experiments are made on different continuous domains and show interesting results. The paper is well written, and easy to understand. It is based on a simple but efficient idea of simultaneously learning the policy and a model of the reward and the resulting algorithm exhibits interesting properties. The proposed idea is quite obvious, but the authors are the first ones to propose to test such a model. The experiments could be made stronger by mixing continuous and discrete problems but are convincing. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. 
The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nGeneralizing Skills with Semi-Supervised Reinforcement Learning\nDeep reinforcement learning (RL) can acquire complex behaviors from low-level inputs, such as images. However, real-world applications of such methods require generalizing to the vast variability of the real world. Deep networks are known to achieve remarkable generalization when provided with massive amounts of labeled data, but can we provide this breadth of experience to an RL agent, such as a robot? The robot might continuously learn as it explores the world around it, even while it is deployed and performing useful tasks. However, this learning requires access to a reward function, to tell the agent whether it is succeeding or failing at its task. Such reward functions are often hard to measure in the real world, especially in domains such as robotics and dialog systems, where the reward could depend on the unknown positions of objects or the emotional state of the user. On the other hand, it is often quite practical to provide the agent with reward functions in a limited set of situations, such as when a human supervisor is present, or in a controlled laboratory setting. Can we make use of this limited supervision, and still benefit from the breadth of experience an agent might collect in the unstructured real world? In this paper, we formalize this problem setting as semi-supervised reinforcement learning (SSRL), where the reward function can only be evaluated in a set of “labeled” MDPs, and the agent must generalize its behavior to the wide range of states it might encounter in a set of “unlabeled” MDPs, by using experience from both settings. Our proposed method infers the task objective in the unlabeled MDPs through an algorithm that resembles inverse RL, using the agent’s own prior experience in the labeled MDPs as a kind of demonstration of optimal behavior. We evaluate our method on challenging, continuous control tasks that require control directly from images, and show that our approach can improve the generalization of a learned deep neural network policy by using experience for which no reward function is available. We also show that our method outperforms direct supervised learning of the reward.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes to study the problem of semi-supervised RL where one has to distinguish between labelled MDPs that provide rewards, and unlabelled MDPs that are not associated with any reward signal. The underlying is very simple since it aims at simultaneously learning a policy based on the REINFORCE+entropy regularization technique, and also a model of the reward that will be used (as in inverse reinforcement learning) as a feedback over unlabelled MDPs. The experiments are made on different continous domains and show interesting results The paper is well written, and easy to understand. It is based on a simple but efficient idea of simultaneously learning the policy and a model of the reward and the resulting algorithm exhibit interesting properties. The proposed idea is quite obvious, but the authors are the first ones to propose to test such a model. The experiments could be made stronger by mixing continuous and discrete problems but are convincing.",
"role": "assistant"
}
]
| [
"The paper proposes to study the problem of semi-supervised RL where one has to distinguish between labelled MDPs that provide rewards, and unlabelled MDPs that are not associated with any reward signal.",
"The underlying is very simple since it aims at simultaneously learning a policy based on the REINFORCE+entropy regularization technique, and also a model of the reward that will be used (as in inverse reinforcement learning) as a feedback over unlabelled MDPs.",
"The experiments are made on different continous domains and show interesting results The paper is well written, and easy to understand.",
"It is based on a simple but efficient idea of simultaneously learning the policy and a model of the reward and the resulting algorithm exhibit interesting properties.",
"The proposed idea is quite obvious, but the authors are the first ones to propose to test such a model.",
"The experiments could be made stronger by mixing continuous and discrete problems but are convincing."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 5,
"praise": 3,
"presentation_and_reporting": 1,
"results_and_discussion": 2,
"suggestion_and_solution": 1,
"total": 6
} | 0 | 0 | 0.333333 | 0.833333 | 0.5 | 0.166667 | 0.333333 | 0.166667 | 6 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.333333 | 1.559994 | 0.773339 |
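The REINFORCE-plus-entropy objective the review refers to is easy to state concretely. Below is a minimal numpy sketch on a toy stateless bandit (the paper's actual setting involves images, continuous control, and a learned reward model, all omitted here); the baseline is dropped for brevity and the entropy gradient is taken exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def reinforce_entropy_step(theta, reward_fn, lam=0.01, lr=0.1):
    """One REINFORCE update with an entropy bonus for a softmax policy.

    Objective: J(theta) = E_pi[R] + lam * H(pi). The first term's gradient
    is estimated from a single sampled action via R * d log pi(a)/d theta
    (baseline omitted); the entropy term is differentiated exactly.
    """
    p = softmax(theta)
    a = rng.choice(len(p), p=p)
    r = reward_fn(a)
    grad_logp = -p.copy()
    grad_logp[a] += 1.0                      # d log pi(a)/d theta for softmax
    logp = np.log(p)
    grad_H = -p * (logp - (p * logp).sum())  # exact gradient of H(pi)
    return theta + lr * (r * grad_logp + lam * grad_H)

# Toy usage: a 3-armed bandit standing in for the paper's full RL setting.
theta = np.zeros(3)
for _ in range(2000):
    theta = reinforce_entropy_step(theta, reward_fn=lambda a: [0.1, 0.5, 0.9][a])
print(softmax(theta))   # mass concentrates on arm 2, softened by the entropy bonus
```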
Learning to Optimize | Algorithm design is a laborious process and often requires many iterations of ideation and validation. In this paper, we explore automating algorithm design and present a method to learn an optimization algorithm. We approach this problem from a reinforcement learning perspective and represent any particular optimization algorithm as a policy. We learn an optimization algorithm using guided policy search and demonstrate that the resulting algorithm outperforms existing hand-engineered algorithms in terms of convergence speed and/or the final objective value. | This paper adds to the literature on learning optimizers/algorithms that has gained popularity recently. The authors choose to use the framework of guided policy search at the meta-level to train the optimizers. They also opt to train on random objectives and assess transfer to a few simple tasks. As pointed out below, this is a useful addition. However, the argument of using RL vs gradients at the meta-level that appears below is not clear or convincing. I urge the authors to run an experiment comparing the two approaches and to present comparative results. This is a very important question, and the scalability of this approach could very well hinge on this fact. Indeed, demonstrating both scaling to large domains and transfer to those domains is the key challenge in this domain. In summary, the idea is a good one, but the experiments are weak. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning to Optimize\nAlgorithm design is a laborious process and often requires many iterations of ideation and validation. In this paper, we explore automating algorithm design and present a method to learn an optimization algorithm. We approach this problem from a reinforcement learning perspective and represent any particular optimization algorithm as a policy. We learn an optimization algorithm using guided policy search and demonstrate that the resulting algorithm outperforms existing hand-engineered algorithms in terms of convergence speed and/or the final objective value.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis papers adds to the literature on learning optimizers/algorithms that has gained popularity recently. The authors choose to use the framework of guided policy search at the meta-level to train the optimizers. They also opt to train on random objectives and assess transfer to a few simple tasks. As pointed below, this is a useful addition. However, the argument of using RL vs gradients at the meta-level that appears below is not clear or convincing. I urge the authors to run an experiment comparing the two approaches and to present comparative results. This is a very important question, and the scalability of this approach could very well hinge on this fact. Indeed, demonstrating both scaling to large domains and transfer to those domains is the key challenge in this domain. In summary, the idea is a good one, but the experiments are weak.",
"role": "assistant"
}
]
| [
"This papers adds to the literature on learning optimizers/algorithms that has gained popularity recently.",
"The authors choose to use the framework of guided policy search at the meta-level to train the optimizers.",
"They also opt to train on random objectives and assess transfer to a few simple tasks.",
"As pointed below, this is a useful addition.",
"However, the argument of using RL vs gradients at the meta-level that appears below is not clear or convincing.",
"I urge the authors to run an experiment comparing the two approaches and to present comparative results.",
"This is a very important question, and the scalability of this approach could very well hinge on this fact.",
"Indeed, demonstrating both scaling to large domains and transfer to those domains is the key challenge in this domain.",
"In summary, the idea is a good one, but the experiments are weak."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 4,
"materials_and_methods": 8,
"praise": 4,
"presentation_and_reporting": 1,
"results_and_discussion": 2,
"suggestion_and_solution": 2,
"total": 9
} | 0.222222 | 0 | 0.444444 | 0.888889 | 0.444444 | 0.111111 | 0.222222 | 0.222222 | 9 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.555556 | 2.303037 | 0.252519 |
Training Compressed Fully-Connected Networks with a Density-Diversity Penalty | Deep models have achieved great success on a variety of challenging tasks. However, the models that achieve great performance often have an enormous number of parameters, leading to correspondingly great demands on both computational and memory resources, especially for fully-connected layers. In this work, we propose a new “density-diversity penalty” regularizer that can be applied to fully-connected layers of neural networks during training. We show that using this regularizer results in significantly fewer parameters (i.e., high sparsity), and also significantly fewer distinct values (i.e., low diversity), so that the trained weight matrices can be highly compressed without any appreciable loss in performance. The resulting trained models can hence reside on computational platforms (e.g., portables, Internet-of-Things devices) where it otherwise would be prohibitive. | The method proposes to compress the weight matrices of deep networks using a new density-diversity penalty together with a computing trick (sorting weights) to make computation affordable and a strategy of tying weights. This density-diversity penalty consists of an added cost corresponding to the l2-norm of the weights (density) and the l1-norm of all the pairwise differences in a layer. Regularly, the most frequent value in the weight matrix is set to zero to encourage sparsity. As weights collapse to the same values with the diversity penalty, they are tied together and then updated using the averaged gradient. The training process then alternates between training with 1. the density-diversity penalty and untied weights, and 2. training without this penalty but with tied weights. The experiments on two datasets (MNIST for vision and TIMIT for speech) show that the method achieves very good compression rates without loss of performance. The paper is presented very clearly, presents very interesting ideas and seems to be state of the art for compression. The approach opens many new avenues of research and the strategy of weight-tying may be of great interest outside of the compression domain to learn regularities in data. The result tables are a bit confusing, unfortunately. Minor issues: p1 English mistake: “while networks *that* consist of convolutional layers”. p6-p7: Tables 1, 2, 3 are confusing. Compared to the baseline (DC), your method (DP) seems to perform worse: in Table 1 overall, Table 2 overall FC, Table 3 overall, DP is less sparse and more diverse than the DC baseline. This would suggest a worse compression rate for DP and is inconsistent with the text which says they should be similar or better. I assume the sparsity value is inverted and that you in fact report the number of non-modal values as a fraction of the total. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? 
Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTraining Compressed Fully-Connected Networks with a Density-Diversity Penalty\nDeep models have achieved great success on a variety of challenging tasks. How- ever, the models that achieve great performance often have an enormous number of parameters, leading to correspondingly great demands on both computational and memory resources, especially for fully-connected layers. In this work, we propose a new “density-diversity penalty” regularizer that can be applied to fully-connected layers of neural networks during training. We show that using this regularizer results in significantly fewer parameters (i.e., high sparsity), and also significantly fewer distinct values (i.e., low diversity), so that the trained weight matrices can be highly compressed without any appreciable loss in performance. The resulting trained models can hence reside on computational platforms (e.g., portables, Internet-of-Things devices) where it otherwise would be prohibitive.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe method proposes to compress the weight matrices of deep networks using a new density-diversity penalty together with a computing trick (sorting weights) to make computation affordable and a strategy of tying weights. This density-diversity penalty consists of an added cost corresponding to the l2-norm of the weights (density) and the l1-norm of all the pairwise differences in a layer. Regularly, the most frequent value in the weight matrix is set to zero to encourage sparsity. As weights collapse to the same values with the diversity penalty, they are tied together and then updated using the averaged gradient. The training process then alternates between training with 1. the density-diversity penalty and untied weights, and 2. training without this penalty but with tied weights. The experiments on two datasets (MNIST for vision and TIMIT for speech) shows that the method achieves very good compression rates without loss of performance. The paper is presented very clearly, presents very interesting ideas and seems to be state of the art for compression. The approach opens many new avenues of research and the strategy of weight-tying may be of great interest outside of the compression domain to learn regularities in data. The result tables are a bit confusing unfortunately. minor issues: p1 english mistake: “while networks *that* consist of convolutional layers”. p6-p7 Table 1,2,3 are confusing. Compared to the baseline (DC), your method (DP) seems to perform worse: In Table 1 overall, Table 2 overall FC, Table 3 overall, DP is less sparse and more diverse than the DC baseline. This would suggest a worse compression rate for DP and is inconsistent with the text which says they should be similar or better. I assume the sparsity value is inverted and that you in fact report the number of non-modal values as a fraction of the total.",
"role": "assistant"
}
]
| [
"The method proposes to compress the weight matrices of deep networks using a new density-diversity penalty together with a computing trick (sorting weights) to make computation affordable and a strategy of tying weights.",
"This density-diversity penalty consists of an added cost corresponding to the l2-norm of the weights (density) and the l1-norm of all the pairwise differences in a layer.",
"Regularly, the most frequent value in the weight matrix is set to zero to encourage sparsity.",
"As weights collapse to the same values with the diversity penalty, they are tied together and then updated using the averaged gradient.",
"The training process then alternates between training with 1. the density-diversity penalty and untied weights, and 2. training without this penalty but with tied weights.",
"The experiments on two datasets (MNIST for vision and TIMIT for speech) shows that the method achieves very good compression rates without loss of performance.",
"The paper is presented very clearly, presents very interesting ideas and seems to be state of the art for compression.",
"The approach opens many new avenues of research and the strategy of weight-tying may be of great interest outside of the compression domain to learn regularities in data.",
"The result tables are a bit confusing unfortunately.",
"minor issues: p1 english mistake: “while networks *that* consist of convolutional layers”.",
"p6-p7 Table 1,2,3 are confusing.",
"Compared to the baseline (DC), your method (DP) seems to perform worse: In Table 1 overall, Table 2 overall FC, Table 3 overall, DP is less sparse and more diverse than the DC baseline.",
"This would suggest a worse compression rate for DP and is inconsistent with the text which says they should be similar or better.",
"I assume the sparsity value is inverted and that you in fact report the number of non-modal values as a fraction of the total."
] | {
"criticism": 5,
"example": 2,
"importance_and_relevance": 2,
"materials_and_methods": 9,
"praise": 3,
"presentation_and_reporting": 4,
"results_and_discussion": 4,
"suggestion_and_solution": 1,
"total": 14
} | 0.357143 | 0.142857 | 0.142857 | 0.642857 | 0.214286 | 0.285714 | 0.285714 | 0.071429 | 14 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.142857 | 2.127075 | 0.015782 |
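To pin down the penalty the review describes, here is a minimal numpy sketch of a per-layer density-diversity cost. The exact weighting coefficients and whether the l2 term is squared are assumptions on my part, and the sorted-sum identity in the comment is one plausible reading of the sorting trick mentioned above.

```python
import numpy as np

def density_diversity_penalty(W, lam_density=1e-4, lam_diversity=1e-6):
    """Penalty for one layer: lam_d * ||w||_2^2 + lam_v * sum_{i<j} |w_i - w_j|.

    The pairwise term is O(n^2) if done naively; after sorting,
    sum_{i<j} (w_(j) - w_(i)) = sum_j (2j - n + 1) * w_(j) (0-indexed),
    so the value (and a subgradient) costs only O(n log n).
    """
    w = W.ravel()
    n = w.size
    density = lam_density * np.sum(w ** 2)    # squared l2, for smoothness
    w_sorted = np.sort(w)
    coeff = 2.0 * np.arange(n) - n + 1        # how often each sorted entry is +/-
    diversity = lam_diversity * np.dot(coeff, w_sorted)
    return density + diversity
```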
Optimal Binary Autoencoding with Pairwise Correlations | We formulate learning of a binary autoencoder as a biconvex optimization problem which learns from the pairwise correlations between encoded and decoded bits. Among all possible algorithms that use this information, ours finds the autoencoder that reconstructs its inputs with worst-case optimal loss. The optimal decoder is a single layer of artificial neurons, emerging entirely from the minimax loss minimization, and with weights learned by convex optimization. All this is reflected in competitive experimental results, demonstrating that binary autoencoding can be done efficiently by conveying information in pairwise correlations in an optimal fashion. | The paper presents a novel look at binary auto-encoders, formulating the objective function as a min-max reconstruction error over a training set given the observed intermediate representations. The author shows that this formulation leads to a bi-convex problem that can be solved by alternating minimisation methods; this part is non-trivial and is the main contribution of the paper. Proof-of-concept experiments are performed, showing improvements for 1-hidden layer auto-encoders with respect to a vanilla approach. The experimental section is fairly weak because the literature on auto-encoders is huge and many variants were shown to perform better than straightforward approaches without being more complicated (e.g., denoising auto-encoders). Yet, the paper presents an analysis that leads to a new learning algorithm for an old problem, and is likely worth discussing. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOptimal Binary Autoencoding with Pairwise Correlations\nWe formulate learning of a binary autoencoder as a biconvex optimization problem which learns from the pairwise correlations between encoded and decoded bits. Among all possible algorithms that use this information, ours finds the autoencoder that reconstructs its inputs with worst-case optimal loss. The optimal decoder is a single layer of artificial neurons, emerging entirely from the minimax loss minimization, and with weights learned by convex optimization. All this is reflected in competitive experimental results, demonstrating that binary autoencoding can be done efficiently by conveying information in pairwise correlations in an optimal fashion.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper presents a novel look at binary auto-encoders, formulating the objective function as a min-max reconstruction error over a training set given the observed intermediate representations. The author shows that this formulation leads to a bi-convex problem that can be solved by alternating minimisation methods; this part is non-trivial and is the main contribution of the paper. Proof-of-concept experiments are performed, showing improvements for 1-hidden layer auto-encoders with respect to a vanilla approach. The experimental section is fairly weak because the literature on auto-encoders is huge and many variants were shown to perform better than straightforward approaches without being more complicated (e.g., denoising auto-encoders). Yet, the paper presents an analysis that leads to a new learning algorithm for an old problem, and is likely worth discussing.",
"role": "assistant"
}
]
| [
"The paper presents a novel look at binary auto-encoders, formulating the objective function as a min-max reconstruction error over a training set given the observed intermediate representations.",
"The author shows that this formulation leads to a bi-convex problem that can be solved by alternating minimisation methods; this part is non-trivial and is the main contribution of the paper.",
"Proof-of-concept experiments are performed, showing improvements for 1-hidden layer auto-encoders with respect to a vanilla approach.",
"The experimental section is fairly weak because the literature on auto-encoders is huge and many variants were shown to perform better than straightforward approaches without being more complicated (e.g., denoising auto-encoders).",
"Yet, the paper presents an analysis that leads to a new learning algorithm for an old problem, and is likely worth discussing."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 5,
"praise": 1,
"presentation_and_reporting": 0,
"results_and_discussion": 2,
"suggestion_and_solution": 1,
"total": 5
} | 0.4 | 0 | 0.4 | 1 | 0.2 | 0 | 0.4 | 0.2 | 5 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.6 | 1.589924 | 1.010076 |
On the Quantitative Analysis of Decoder-Based Generative Models | The past several years have seen remarkable progress in generative models which produce convincing samples of images and other modalities. A shared component of some popular models such as generative adversarial networks and generative moment matching networks, is a decoder network, a parametric deep neural net that defines a generative distribution. Unfortunately, it can be difficult to quantify the performance of these models because of the intractability of log-likelihood estimation, and inspecting samples can be misleading. We propose to use Annealed Importance Sampling for evaluating log-likelihoods for decoder-based models and validate its accuracy using bidirectional Monte Carlo. Using this technique, we analyze the performance of decoder-based models, the effectiveness of existing log-likelihood estimators, the degree of overfitting, and the degree to which these models miss important modes of the data distribution. | Summary: This paper describes how to estimate log-likelihoods of currently popular decoder-based generative models using annealed importance sampling (AIS) and HMC. It validates the method using bidirectional Monte Carlo on the example of MNIST, and compares the performance of GANs and VAEs. Review: Although this seems like a fairly straightforward application of AIS to me (correct me if I missed an important trick to make this work), I very much appreciate the educational value and empirical contributions of this paper. It should lead to clarity in debates around the density estimation performance of GANs, and should enable more people to use AIS. Space permitting, it might be a good idea to try to expand the description of AIS. All the components of AIS are mentioned and a basic description of the algorithm is given, but the paper doesn’t explain well “why” the algorithm does what it does/why it works. I was initially confused by the widely different numbers in Figure 2. On first glance my expectation was that this Figure is comparing GAN, GMMN and IWAE (because of the labeling at the bottom and because of the leading words in the caption’s descriptions). Perhaps mention in the caption that (a) and (b) use continuous MNIST and (c) uses discrete MNIST. “GMMN-50” should probably be “GMMN-10”. Using reconstructions for evaluation of models may be a necessary but not a sufficient condition for a good model. Depending on the likelihood, a posterior sample might have very low density under the prior, for example. It would be great if the authors could point out and discuss the limitations of this test a bit more. Minor: Perhaps add a reference to MacKay’s density networks (MacKay, 1995) for decoder-based generative models. In Section 2.2, the authors write “the prior over z can be drastically different than the true posterior p(z|x), especially in high dimension”. I think the flow of the paper could be improved here, especially for people less familiar with importance sampling/AIS. I don’t think the relevance of the posterior for importance sampling is clear at this point in the paper. In Section 2.3 the authors claim that it is often more “meaningful” to estimate p(x) in log-space because of underflow problems. “Meaningful” seems like the wrong word here. Perhaps revise to say that it’s more practical to estimate log p(x) because of underflow problems, or to say that it’s more meaningful to estimate log p(x) because of its connection to compression/surprise/entropy. 
| iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOn the Quantitative Analysis of Decoder-Based Generative Models\nThe past several years have seen remarkable progress in generative models which produce convincing samples of images and other modalities. A shared component of some popular models such as generative adversarial networks and generative moment matching networks, is a decoder network, a parametric deep neural net that defines a generative distribution. Unfortunately, it can be difficult to quantify the performance of these models because of the intractability of log-likelihood estimation, and inspecting samples can be misleading. We propose to use Annealed Importance Sampling for evaluating log-likelihoods for decoder-based models and validate its accuracy using bidirectional Monte Carlo. Using this technique, we analyze the performance of decoder-based models, the effectiveness of existing log-likelihood estimators, the degree of overfitting, and the degree to which these models miss important modes of the data distribution.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nSummary: This paper describes how to estimate log-likelihoods of currently popular decoder-based generative models using annealed importance sampling (AIS) and HMC. It validates the method using bidirectional Monte Carlo on the example of MNIST, and compares the performance of GANs and VAEs. Review: Although this seems like a fairly straight-forward application of AIS to me (correct me if I missed an important trick to make this work), I very much appreciate the educational value and empirical contributions of this paper. It should lead to clarity in debates around the density estimation performance of GANs, and should enable more people to use AIS. Space permitting, it might be a good idea to try to expand the description of AIS. All the components of AIS are mentioned and a basic description of the algorithm is given, but the paper doesn’t explain well “why” the algorithm does what it does/why it works. I was initially confused by the widely different numbers in Figure 2. On first glance my expectation was that this Figure is comparing GAN, GMMN and IWAE (because of the labeling at the bottom and because of the leading words in the caption’s descriptions). Perhaps mention in the caption that (a) and (b) use continuous MNIST and (c) uses discrete MNIST. “GMMN-50” should probably be “GMMN-10”. Using reconstructions for evaluation of models may be a necessary but is not sufficient condition for a good model. Depending on the likelihood, a posterior sample might have very low density under the prior, for example. It would be great if the authors could point out and discuss the limitations of this test a bit more. Minor: Perhaps add a reference to MacKay’s density networks (MacKay, 1995) for decoder-based generative models. In Section 2.2, the authors write “the prior over z can be drastically different than the true posterior p(z|x), especially in high dimension”. I think the flow of the paper could be improved here, especially for people less familiar with importance sampling/AIS. I don’t think the relevance of the posterior for importance sampling is clear at this point in the paper. 
In Section 2.3 the authors claim that it is often more “meaningful” to estimate p(x) in log-space because of underflow problems. “Meaningful” seems like the wrong word here. Perhaps revise to say that it’s more practical to estimate log p(x) because of underflow problems, or to say that it’s more meaningful to estimate log p(x) because of its connection to compression/surprise/entropy.",
"role": "assistant"
}
]
| [
"Summary: This paper describes how to estimate log-likelihoods of currently popular decoder-based generative models using annealed importance sampling (AIS) and HMC.",
"It validates the method using bidirectional Monte Carlo on the example of MNIST, and compares the performance of GANs and VAEs.",
"Review: Although this seems like a fairly straight-forward application of AIS to me (correct me if I missed an important trick to make this work), I very much appreciate the educational value and empirical contributions of this paper.",
"It should lead to clarity in debates around the density estimation performance of GANs, and should enable more people to use AIS.",
"Space permitting, it might be a good idea to try to expand the description of AIS.",
"All the components of AIS are mentioned and a basic description of the algorithm is given, but the paper doesn’t explain well “why” the algorithm does what it does/why it works.",
"I was initially confused by the widely different numbers in Figure 2.",
"On first glance my expectation was that this Figure is comparing GAN, GMMN and IWAE (because of the labeling at the bottom and because of the leading words in the caption’s descriptions).",
"Perhaps mention in the caption that (a) and (b) use continuous MNIST and (c) uses discrete MNIST.",
"“GMMN-50” should probably be “GMMN-10”.",
"Using reconstructions for evaluation of models may be a necessary but is not sufficient condition for a good model.",
"Depending on the likelihood, a posterior sample might have very low density under the prior, for example.",
"It would be great if the authors could point out and discuss the limitations of this test a bit more.",
"Minor: Perhaps add a reference to MacKay’s density networks (MacKay, 1995) for decoder-based generative models.",
"In Section 2.2, the authors write “the prior over z can be drastically different than the true posterior p(z|x), especially in high dimension”.",
"I think the flow of the paper could be improved here, especially for people less familiar with importance sampling/AIS.",
"I don’t think the relevance of the posterior for importance sampling is clear at this point in the paper.",
"In Section 2.3 the authors claim that is often more “meaningful” to estimate p(x) in log-space because of underflow problems.",
"“Meaningful” seems like the wrong word here.",
"Perhaps revise to say that it’s more practical to estimate log p(x) because of underflow problems, or to say that it’s more meaningful to estimate log p(x) because of its connection to compression/surprise/entropy."
] | {
"criticism": 5,
"example": 2,
"importance_and_relevance": 1,
"materials_and_methods": 12,
"praise": 4,
"presentation_and_reporting": 10,
"results_and_discussion": 6,
"suggestion_and_solution": 9,
"total": 20
} | 0.25 | 0.1 | 0.05 | 0.6 | 0.2 | 0.5 | 0.3 | 0.45 | 20 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.45 | 1.676661 | 0.773339 |
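To make the review's discussion of AIS concrete, here is a minimal sketch of annealed importance sampling on a one-dimensional toy target whose normalizer is known. It uses random-walk Metropolis transitions rather than the HMC the paper pairs with AIS, and every constant (number of chains, temperature schedule, step size) is an illustrative choice, not a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unnormalized target: 10 * N(x; 3, 0.5^2), so the true log-normalizer
# is log(10) ~= 2.303. The prior (beta = 0 endpoint) is a standard normal.
def log_prior(x):
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def log_target(x):
    return np.log(10.0) - 0.5 * ((x - 3.0) / 0.5) ** 2 - np.log(0.5 * np.sqrt(2 * np.pi))

def log_interp(x, beta):  # geometric path between prior and target
    return (1.0 - beta) * log_prior(x) + beta * log_target(x)

n_chains = 500
betas = np.linspace(0.0, 1.0, 200)
x = rng.standard_normal(n_chains)  # exact samples from the prior at beta = 0
log_w = np.zeros(n_chains)

for b_prev, b in zip(betas[:-1], betas[1:]):
    log_w += log_interp(x, b) - log_interp(x, b_prev)  # accumulate importance weights
    # One random-walk Metropolis step leaving the beta-distribution invariant.
    prop = x + 0.5 * rng.standard_normal(n_chains)
    accept = np.log(rng.uniform(size=n_chains)) < log_interp(prop, b) - log_interp(x, b)
    x = np.where(accept, prop, x)

log_Z = np.logaddexp.reduce(log_w) - np.log(n_chains)
print(log_Z)  # should land near log(10) ~= 2.303
```

Accumulating the weights in log space is exactly the underflow issue the review's last point is about; working with log w throughout is what makes the estimate usable in practice.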
Trained Ternary Quantization | Deep neural networks are widely used in machine learning applications. However, the deployment of large neural networks models can be difficult to deploy on mobile devices with limited power budgets. To solve this problem, we propose Trained Ternary Quantization (TTQ), a method that can reduce the precision of weights in neural networks to ternary values. This method has very little accuracy degradation and can even improve the accuracy of some models (32, 44, 56-layer ResNet) on CIFAR-10 and AlexNet on ImageNet. And our AlexNet model is trained from scratch, which means it’s as easy as to train normal full precision model. We highlight our trained quantization method that can learn both ternary values and ternary assignment. During inference, only ternary values (2-bit weights) and scaling factors are needed, therefore our models are nearly 16× smaller than full- precision models. Our ternary models can also be viewed as sparse binary weight networks, which can potentially be accelerated with custom circuit. Experiments on CIFAR-10 show that the ternary models obtained by trained quantization method outperform full-precision models of ResNet-32,44,56 by 0.04%, 0.16%, 0.36%, respectively. On ImageNet, our model outperforms full-precision AlexNet model by 0.3% of Top-1 accuracy and outperforms previous ternary models by 3%. | This paper studies in depth the idea of quantizing down convolutional layers to 3 bits, with a different positive and negative per-layer scale. It goes on to provide an exhaustive analysis of performance (essentially no loss) on real benchmarks (this paper is remarkably MNIST-free). The relevance of this paper is that it likely provides a lower bound on quantization approaches that don*t sacrifice any performance, and hence can plausibly become the approach of choice for resource-constrained inference, and might suggest new hardware designs to take advantage of the proposed structure. Furthermore, the paper provides power measurements, which is really the main metric that anyone working seriously in that space cares about. (Nit: I don*t see measurements for the full-precision baseline). I would have loved to see a SOTA result on ImageNet and a result on a strong LSTM baseline to be fully convinced. I would have also liked to see discussion of the wall time to result using this training procedure. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTrained Ternary Quantization\nDeep neural networks are widely used in machine learning applications. However, the deployment of large neural networks models can be difficult to deploy on mobile devices with limited power budgets. To solve this problem, we propose Trained Ternary Quantization (TTQ), a method that can reduce the precision of weights in neural networks to ternary values. This method has very little accuracy degradation and can even improve the accuracy of some models (32, 44, 56-layer ResNet) on CIFAR-10 and AlexNet on ImageNet. And our AlexNet model is trained from scratch, which means it’s as easy as to train normal full precision model. We highlight our trained quantization method that can learn both ternary values and ternary assignment. During inference, only ternary values (2-bit weights) and scaling factors are needed, therefore our models are nearly 16× smaller than full- precision models. Our ternary models can also be viewed as sparse binary weight networks, which can potentially be accelerated with custom circuit. Experiments on CIFAR-10 show that the ternary models obtained by trained quantization method outperform full-precision models of ResNet-32,44,56 by 0.04%, 0.16%, 0.36%, respectively. On ImageNet, our model outperforms full-precision AlexNet model by 0.3% of Top-1 accuracy and outperforms previous ternary models by 3%.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper studies in depth the idea of quantizing down convolutional layers to 3 bits, with a different positive and negative per-layer scale. It goes on to provide an exhaustive analysis of performance (essentially no loss) on real benchmarks (this paper is remarkably MNIST-free). The relevance of this paper is that it likely provides a lower bound on quantization approaches that don*t sacrifice any performance, and hence can plausibly become the approach of choice for resource-constrained inference, and might suggest new hardware designs to take advantage of the proposed structure. Furthermore, the paper provides power measurements, which is really the main metric that anyone working seriously in that space cares about. (Nit: I don*t see measurements for the full-precision baseline). I would have loved to see a SOTA result on ImageNet and a result on a strong LSTM baseline to be fully convinced. I would have also liked to see discussion of the wall time to result using this training procedure.",
"role": "assistant"
}
]
| [
"This paper studies in depth the idea of quantizing down convolutional layers to 3 bits, with a different positive and negative per-layer scale.",
"It goes on to provide an exhaustive analysis of performance (essentially no loss) on real benchmarks (this paper is remarkably MNIST-free).",
"The relevance of this paper is that it likely provides a lower bound on quantization approaches that don*t sacrifice any performance, and hence can plausibly become the approach of choice for resource-constrained inference, and might suggest new hardware designs to take advantage of the proposed structure.",
"Furthermore, the paper provides power measurements, which is really the main metric that anyone working seriously in that space cares about.",
"(Nit: I don*t see measurements for the full-precision baseline).",
"I would have loved to see a SOTA result on ImageNet and a result on a strong LSTM baseline to be fully convinced.",
"I would have also liked to see discussion of the wall time to result using this training procedure."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 6,
"praise": 0,
"presentation_and_reporting": 0,
"results_and_discussion": 3,
"suggestion_and_solution": 2,
"total": 7
} | 0.142857 | 0 | 0.428571 | 0.857143 | 0 | 0 | 0.428571 | 0.285714 | 7 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 2.142857 | 1.57469 | 0.568167 |
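As a concrete reading of the quantization rule in the abstract, here is a sketch of the forward pass of trained ternary quantization on one layer. The threshold rule follows the paper's description as I understand it; the factor 0.05 and the layer size are illustrative, and in the actual method the scaling factors w_p and w_n are trained by backpropagation rather than set by hand.

```python
import numpy as np

def ternarize(w, w_p, w_n, t=0.05):
    """Map full-precision weights w to the three values {+w_p, 0, -w_n}.

    t is a threshold factor controlling sparsity (0.05 is illustrative);
    in TTQ the per-layer scaling factors w_p and w_n are learned.
    """
    delta = t * np.abs(w).max()
    q = np.zeros_like(w)
    q[w > delta] = w_p
    q[w < -delta] = -w_n
    return q

w = np.random.randn(256, 256).astype(np.float32)
q = ternarize(w, w_p=1.2, w_n=0.8)
print((q == 0).mean())  # fraction of weights pruned to zero
```

At inference time only the 2-bit assignments plus two scalars per layer need to be stored, which is where the roughly 16x reduction relative to 32-bit weights comes from; the zero entries also make the layer a sparse binary-weight computation, matching the custom-circuit point in the abstract.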
DSD: Dense-Sparse-Dense Training for Deep Neural Networks | Modern deep neural networks have a large number of parameters, making them very hard to train. We propose DSD, a dense-sparse-dense training flow, for regularizing deep neural networks and achieving better optimization performance. In the first D (Dense) step, we train a dense network to learn connection weights and importance. In the S (Sparse) step, we regularize the network by pruning the unimportant connections with small weights and retraining the network given the sparsity constraint. In the final D (re-Dense) step, we increase the model capacity by removing the sparsity constraint, re-initialize the pruned parameters from zero and retrain the whole dense network. Experiments show that DSD training can improve the performance for a wide range of CNNs, RNNs and LSTMs on the tasks of image classification, caption generation and speech recognition. On ImageNet, DSD improved the Top1 accuracy of GoogLeNet by 1.1%, VGG-16 by 4.3%, ResNet-18 by 1.2% and ResNet-50 by 1.1%, respectively. On the WSJ’93 dataset, DSD improved DeepSpeech and DeepSpeech2 WER by 2.0% and 1.1%. On the Flickr-8K dataset, DSD improved the NeuralTalk BLEU score by over 1.7. DSD is easy to use in practice: at training time, DSD incurs only one extra hyper-parameter: the sparsity ratio in the S step. At testing time, DSD doesn’t change the network architecture or incur any inference overhead. The consistent and significant performance gain of DSD experiments shows the inadequacy of the current training methods for finding the best local optimum, while DSD effectively achieves superior optimization performance for finding a better solution. DSD models are available to download at https://songhan.github.io/DSD. | Training highly non-convex deep neural networks is a very important practical problem, and this paper provides a great exploration of an interesting new idea for more effective training. The empirical evaluation both in the paper itself and in the authors’ comments during discussion convincingly demonstrates that the method achieves consistent improvements in accuracy across multiple architectures, tasks and datasets. The algorithm is very simple (alternating between training the full dense network and a sparse version of it), which is actually a positive since that means it may get adapted in practice by the research community. The paper should be revised to incorporate the additional experiments and comments from the discussion, particularly the accuracy comparisons with the same number of epochs. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDSD: Dense-Sparse-Dense Training for Deep Neural Networks\nModern deep neural networks have a large number of parameters, making them very hard to train. We propose DSD, a dense-sparse-dense training flow, for regularizing deep neural networks and achieving better optimization performance. In the first D (Dense) step, we train a dense network to learn connection weights and importance. In the S (Sparse) step, we regularize the network by pruning the unimportant connections with small weights and retraining the network given the sparsity constraint. In the final D (re-Dense) step, we increase the model capacity by removing the sparsity constraint, re-initialize the pruned parameters from zero and retrain the whole dense network. Experiments show that DSD training can improve the performance for a wide range of CNNs, RNNs and LSTMs on the tasks of image classification, caption generation and speech recognition. On ImageNet, DSD improved the Top1 accuracy of GoogLeNet by 1.1%, VGG-16 by 4.3%, ResNet-18 by 1.2% and ResNet-50 by 1.1%, respectively. On the WSJ’93 dataset, DSD improved DeepSpeech and DeepSpeech2 WER by 2.0% and 1.1%. On the Flickr-8K dataset, DSD improved the NeuralTalk BLEU score by over 1.7. DSD is easy to use in practice: at training time, DSD incurs only one extra hyper-parameter: the sparsity ratio in the S step. At testing time, DSD doesn’t change the network architecture or incur any inference overhead. The consistent and significant performance gain of DSD experiments shows the inadequacy of the current training methods for finding the best local optimum, while DSD effectively achieves superior optimization performance for finding a better solution. DSD models are available to download at https://songhan.github.io/DSD.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nTraining highly non-convex deep neural networks is a very important practical problem, and this paper provides a great exploration of an interesting new idea for more effective training. The empirical evaluation both in the paper itself and in the authors’ comments during discussion convincingly demonstrates that the method achieves consistent improvements in accuracy across multiple architectures, tasks and datasets. The algorithm is very simple (alternating between training the full dense network and a sparse version of it), which is actually a positive since that means it may get adapted in practice by the research community. The paper should be revised to incorporate the additional experiments and comments from the discussion, particularly the accuracy comparisons with the same number of epochs.",
"role": "assistant"
}
]
| [
"Training highly non-convex deep neural networks is a very important practical problem, and this paper provides a great exploration of an interesting new idea for more effective training.",
"The empirical evaluation both in the paper itself and in the authors’ comments during discussion convincingly demonstrates that the method achieves consistent improvements in accuracy across multiple architectures, tasks and datasets.",
"The algorithm is very simple (alternating between training the full dense network and a sparse version of it), which is actually a positive since that means it may get adapted in practice by the research community.",
"The paper should be revised to incorporate the additional experiments and comments from the discussion, particularly the accuracy comparisons with the same number of epochs."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 3,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 2,
"suggestion_and_solution": 1,
"total": 4
} | 0 | 0 | 0.25 | 0.75 | 0.5 | 0 | 0.5 | 0.25 | 4 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.25 | 0.971623 | 1.278377 |
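The D-S-D flow in the abstract is simple enough to capture in a few lines. The sketch below uses a toy quadratic loss as a stand-in for backpropagation on a real network; the sparsity ratio, step size, and iteration counts are illustrative, not the paper's settings.

```python
import numpy as np

def top_magnitude_mask(w, sparsity):
    """Boolean mask keeping the largest-magnitude weights; the rest are pruned."""
    k = int(sparsity * w.size)  # number of weights to prune
    if k == 0:
        return np.ones(w.shape, dtype=bool)
    cutoff = np.partition(np.abs(w), k - 1, axis=None)[k - 1]
    return np.abs(w) > cutoff

rng = np.random.default_rng(0)
w, target = rng.standard_normal(1000), rng.standard_normal(1000)
grad = lambda w: w - target  # toy quadratic loss stands in for backprop

for _ in range(100):                      # D: ordinary dense training
    w -= 0.1 * grad(w)
mask = top_magnitude_mask(w, sparsity=0.5)
for _ in range(100):                      # S: retrain under a fixed sparsity mask
    w = (w - 0.1 * grad(w)) * mask
for _ in range(100):                      # re-D: pruned weights restart from zero,
    w -= 0.1 * grad(w)                    # then the full dense network trains again
```

Note that DSD's single extra hyperparameter is exactly the sparsity argument above, and the re-dense step adds no inference overhead since the final network is fully dense again.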
Transfer Learning for Sequence Tagging with Hierarchical Recurrent Networks | Recent papers have shown that neural networks obtain state-of-the-art performance on several different sequence tagging tasks. One appealing property of such systems is their generality, as excellent performance can be achieved with a unified architecture and without task-specific feature engineering. However, it is unclear if such systems can be used for tasks without large amounts of training data. In this paper we explore the problem of transfer learning for neural sequence taggers, where a source task with plentiful annotations (e.g., POS tagging on Penn Treebank) is used to improve performance on a target task with fewer available annotations (e.g., POS tagging for microblogs). We examine the effects of transfer learning for deep hierarchical recurrent networks across domains, applications, and languages, and show that significant improvement can often be obtained. These improvements lead to improvements over the current state-of-the-art on several well-studied tasks. | The authors propose transfer learning variants for neural-net-based models, applied to a bunch of NLP tagging tasks. The field of multi-tasking is huge, and the approaches proposed here do not seem to be very novel in terms of machine learning: parts of a general architecture for NLP are shared, the amount of shared *layers* being dependent of the task of interest. The novelty lies in the type of architecture which is used in the particular setup of NLP tagging tasks. The experimental results show that the approach seems to work well when there is not much labeled data available (Figure 2). Table 3 show some limited improvement at full scale. Figure 2 results are debatable though: it seems the authors fixed the architecture size while varying the amount of labeled data; it is very likely that tuning the architecture for each size would have led to better results. Overall, while the paper reads well, the novelty seems a bit limited and the experimental section seems a bit disappointing. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTransfer Learning for Sequence Tagging with Hierarchical Recurrent Networks\nRecent papers have shown that neural networks obtain state-of-the-art performance on several different sequence tagging tasks. One appealing property of such systems is their generality, as excellent performance can be achieved with a unified architecture and without task-specific feature engineering. However, it is unclear if such systems can be used for tasks without large amounts of training data. In this paper we explore the problem of transfer learning for neural sequence taggers, where a source task with plentiful annotations (e.g., POS tagging on Penn Treebank) is used to improve performance on a target task with fewer available annotations (e.g., POS tagging for microblogs). We examine the effects of transfer learning for deep hierarchical recurrent networks across domains, applications, and languages, and show that significant improvement can often be obtained. These improvements lead to improvements over the current state-of-the-art on several well-studied tasks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors propose transfer learning variants for neural-net-based models, applied to a bunch of NLP tagging tasks. The field of multi-tasking is huge, and the approaches proposed here do not seem to be very novel in terms of machine learning: parts of a general architecture for NLP are shared, the amount of shared *layers* being dependent of the task of interest. The novelty lies in the type of architecture which is used in the particular setup of NLP tagging tasks. The experimental results show that the approach seems to work well when there is not much labeled data available (Figure 2). Table 3 show some limited improvement at full scale. Figure 2 results are debatable though: it seems the authors fixed the architecture size while varying the amount of labeled data; it is very likely that tuning the architecture for each size would have led to better results. Overall, while the paper reads well, the novelty seems a bit limited and the experimental section seems a bit disappointing.",
"role": "assistant"
}
]
| [
"The authors propose transfer learning variants for neural-net-based models, applied to a bunch of NLP tagging tasks.",
"The field of multi-tasking is huge, and the approaches proposed here do not seem to be very novel in terms of machine learning: parts of a general architecture for NLP are shared, the amount of shared *layers* being dependent of the task of interest.",
"The novelty lies in the type of architecture which is used in the particular setup of NLP tagging tasks.",
"The experimental results show that the approach seems to work well when there is not much labeled data available (Figure 2).",
"Table 3 show some limited improvement at full scale.",
"Figure 2 results are debatable though: it seems the authors fixed the architecture size while varying the amount of labeled data; it is very likely that tuning the architecture for each size would have led to better results.",
"Overall, while the paper reads well, the novelty seems a bit limited and the experimental section seems a bit disappointing."
] | {
"criticism": 3,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 7,
"praise": 2,
"presentation_and_reporting": 2,
"results_and_discussion": 2,
"suggestion_and_solution": 0,
"total": 7
} | 0.428571 | 0.142857 | 0.142857 | 1 | 0.285714 | 0.285714 | 0.285714 | 0 | 7 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 2.571429 | 2.003261 | 0.568167 |
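To make the layer-sharing idea this review describes concrete, here is a schematic of a shared encoder with task-specific classifiers. A plain feed-forward layer stands in for the paper's hierarchical recurrent encoder, and all sizes, vocabulary, and task names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def init(d_in, d_out):
    return {"W": 0.1 * rng.standard_normal((d_in, d_out)), "b": np.zeros(d_out)}

# Lower layers are tied across tasks; each task keeps its own output layer.
shared = {"word_embed": 0.1 * rng.standard_normal((10_000, 50)),
          "encoder": init(50, 64)}
heads = {"ptb_pos": init(64, 45), "twitter_pos": init(64, 25)}

def tag_logits(task, word_ids):
    e = shared["word_embed"][word_ids]                                # (T, 50)
    h = np.tanh(e @ shared["encoder"]["W"] + shared["encoder"]["b"])  # (T, 64)
    return h @ heads[task]["W"] + heads[task]["b"]                    # (T, n_labels)

print(tag_logits("twitter_pos", np.array([3, 17, 256])).shape)  # (3, 25)
```

Training then alternates minibatches from the source and target tasks: gradients from either task update the shared block, while each head only sees its own data. How much of the stack sits in `shared` versus `heads` is the knob the paper varies.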
Transfer Learning for Sequence Tagging with Hierarchical Recurrent Networks | Recent papers have shown that neural networks obtain state-of-the-art performance on several different sequence tagging tasks. One appealing property of such systems is their generality, as excellent performance can be achieved with a unified architecture and without task-specific feature engineering. However, it is unclear if such systems can be used for tasks without large amounts of training data. In this paper we explore the problem of transfer learning for neural sequence taggers, where a source task with plentiful annotations (e.g., POS tagging on Penn Treebank) is used to improve performance on a target task with fewer available annotations (e.g., POS tagging for microblogs). We examine the effects of transfer learning for deep hierarchical recurrent networks across domains, applications, and languages, and show that significant improvement can often be obtained. These improvements lead to improvements over the current state-of-the-art on several well-studied tasks. | This paper presents a clear hierarchical taxonomy of transfer learning methods as applicable to sequence tagging problems. This contextualizes and unifies previous work on specific instances of this taxonomy. Moreover, the paper shows that previously unexplored places in this taxonomy are competitive with or superior to the state of the art in key benchmark problems. It*d be nice to see this explored further, such as highlighting what is the loss as you move from the more restrictive to the less restrictive transfer learning approaches, but I believe this paper is interesting and acceptable as-is. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTransfer Learning for Sequence Tagging with Hierarchical Recurrent Networks\nRecent papers have shown that neural networks obtain state-of-the-art performance on several different sequence tagging tasks. One appealing property of such systems is their generality, as excellent performance can be achieved with a unified architecture and without task-specific feature engineering. However, it is unclear if such systems can be used for tasks without large amounts of training data. In this paper we explore the problem of transfer learning for neural sequence taggers, where a source task with plentiful annotations (e.g., POS tagging on Penn Treebank) is used to improve performance on a target task with fewer available annotations (e.g., POS tagging for microblogs). We examine the effects of transfer learning for deep hierarchical recurrent networks across domains, applications, and languages, and show that significant improvement can often be obtained. These improvements lead to improvements over the current state-of-the-art on several well-studied tasks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents a clear hierarchical taxonomy of transfer learning methods as applicable to sequence tagging problems. This contextualizes and unifies previous work on specific instances of this taxonomy. Moreover, the paper shows that previously unexplored places in this taxonomy are competitive with or superior to the state of the art in key benchmark problems. It*d be nice to see this explored further, such as highlighting what is the loss as you move from the more restrictive to the less restrictive transfer learning approaches, but I believe this paper is interesting and acceptable as-is.",
"role": "assistant"
}
]
| [
"This paper presents a clear hierarchical taxonomy of transfer learning methods as applicable to sequence tagging problems.",
"This contextualizes and unifies previous work on specific instances of this taxonomy.",
"Moreover, the paper shows that previously unexplored places in this taxonomy are competitive with or superior to the state of the art in key benchmark problems.",
"It*d be nice to see this explored further, such as highlighting what is the loss as you move from the more restrictive to the less restrictive transfer learning approaches, but I believe this paper is interesting and acceptable as-is."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 4,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 0,
"suggestion_and_solution": 1,
"total": 4
} | 0 | 0 | 0.5 | 1 | 0.5 | 0 | 0 | 0.25 | 4 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 2.25 | 0.971623 | 1.278377 |
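One way to read the taxonomy this review praises is as a configuration choice: each transfer setting fixes which parameter groups are tied between the source and target models. The group names and assignments below are my paraphrase of the usual cross-domain / cross-application / cross-lingual split, not the paper's notation.

```python
# Most restrictive sharing at the top, least restrictive at the bottom.
SHARED_GROUPS = {
    "cross_domain":      {"char_embed", "char_rnn", "word_embed", "word_rnn", "output"},
    "cross_application": {"char_embed", "char_rnn", "word_embed", "word_rnn"},
    "cross_lingual":     {"char_embed", "char_rnn"},
}

ALL_GROUPS = ("char_embed", "char_rnn", "word_embed", "word_rnn", "output")

def params_for(task, setting, shared, private):
    """Resolve each group to the shared copy when the setting ties it."""
    tied = SHARED_GROUPS[setting]
    return {g: (shared[g] if g in tied else private[task][g]) for g in ALL_GROUPS}
```

The reviewer's question about what is lost when moving from more to less restrictive sharing could then be answered by sweeping `setting` with everything else held fixed.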
Words or Characters? Fine-grained Gating for Reading Comprehension | Previous work combines word-level and character-level representations using concatenation or scalar weighting, which is suboptimal for high-level tasks like reading comprehension. We present a fine-grained gating mechanism to dynamically combine word-level and character-level representations based on properties of the words. We also extend the idea of fine-grained gating to modeling the interaction between questions and paragraphs for reading comprehension. Experiments show that our approach can improve the performance on reading comprehension tasks, achieving new state-of-the-art results on the Children*s Book Test and Who Did What datasets. To demonstrate the generality of our gating mechanism, we also show improved results on a social media tag prediction task. | I think the problem here is well motivated, the approach is insightful and intuitive, and the results are convincing of the approach (although lacking in variety of applications). I like the fact that the authors use POS and NER in terms of an intermediate signal for the decision. Also they compare against a sufficient range of baselines to show the effectiveness of the proposed model. I am also convinced by the authors* answers to my question, I think there is sufficient evidence provided in the results to show the effectiveness of the inductive bias introduced by the fine-grained gating model. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nWords or Characters? Fine-grained Gating for Reading Comprehension\nPrevious work combines word-level and character-level representations using concatenation or scalar weighting, which is suboptimal for high-level tasks like reading comprehension. We present a fine-grained gating mechanism to dynamically combine word-level and character-level representations based on properties of the words. We also extend the idea of fine-grained gating to modeling the interaction between questions and paragraphs for reading comprehension. Experiments show that our approach can improve the performance on reading comprehension tasks, achieving new state-of-the-art results on the Children*s Book Test and Who Did What datasets. To demonstrate the generality of our gating mechanism, we also show improved results on a social media tag prediction task.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nI think the problem here is well motivated, the approach is insightful and intuitive, and the results are convincing of the approach (although lacking in variety of applications). I like the fact that the authors use POS and NER in terms of an intermediate signal for the decision. Also they compare against a sufficient range of baselines to show the effectiveness of the proposed model. I am also convinced by the authors* answers to my question, I think there is sufficient evidence provided in the results to show the effectiveness of the inductive bias introduced by the fine-grained gating model.",
"role": "assistant"
}
]
| [
"I think the problem here is well motivated, the approach is insightful and intuitive, and the results are convincing of the approach (although lacking in variety of applications).",
"I like the fact that the authors use POS and NER in terms of an intermediate signal for the decision.",
"Also they compare against a sufficient range of baselines to show the effectiveness of the proposed model.",
"I am also convinced by the authors* answers to my question, I think there is sufficient evidence provided in the results to show the effectiveness of the inductive bias introduced by the fine-grained gating model."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 4,
"praise": 1,
"presentation_and_reporting": 0,
"results_and_discussion": 3,
"suggestion_and_solution": 0,
"total": 4
} | 0.25 | 0 | 0.25 | 1 | 0.25 | 0 | 0.75 | 0 | 4 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 2.5 | 1.221623 | 1.278377 |
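A minimal sketch of the gating mechanism itself, to pin down what "fine-grained" means here: the gate is a vector with one entry per embedding dimension, computed from properties of the token, rather than a single scalar mixing weight. The feature layout and sizes are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fine_grained_gate(word_vec, char_vec, token_feats, W_g, b_g):
    """Per-dimension interpolation between word- and char-level vectors.

    token_feats would encode token properties such as POS and NER tags
    (the intermediate signal the review highlights) plus frequency features.
    """
    g = sigmoid(token_feats @ W_g + b_g)       # shape: (embed_dim,)
    return g * word_vec + (1.0 - g) * char_vec

rng = np.random.default_rng(0)
d, f = 100, 20                                 # embedding and feature sizes
v = fine_grained_gate(rng.standard_normal(d), rng.standard_normal(d),
                      rng.standard_normal(f), rng.standard_normal((f, d)),
                      np.zeros(d))
print(v.shape)  # (100,)
```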
A Simple but Tough-to-Beat Baseline for Sentence Embeddings | The success of neural network methods for computing word embeddings has motivated methods for generating semantic embeddings of longer pieces of text, such as sentences and paragraphs. Surprisingly, Wieting et al (ICLR*16) showed that such complicated methods are outperformed, especially in out-of-domain (transfer learning) settings, by simpler methods involving mild retraining of word embeddings and basic linear regression. The method of Wieting et al. requires retraining with a substantial labeled dataset such as Paraphrase Database (Ganitkevitch et al., 2013). The current paper goes further, showing that the following completely unsupervised sentence embedding is a formidable baseline: Use word embeddings computed using one of the popular methods on unlabeled corpus like Wikipedia, represent the sentence by a weighted average of the word vectors, and then modify them a bit using PCA/SVD. This weighting improves performance by about 10% to 30% in textual similarity tasks, and beats sophisticated supervised methods including RNN*s and LSTM*s. It even improves Wieting et al.*s embeddings. This simple method should be used as the baseline to beat in future, especially when labeled training data is scarce or nonexistent. The paper also gives a theoretical explanation of the success of the above unsupervised method using a latent variable generative model for sentences, which is a simple extension of the model in Arora et al. (TACL*16) with new *smoothing* terms that allow for words occurring out of context, as well as high probabilities for words like and, not in all contexts. | This paper presents a new theoretically-principled method of representing sentences as vectors. The experiments show that vectors produced by this method perform well on similarity and entailment benchmarks, surpassing some RNN-based methods too. Overall, this is an interesting empirical result, especially since the model is not order-sensitive (as far as I can tell). I would like to see some more discussion on why such a simple model does better than LSTMs at capturing similarity and entailment. Could this be an artifact of these benchmarks? | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nA Simple but Tough-to-Beat Baseline for Sentence Embeddings\nThe success of neural network methods for computing word embeddings has motivated methods for generating semantic embeddings of longer pieces of text, such as sentences and paragraphs. Surprisingly, Wieting et al (ICLR*16) showed that such complicated methods are outperformed, especially in out-of-domain (transfer learning) settings, by simpler methods involving mild retraining of word embeddings and basic linear regression. The method of Wieting et al. requires retraining with a substantial labeled dataset such as Paraphrase Database (Ganitkevitch et al., 2013). The current paper goes further, showing that the following completely unsupervised sentence embedding is a formidable baseline: Use word embeddings computed using one of the popular methods on unlabeled corpus like Wikipedia, represent the sentence by a weighted average of the word vectors, and then modify them a bit using PCA/SVD. This weighting improves performance by about 10% to 30% in textual similarity tasks, and beats sophisticated supervised methods including RNN*s and LSTM*s. It even improves Wieting et al.*s embeddings. This simple method should be used as the baseline to beat in future, especially when labeled training data is scarce or nonexistent. The paper also gives a theoretical explanation of the success of the above unsupervised method using a latent variable generative model for sentences, which is a simple extension of the model in Arora et al. (TACL*16) with new *smoothing* terms that allow for words occurring out of context, as well as high probabilities for words like and, not in all contexts.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents a new theoretically-principled method of representing sentences as vectors. The experiments show that vectors produced by this method perform well on similarity and entailment benchmarks, surpassing some RNN-based methods too. Overall, this is an interesting empirical result, especially since the model is not order-sensitive (as far as I can tell). I would like to see some more discussion on why such a simple model does better than LSTMs at capturing similarity and entailment. Could this be an artifact of these benchmarks?",
"role": "assistant"
}
]
| [
"This paper presents a new theoretically-principled method of representing sentences as vectors.",
"The experiments show that vectors produced by this method perform well on similarity and entailment benchmarks, surpassing some RNN-based methods too.",
"Overall, this is an interesting empirical result, especially since the model is not order-sensitive (as far as I can tell).",
"I would like to see some more discussion on why such a simple model does better than LSTMs at capturing similarity and entailment.",
"Could this be an artifact of these benchmarks?"
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 5,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 2,
"suggestion_and_solution": 1,
"total": 5
} | 0 | 0 | 0.4 | 1 | 0.4 | 0.2 | 0.4 | 0.2 | 5 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.6 | 1.589924 | 1.010076 |
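The abstract's recipe (frequency-weighted averaging, then removing a common component with PCA/SVD) fits in a dozen lines, which is part of why it makes such a strong baseline. This sketch assumes every token has a vector and a unigram probability available; the smoothing constant a is on the order of 1e-3 to 1e-4 in the paper, and the toy data here only shows the shapes.

```python
import numpy as np

def sif_embeddings(sentences, vecs, freqs, a=1e-3):
    """Weighted average with weights a / (a + p(w)), then remove the first
    principal component (the shared common-discourse direction)."""
    X = np.stack([
        np.mean([a / (a + freqs[w]) * vecs[w] for w in s], axis=0)
        for s in sentences
    ])
    u = np.linalg.svd(X, full_matrices=False)[2][0]  # first right singular vector
    return X - np.outer(X @ u, u)                    # project it out of every row

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "dog", "ran"]
vecs = {w: rng.standard_normal(8) for w in vocab}
freqs = {"the": 0.05, "cat": 1e-4, "sat": 1e-4, "dog": 1e-4, "ran": 1e-4}
emb = sif_embeddings([["the", "cat", "sat"], ["the", "dog", "ran"]], vecs, freqs)
print(emb.shape)  # (2, 8)
```

Frequent words such as "the" get down-weighted by a/(a+p), and the removed singular vector plays the role of the smoothing terms in the generative model. Note the method is order-insensitive, which is exactly the reviewer's open question about why it still beats LSTMs on similarity benchmarks.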
Capacity and Trainability in Recurrent Neural Networks | Two potential bottlenecks on the expressiveness of recurrent neural networks (RNNs) are their ability to store information about the task in their parameters, and to store information about the input history in their units. We show experimentally that all common RNN architectures achieve nearly the same per-task and per-unit capacity bounds with careful training, for a variety of tasks and stacking depths. They can store an amount of task information which is linear in the number of parameters, and is approximately 5 bits per parameter. They can additionally store approximately one real number from their input history per hidden unit. We further find that for several tasks it is the per-task parameter capacity bound that determines performance. These results suggest that many previous results comparing RNN architectures are driven primarily by differences in training effectiveness, rather than differences in capacity. Supporting this observation, we compare training difficulty for several architectures, and show that vanilla RNNs are far more difficult to train, yet have slightly higher capacity. Finally, we propose two novel RNN architectures, one of which is easier to train than the LSTM or GRU for deeply stacked architectures. | The authors investigate a variety of existing and two new RNN architectures to obtain more insight into the effectiveness with which these models can store task information in their parameters and activations. The experimental setups look sound. To generalize comparisons between different architectures, it’s necessary to consider multiple tasks and control for the effect of the hyperparameters. This work uses multiple tasks of varying complexities, a principled hyperparameter tuning methodology, and a number of tuning iterations that can currently only be achieved by the computational resources of some of the larger industrial research groups. The descriptions of the models and the objective were very clear to me. The descriptions of the experiments and presentation of the results were not always clear to me, even with the additional details in the appendix available. Most of these issues can easily be resolved by editing the text. For example, in the memory task the scaling of the inputs (and hence also outputs) is not provided, so it’s hard to interpret the squared error scores in Figure 2c. It’s not clear to me what the term ‘unrollings’ refers to in Figure 2b. Is this a time lag with additional hidden state updates between the presentation of the input sequence and the generation of the output? Since the perceptron capacity task is somewhat central to the paper, I think a slightly more precise description of how and when the predictions are computed would be helpful. Due to the large number of graphs, it can be somewhat hard to find the most relevant results. Perhaps some of the more obvious findings (like Figure 1(b-d) given Figure 1a) could move to the appendix to make space for more detailed task descriptions. Novelty is not really the aim of this paper since it mostly investigates existing architectures. To use the mutual information to obtain bits per parameter scores in highly non-linear parameterized functions is new to me. The paper also proposed two new architectures that seem to have practical value.
The paper adds to the currently still somewhat neglected research effort to employ the larger computational resources we currently have towards a better understanding of architectures which were designed when such resources were not yet available. I’d argue that the paper is original enough for that reason alone. The paper provides some interesting new insights into the properties of RNNs. While observed before, it is interesting to see the importance of gated units for maintaining trainability of networks with many layers. It is also interesting to see a potential new use for vanilla RNNs for simpler tasks where a high capacity per parameter may be required due to hardware constraints. The proposed +RNN may turn out to have practical value as well and the hyperparameter robustness results shed some light on the popularity of certain architectures when limited time for HP tuning is available. The large body of results and hyperparameter analysis should be useful to many researchers who want to use RNNs in the future. All in all, I think this paper would make a valuable addition to the ICLR conference but would benefit from some improvements to the text. Pros: * Thorough analysis. * Seemingly proper experiments. * The way of quantifying capacity in neural networks adds to the novelty of the paper. * The results have some practical value and suggest similar analysis of other architectures. * The results provide useful insights into the relative merits of different RNN architectures. Cons: * It’s hard to isolate the most important findings (some plots seem redundant). * Some relevant experimental details are missing. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nCapacity and Trainability in Recurrent Neural Networks\nTwo potential bottlenecks on the expressiveness of recurrent neural networks (RNNs) are their ability to store information about the task in their parameters, and to store information about the input history in their units. We show experimentally that all common RNN architectures achieve nearly the same per-task and per-unit capacity bounds with careful training, for a variety of tasks and stacking depths. They can store an amount of task information which is linear in the number of parameters, and is approximately 5 bits per parameter. They can additionally store approximately one real number from their input history per hidden unit. We further find that for several tasks it is the per-task parameter capacity bound that determines performance. These results suggest that many previous results comparing RNN architectures are driven primarily by differences in training effectiveness, rather than differences in capacity. Supporting this observation, we compare training difficulty for several architectures, and show that vanilla RNNs are far more difficult to train, yet have slightly higher capacity. Finally, we propose two novel RNN architectures, one of which is easier to train than the LSTM or GRU for deeply stacked architectures.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors investigate a variety of existing and two new RNN architectures to obtain more insight about the effectiveness at which these models can store task information in their parameters and activations. The experimental setups look sound. To generalize comparisons between different architectures it’s necessary to consider multiple tasks and control for the effect of the hyperparameters. This work uses multiple tasks of varying complexities, principled hyperparameter tuning methodology and a number of tuning iterations that can currently only be achieved by the computational resources of some of the larger industrial research groups. The descriptions of the models and the objective where very clear to me. The descriptions of the experiments and presentation of the results were not always clear to me at times, even with the additional details in the appendix available. Most of these issues can easily be resolved by editing the text. For example, in the memory task the scaling of the inputs (and hence also outputs) is not provided so it’s hard to interpret the squared error scores in Figure 2c. It’s not clear to me what the term ‘unrollings’ refers to in Figure 2b. Is this a time lag with additional hidden state updates between the presentation of the input sequence and the generation of the output? Since the perceptron capacity task is somewhat central to the paper, I think a slightly more precise description of how and when the predictions are computed would be helpful. Due to the large number of graphs, it can be somewhat hard to find the most relevant results. Perhaps some of the more obvious findings (like Figure 1(b-d) given Figure 1a) could move to the appendix to make space for more detailed task descriptions. Novelty is not really the aim of this paper since it mostly investigates existing architectures. To use the mutual information to obtain bits per parameter scores in highly non-linear parameterized functions is new to me. The paper also proposed to new architectures that seem to have practical value. 
The paper adds to the currently still somewhat neglected research effort to employ the larger computational resources we currently have towards a better understanding of architectures which were designed when such resources were not yet available. I’d argue that the paper is original enough for that reason alone. The paper provides some interesting new insights into the properties of RNNs. While observed before, it is interesting to see the importance of gated units for maintaining trainability of networks with many layers. It is also interesting to see a potential new use for vanilla RNNs for simpler tasks where a high capacity per parameter may be required due to hardware constraints. The proposed +RNN may turn out to have practical value as well and the hyperparameter robustness results shed some light on the popularity of certain architectures when limited time for HP tuning is available. The large body of results and hyperparameter analysis should be useful to many researchers who want to use RNNs in the future. All in all, I think this paper would make a valuable addition to the ICLR conference but would benefit from some improvements to the text. Pros: * Thorough analysis. * Seemingly proper experiments. * The way of quantifying capacity in neural networks adds to the novelty of the paper. * The results have some practical value and suggest similar analysis of other architectures. * The results provide useful insights into the relative merits of different RNN architectures. Cons: * It’s hard to isolate the most important findings (some plots seem redundant). * Some relevant experimental details are missing.",
"role": "assistant"
}
]
| [
"The authors investigate a variety of existing and two new RNN architectures to obtain more insight about the effectiveness at which these models can store task information in their parameters and activations.",
"The experimental setups look sound.",
"To generalize comparisons between different architectures it’s necessary to consider multiple tasks and control for the effect of the hyperparameters.",
"This work uses multiple tasks of varying complexities, principled hyperparameter tuning methodology and a number of tuning iterations that can currently only be achieved by the computational resources of some of the larger industrial research groups.",
"The descriptions of the models and the objective where very clear to me.",
"The descriptions of the experiments and presentation of the results were not always clear to me at times, even with the additional details in the appendix available.",
"Most of these issues can easily be resolved by editing the text.",
"For example, in the memory task the scaling of the inputs (and hence also outputs) is not provided so it’s hard to interpret the squared error scores in Figure 2c.",
"It’s not clear to me what the term ‘unrollings’ refers to in Figure 2b.",
"Is this a time lag with additional hidden state updates between the presentation of the input sequence and the generation of the output?",
"Since the perceptron capacity task is somewhat central to the paper, I think a slightly more precise description of how and when the predictions are computed would be helpful.",
"Due to the large number of graphs, it can be somewhat hard to find the most relevant results.",
"Perhaps some of the more obvious findings (like Figure 1(b-d) given Figure 1a) could move to the appendix to make space for more detailed task descriptions.",
"Novelty is not really the aim of this paper since it mostly investigates existing architectures.",
"To use the mutual information to obtain bits per parameter scores in highly non-linear parameterized functions is new to me.",
"The paper also proposed to new architectures that seem to have practical value.",
"The paper adds to the currently still somewhat neglected research effort to employ the larger computational resources we currently have towards a better understanding of architectures which were designed when such resources were not yet available.",
"I’d argue that the paper is original enough for that reason alone.",
"The paper provides some interesting new insights into the properties of RNNs.",
"While observed before, it is interesting to see the importance of gated units for maintaining trainability of networks with many layers.",
"It is also interesting to see a potential new use for vanilla RNNs for simpler tasks where a high capacity per parameter may be required due to hardware constraints.",
"The proposed +RNN may turn out to have practical value as well and the hyperparameter robustness results shed some light on the popularity of certain architectures when limited time for HP tuning is available.",
"The large body of results and hyperparameter analysis should be useful to many researchers who want to use RNNs in the future.",
"All in all, I think this paper would make a valuable addition to the ICLR conference but would benefit from some improvements to the text.",
"Pros: * Thorough analysis.",
"* Seemingly proper experiments.",
"* The way of quantifying capacity in neural networks adds to the novelty of the paper.",
"* The results have some practical value and suggest similar analysis of other architectures.",
"* The results provide useful insights into the relative merits of different RNN architectures.",
"Cons: * It’s hard to isolate the most important findings (some plots seem redundant).",
"* Some relevant experimental details are missing."
] | {
"criticism": 7,
"example": 3,
"importance_and_relevance": 13,
"materials_and_methods": 19,
"praise": 9,
"presentation_and_reporting": 8,
"results_and_discussion": 12,
"suggestion_and_solution": 7,
"total": 31
} | 0.225806 | 0.096774 | 0.419355 | 0.612903 | 0.290323 | 0.258065 | 0.387097 | 0.225806 | 31 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.516129 | -2.597378 | 5.113507 |
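A note on the derived columns: in the row above, each float in the trailing numeric block matches a criteria_count entry divided by total (e.g. criticism 7 / 31 = 0.225806), each has_* flag is 1 when the corresponding count is nonzero, reward_value equals the sum of the eight fractions, and reward_value_length_adjusted equals reward_value minus length_penalty. The sketch below is an assumption read off the rows, not code from the dataset itself; the formula behind length_penalty is not recoverable from these rows, so it is treated as a given input.

```python
# Sketch (assumed column meanings, not part of the dump): how the trailing
# numeric columns appear to be derived from the criteria_count block above.
criteria_count = {
    "criticism": 7, "example": 3, "importance_and_relevance": 13,
    "materials_and_methods": 19, "praise": 9, "presentation_and_reporting": 8,
    "results_and_discussion": 12, "suggestion_and_solution": 7, "total": 31,
}
length_penalty = 5.113507  # given per row; its own derivation is unknown

total = criteria_count["total"]
categories = [k for k in criteria_count if k != "total"]

# Per-category fraction of sentences, e.g. criticism: 7 / 31 = 0.225806
fractions = {k: criteria_count[k] / total for k in categories}

# Binary flags: 1 if the review has at least one sentence in the category
flags = {f"has_{k}": int(criteria_count[k] > 0) for k in categories}

# reward_value sums the fractions; the adjusted value subtracts the penalty
reward_value = sum(fractions.values())
reward_value_length_adjusted = reward_value - length_penalty

print(round(reward_value, 6))                  # 2.516129
print(round(reward_value_length_adjusted, 6))  # -2.597378
```

The same identity checks out on the rows that follow, e.g. 2.6 - 1.010076 = 1.589924 for the Dynamic Computation Graphs review below.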
Deep Learning with Dynamic Computation Graphs | Neural networks that compute over graph structures are a natural fit for problems in a variety of domains, including natural language (parse trees) and cheminformatics (molecular graphs). However, since the computation graph has a different shape and size for every input, such networks do not directly support batched training or inference. They are also difficult to implement in popular deep learning libraries, which are based on static data-flow graphs. We introduce a technique called dynamic batching, which not only batches together operations between different input graphs of dissimilar shape, but also between different nodes within a single input graph. The technique allows us to create static graphs, using popular libraries, that emulate dynamic computation graphs of arbitrary shape and size. We further present a high-level library of compositional blocks that simplifies the creation of dynamic graph models. Using the library, we demonstrate concise and batch-wise parallel implementations for a variety of models from the literature. | The paper describes a novel technique to improve the efficiency of computation graphs in deep learning frameworks. An impressive speedup can be observed in their implementation within TensorFlow. The content is presented with sufficient clarity, although some more graphical illustrations could be useful. This work is relevant in order to achieve the highest performance in neural network training. Pros: - significant speed improvements through dynamic batching - source code provided Cons: - the effect on a large real-world task (ASR, SMT) would allow the reader to put the improvements better into context - presentation/visualisation can be improved | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDeep Learning with Dynamic Computation Graphs\nNeural networks that compute over graph structures are a natural fit for problems in a variety of domains, including natural language (parse trees) and cheminformatics (molecular graphs). However, since the computation graph has a different shape and size for every input, such networks do not directly support batched training or inference. They are also difficult to implement in popular deep learning libraries, which are based on static data-flow graphs. We introduce a technique called dynamic batching, which not only batches together operations between different input graphs of dissimilar shape, but also between different nodes within a single input graph. The technique allows us to create static graphs, using popular libraries, that emulate dynamic computation graphs of arbitrary shape and size. We further present a high-level library of compositional blocks that simplifies the creation of dynamic graph models. Using the library, we demonstrate concise and batch-wise parallel implementations for a variety of models from the literature.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper describes a novel technique to improve the efficiency of computation graphs in deep learning frameworks. An impressive speedup can be observed in their implementation within TensorFlow. The content is presented with sufficient clarity, although some more graphical illustrations could be useful. This work is relevant in order to achieve highest performance in neural network training. Pros: - significant speed improvements through dynamic batching - source code provided Cons: - the effect on a large real-world (ASR, SMT) would allow the reader to put the improvements better into context - presentation/vizualisation can be improved",
"role": "assistant"
}
]
| [
"The paper describes a novel technique to improve the efficiency of computation graphs in deep learning frameworks.",
"An impressive speedup can be observed in their implementation within TensorFlow.",
"The content is presented with sufficient clarity, although some more graphical illustrations could be useful.",
"This work is relevant in order to achieve highest performance in neural network training.",
"Pros: - significant speed improvements through dynamic batching - source code provided Cons: - the effect on a large real-world (ASR, SMT) would allow the reader to put the improvements better into context - presentation/vizualisation can be improved"
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 3,
"praise": 4,
"presentation_and_reporting": 2,
"results_and_discussion": 0,
"suggestion_and_solution": 2,
"total": 5
} | 0 | 0 | 0.4 | 0.6 | 0.8 | 0.4 | 0 | 0.4 | 5 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 2.6 | 1.589924 | 1.010076 |
Query-Reduction Networks for Question Answering | In this paper, we study the problem of question answering when reasoning over multiple facts is required. We propose Query-Reduction Network (QRN), a variant of Recurrent Neural Network (RNN) that effectively handles both short-term (local) and long-term (global) sequential dependencies to reason over multiple facts. QRN considers the context sentences as a sequence of state-changing triggers, and reduces the original query to a more informed query as it observes each trigger (context sentence) through time. Our experiments show that QRN produces the state-of-the-art results in bAbI QA and dialog tasks, and in a real goal-oriented dialog dataset. In addition, QRN formulation allows parallelization on RNN's time axis, saving an order of magnitude in time complexity for training and inference. | The paper proposed a simple and effective model for QA. The paper is easy to read and the result is impressive on the synthetic and real dataset. The one question is that the paper is called query-reduction, but there is no place to show this reduction explicitly. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nQuery-Reduction Networks for Question Answering\nIn this paper, we study the problem of question answering when reasoning over multiple facts is required. We propose Query-Reduction Network (QRN), a variant of Recurrent Neural Network (RNN) that effectively handles both short-term (local) and long-term (global) sequential dependencies to reason over multiple facts. QRN considers the context sentences as a sequence of state-changing triggers, and reduces the original query to a more informed query as it observes each trigger (context sentence) through time. Our experiments show that QRN produces the state-of-the-art results in bAbI QA and dialog tasks, and in a real goal-oriented dialog dataset. In addition, QRN formulation allows parallelization on RNN*s time axis, saving an order of magnitude in time complexity for training and inference.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposed a simple and effective model for QA. The paper is easy to read and result is impressive on the synthetic and real dataset. The one question is the paper is called query-reduction, but there is no place to show this reduction explicitly.",
"role": "assistant"
}
]
| [
"The paper proposed a simple and effective model for QA.",
"The paper is easy to read and result is impressive on the synthetic and real dataset.",
"The one question is the paper is called query-reduction, but there is no place to show this reduction explicitly."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 0,
"materials_and_methods": 2,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 1,
"suggestion_and_solution": 0,
"total": 3
} | 0.333333 | 0 | 0 | 0.666667 | 0.333333 | 0.666667 | 0.333333 | 0 | 3 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 2.333333 | 0.75509 | 1.578243 |
Query-Reduction Networks for Question Answering | In this paper, we study the problem of question answering when reasoning over multiple facts is required. We propose Query-Reduction Network (QRN), a variant of Recurrent Neural Network (RNN) that effectively handles both short-term (local) and long-term (global) sequential dependencies to reason over multiple facts. QRN considers the context sentences as a sequence of state-changing triggers, and reduces the original query to a more informed query as it observes each trigger (context sentence) through time. Our experiments show that QRN produces the state-of-the-art results in bAbI QA and dialog tasks, and in a real goal-oriented dialog dataset. In addition, QRN formulation allows parallelization on RNN's time axis, saving an order of magnitude in time complexity for training and inference. | The paper proposes a simplistic recurrent architecture that has a very small temporal dependence in its definition (which, in turn, allows it to be parallelized across the temporal axis). Paper is clear to follow, the architecture has appropriate motivation and the experiments seem to be very thorough and over an appropriate range of synthetic and real datasets. Results are very impressive on their own especially considering the simplicity of the model. Activation and gate visualizations are informative. Overall, the work is sufficient as a conference contribution. I appreciate the effort by authors in responding to questions and incorporating revisions. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nQuery-Reduction Networks for Question Answering\nIn this paper, we study the problem of question answering when reasoning over multiple facts is required. We propose Query-Reduction Network (QRN), a variant of Recurrent Neural Network (RNN) that effectively handles both short-term (local) and long-term (global) sequential dependencies to reason over multiple facts. QRN considers the context sentences as a sequence of state-changing triggers, and reduces the original query to a more informed query as it observes each trigger (context sentence) through time. Our experiments show that QRN produces the state-of-the-art results in bAbI QA and dialog tasks, and in a real goal-oriented dialog dataset. In addition, QRN formulation allows parallelization on RNN*s time axis, saving an order of magnitude in time complexity for training and inference.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes a simplistic recurrent architecture that has a very small temporal dependence in its definition (which, in turn, allows it to be parallelized across the temporal axis). Paper is clear to follow, the architecture has appropriate motivation and the experiments seem to be very thorough and over an appropriate range of synthetic and real datasets. Results are very impressive on its own especially considering the simplicity of the model. Activation and gate visualizations are informative. Overall, the work is sufficient as a conference contribution. I appreciate the effort by authors in responding to questions and incorporating revisions.",
"role": "assistant"
}
]
| [
"The paper proposes a simplistic recurrent architecture that has a very small temporal dependence in its definition (which, in turn, allows it to be parallelized across the temporal axis).",
"Paper is clear to follow, the architecture has appropriate motivation and the experiments seem to be very thorough and over an appropriate range of synthetic and real datasets.",
"Results are very impressive on its own especially considering the simplicity of the model.",
"Activation and gate visualizations are informative.",
"Overall, the work is sufficient as a conference contribution.",
"I appreciate the effort by authors in responding to questions and incorporating revisions."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 4,
"praise": 5,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 0,
"total": 6
} | 0 | 0 | 0.333333 | 0.666667 | 0.833333 | 0.166667 | 0.166667 | 0 | 6 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 2.166667 | 1.393328 | 0.773339 |
Query-Reduction Networks for Question Answering | In this paper, we study the problem of question answering when reasoning over multiple facts is required. We propose Query-Reduction Network (QRN), a variant of Recurrent Neural Network (RNN) that effectively handles both short-term (local) and long-term (global) sequential dependencies to reason over multiple facts. QRN considers the context sentences as a sequence of state-changing triggers, and reduces the original query to a more informed query as it observes each trigger (context sentence) through time. Our experiments show that QRN produces the state-of-the-art results in bAbI QA and dialog tasks, and in a real goal-oriented dialog dataset. In addition, QRN formulation allows parallelization on RNN's time axis, saving an order of magnitude in time complexity for training and inference. | This work introduces a new RNN model and shows how it can be used for QA in bAbI-like settings. I think the model is simple and interesting, and the results are strong. I would like the authors to also show results training on all tasks simultaneously, but other than that, I think it is an interesting model and it is nicely described and tested. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nQuery-Reduction Networks for Question Answering\nIn this paper, we study the problem of question answering when reasoning over multiple facts is required. We propose Query-Reduction Network (QRN), a variant of Recurrent Neural Network (RNN) that effectively handles both short-term (local) and long-term (global) sequential dependencies to reason over multiple facts. QRN considers the context sentences as a sequence of state-changing triggers, and reduces the original query to a more informed query as it observes each trigger (context sentence) through time. Our experiments show that QRN produces the state-of-the-art results in bAbI QA and dialog tasks, and in a real goal-oriented dialog dataset. In addition, QRN formulation allows parallelization on RNN*s time axis, saving an order of magnitude in time complexity for training and inference.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis work introduces a new RNN model and shows how it can be used for QA in bAbI-like settings. I think the model is simple and interesting, and the results are strong. I would like the authors to also show results training on all tasks simultaneously, but other than that, I think it is an interesting model and it is nicely described and tested.",
"role": "assistant"
}
]
| [
"This work introduces a new RNN model and shows how it can be used for QA in bAbI-like settings.",
"I think the model is simple and interesting, and the results are strong.",
"I would like the authors to also show results training on all tasks simultaneously, but other than that, I think it is an interesting model and it is nicely described and tested."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 3,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 2,
"suggestion_and_solution": 1,
"total": 3
} | 0 | 0 | 0.666667 | 1 | 0.666667 | 0.333333 | 0.666667 | 0.333333 | 3 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 3.666667 | 2.088424 | 1.578243 |
Incorporating long-range consistency in CNN-based texture generation | Gatys et al. (2015) showed that pair-wise products of features in a convolutional network are a very effective representation of image textures. We propose a simple modification to that representation which makes it possible to incorporate long-range structure into image generation, and to render images that satisfy various symmetry constraints. We show how this can greatly improve rendering of regular textures and of images that contain other kinds of symmetric structure. We also present applications to inpainting and season transfer. | The paper investigates a simple extension of Gatys et al. CNN-based texture descriptors for image generation. Similar to Gatys et al., the method uses as texture descriptor the empirical intra-channel correlation matrix of the CNN feature response at some layer of a deep network. Differently from Gatys et al., longer range correlations are measured by introducing a shift between the correlated feature responses, which translates into a simple modification of the original architecture. The idea is simple but has interesting effects on the generated textures and can be extended to transformations other than translation. While longer range correlations could be accounted for by considering the response of deeper CNN features in the original method by Gatys et al., the authors show that modelling them explicitly using shallower features is more effective, which is reasonable. An important limitation that this work shares with most of its peers is the lack of a principled quantitative evaluation protocol, such that judging the effectiveness of the approach remains almost entirely a qualitative affair. While this should not be considered a significant drawback of the paper due to the objective difficulty of solving this open issue, nevertheless it is somewhat limiting that no principled evaluation method could be devised and implemented. The authors suggest that, as future work, a possible evaluation method could be based on a classification task -- this is a potentially interesting approach that merits some further investigation. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nIncorporating long-range consistency in CNN-based texture generation\nGatys et al. (2015) showed that pair-wise products of features in a convolutional network are a very effective representation of image textures. We propose a simple modification to that representation which makes it possible to incorporate long-range structure into image generation, and to render images that satisfy various symmetry constraints. We show how this can greatly improve rendering of regular textures and of images that contain other kinds of symmetric structure. We also present applications to inpainting and season transfer.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper investigates a simple extension of Gatys et al. CNN-based texture descriptors for image generation. Similar to Gatys et al., the method uses as texture descriptor the empirical intra-channel correlation matrix of the CNN feature response at some layer of a deep network. Differently from Gatys et al., longer range correlations are measured by introducing a shift between the correlated feature responses, which translates in a simple modification of the original architecture. The idea is simple but has interesting effects on the generated textures and can be extended to transformations other than translation. While longer range correlations could be accounted for by considering the response of deeper CNN features in the original method by Gatys et al., the authors show that modelling them explicitly using shallower features is more effective, which is reasonable. An important limitation that this work shares with most of its peers is the lack of a principled quantitative evaluation protocol, such that judging the effectiveness of the approach remains almost entirely a qualitative affair. While this should not be considered a significant drawback of the paper due to the objective difficulty of solving this open issue, nevertheless it is somewhat limiting that no principled evaluation method could be devised and implemented. The authors suggest that, as future work, a possible evaluation method could be based on a classification task -- this is a potentially interesting approach that merits some further investigation.",
"role": "assistant"
}
]
| [
"The paper investigates a simple extension of Gatys et al.",
"CNN-based texture descriptors for image generation.",
"Similar to Gatys et al., the method uses as texture descriptor the empirical intra-channel correlation matrix of the CNN feature response at some layer of a deep network.",
"Differently from Gatys et al., longer range correlations are measured by introducing a shift between the correlated feature responses, which translates in a simple modification of the original architecture.",
"The idea is simple but has interesting effects on the generated textures and can be extended to transformations other than translation.",
"While longer range correlations could be accounted for by considering the response of deeper CNN features in the original method by Gatys et al., the authors show that modelling them explicitly using shallower features is more effective, which is reasonable.",
"An important limitation that this work shares with most of its peers is the lack of a principled quantitative evaluation protocol, such that judging the effectiveness of the approach remains almost entirely a qualitative affair.",
"While this should not be considered a significant drawback of the paper due to the objective difficulty of solving this open issue, nevertheless it is somewhat limiting that no principled evaluation method could be devised and implemented.",
"The authors suggest that, as future work, a possible evaluation method could be based on a classification task -- this is a potentially interesting approach that merits some further investigation."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 4,
"materials_and_methods": 6,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 4,
"suggestion_and_solution": 3,
"total": 9
} | 0.222222 | 0 | 0.444444 | 0.666667 | 0.222222 | 0.111111 | 0.444444 | 0.333333 | 9 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.444444 | 2.191926 | 0.252519 |
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation task that generates one audio sample at a time. We show that our model which profits from combining memory-less modules, namely autoregressive multilayer perceptron, and stateful recurrent neural networks in a hierarchical structure is de facto powerful to capture the underlying sources of variations in temporal domain for very long time on three datasets of different nature. Human evaluation on the generated samples indicate that our model is preferred over competing models. We also show how each component of the model contributes to the exhibited performance. | The paper proposed a novel SampleRNN to directly model waveform signals and achieved better performance both in terms of objective test NLL and subjective A/B tests. As mentioned in the discussions, the current status of the paper lacks many details in describing their model. Hopefully, this will be addressed in the final version. The authors attempted to compare with the wavenet model, but they didn't manage to get a model better than the baseline LSTM-RNN, which makes all the comparisons to wavenets less convincing. Hence, instead of wasting time and space comparing to wavenet, detailing the proposed model would be better. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSampleRNN: An Unconditional End-to-End Neural Audio Generation Model\nIn this paper we propose a novel model for unconditional audio generation task that generates one audio sample at a time. We show that our model which profits from combining memory-less modules, namely autoregressive multilayer perceptron, and stateful recurrent neural networks in a hierarchical structure is de facto powerful to capture the underlying sources of variations in temporal domain for very long time on three datasets of different nature. Human evaluation on the generated samples indicate that our model is preferred over competing models. We also show how each component of the model contributes to the exhibited performance.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposed a novel SampleRNN to directly model waveform signals and achieved better performance both in terms of objective test NLL and subjective A/B tests. As mentioned in the discussions, the current status of the paper lack plenty of details in describing their model. Hopefully, this will be addressed in the final version. The authors attempted to compare with wavenet model, but they didn*t manage to get a model better than the baseline LSTM-RNN, which makes all the comparisons to wavenets less convincing. Hence, instead of wasting time and space comparing to wavenet, detailing the proposed model would be better.",
"role": "assistant"
}
]
| [
"The paper proposed a novel SampleRNN to directly model waveform signals and achieved better performance both in terms of objective test NLL and subjective A/B tests.",
"As mentioned in the discussions, the current status of the paper lack plenty of details in describing their model.",
"Hopefully, this will be addressed in the final version.",
"The authors attempted to compare with wavenet model, but they didn*t manage to get a model better than the baseline LSTM-RNN, which makes all the comparisons to wavenets less convincing.",
"Hence, instead of wasting time and space comparing to wavenet, detailing the proposed model would be better."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 0,
"materials_and_methods": 4,
"praise": 0,
"presentation_and_reporting": 3,
"results_and_discussion": 1,
"suggestion_and_solution": 2,
"total": 5
} | 0.4 | 0 | 0 | 0.8 | 0 | 0.6 | 0.2 | 0.4 | 5 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 2.4 | 1.389924 | 1.010076 |
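For reuse, each row's conversations field is a two-turn list pairing the review prompt with the assistant response, whose content embeds the fixed thinking trace ahead of the review text. A minimal parsing sketch follows, assuming the field has been extracted as a JSON string; the abbreviated conversations_json literal is a stand-in for illustration, not a value from the dump.

```python
import json

# Hypothetical sketch: turn one row's `conversations` field into a
# (prompt, response) pair. `conversations_json` is an abbreviated
# stand-in for the bracketed two-turn list shown in the rows above.
conversations_json = """
[
  {"role": "user", "content": "You are a member of the scientific community tasked with peer review. ..."},
  {"role": "assistant", "content": "<think>Okay, to do a decent peer review ...</think>\\n\\n\\n\\nThe paper proposed a novel SampleRNN ..."}
]
"""

turns = json.loads(conversations_json)
prompt = next(t["content"] for t in turns if t["role"] == "user")
response = next(t["content"] for t in turns if t["role"] == "assistant")

# The assistant turn embeds the fixed thinking trace before the review text,
# so splitting on the closing tag recovers the review alone.
review = response.split("</think>")[-1].strip()
print(review)  # "The paper proposed a novel SampleRNN ..."
```

Splitting on the closing </think> tag suffices here because the trace template is constant across rows.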
Metacontrol for Adaptive Imagination-Based Optimization | Many machine learning systems are built to solve the hardest examples of a particular task, which often makes them large and expensive to run---especially with respect to the easier examples, which might require much less computation. For an agent with a limited computational budget, this *one-size-fits-all* approach may result in the agent wasting valuable computation on easy examples, while not spending enough on hard examples. Rather than learning a single, fixed policy for solving all instances of a task, we introduce a metacontroller which learns to optimize a sequence of *imagined* internal simulations over predictive models of the world in order to construct a more informed, and more economical, solution. The metacontroller component is a model-free reinforcement learning agent, which decides both how many iterations of the optimization procedure to run, as well as which model to consult on each iteration. The models (which we call *experts*) can be state transition models, action-value functions, or any other mechanism that provides information useful for solving the task, and can be learned on-policy or off-policy in parallel with the metacontroller. When the metacontroller, controller, and experts were trained with *interaction networks* (Battaglia et al., 2016) as expert models, our approach was able to solve a challenging decision-making problem under complex non-linear dynamics. The metacontroller learned to adapt the amount of computation it performed to the difficulty of the task, and learned how to choose which experts to consult by factoring in both their reliability and individual computational resource costs. This allowed the metacontroller to achieve a lower overall cost (task loss plus computational cost) than more traditional fixed policy approaches. These results demonstrate that our approach is a powerful framework for using rich forward models for efficient model-based reinforcement learning. | A well-written paper and an interesting construction - I thoroughly enjoyed reading it. I found the formalism a bit hard to follow without specific examples - that is, it wasn't clear to me at first what the specific components in figure 1A were. What constitutes the controller, a control, the optimizer, what was being optimized, etc., in specific cases. Algorithm boxes may have been helpful, especially in the case of your experiments. A description of existing models that fall under your conceptual framework might help as well. In Practical Bayesian Optimization of Machine Learning Algorithms, Snoek, Larochelle and Adams propose optimizing with respect to expected improvement per second to balance computation cost and performance loss. It might be interesting to see how this falls into your framework. Experimental results were presented clearly and well illustrated the usefulness of the metacontroller. I'm curious to see the results of using more metaexperts. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? 
Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMetacontrol for Adaptive Imagination-Based Optimization\nMany machine learning systems are built to solve the hardest examples of a particular task, which often makes them large and expensive to run---especially with respect to the easier examples, which might require much less computation. For an agent with a limited computational budget, this *one-size-fits-all* approach may result in the agent wasting valuable computation on easy examples, while not spending enough on hard examples. Rather than learning a single, fixed policy for solving all instances of a task, we introduce a metacontroller which learns to optimize a sequence of *imagined* internal simulations over predictive models of the world in order to construct a more informed, and more economical, solution. The metacontroller component is a model-free reinforcement learning agent, which decides both how many iterations of the optimization procedure to run, as well as which model to consult on each iteration. The models (which we call *experts*) can be state transition models, action-value functions, or any other mechanism that provides information useful for solving the task, and can be learned on-policy or off-policy in parallel with the metacontroller. When the metacontroller, controller, and experts were trained with *interaction networks* (Battaglia et al., 2016) as expert models, our approach was able to solve a challenging decision-making problem under complex non-linear dynamics. The metacontroller learned to adapt the amount of computation it performed to the difficulty of the task, and learned how to choose which experts to consult by factoring in both their reliability and individual computational resource costs. This allowed the metacontroller to achieve a lower overall cost (task loss plus computational cost) than more traditional fixed policy approaches. These results demonstrate that our approach is a powerful framework for using rich forward models for efficient model-based reinforcement learning.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nA well written paper and an interesting construction - I thoroughly enjoyed reading it. I found the formalism a bit hard to follow without specific examples- that is, it wasn*t clear to me at first what the specific components in figure 1A were. What constitutes the controller, a control, the optimizer, what was being optimized, etc., in specific cases. Algorithm boxes may have been helpful, especially in the case of your experiments. A description of existing models that fall under your conceptual framework might help as well. In Practical Bayesian Optimization of Machine Learning Algorithms, Snoek, Larochelle and Adams propose optimizing with respect to expected improvement per second to balance computation cost and performance loss. It might be interesting to see how this falls into your framework. Experimental results were presented clearly and well illustrated the usefulness of the metacontroller. I*m curious to see the results of using more metaexperts.",
"role": "assistant"
}
]
| [
"A well written paper and an interesting construction - I thoroughly enjoyed reading it.",
"I found the formalism a bit hard to follow without specific examples- that is, it wasn*t clear to me at first what the specific components in figure 1A were.",
"What constitutes the controller, a control, the optimizer, what was being optimized, etc., in specific cases.",
"Algorithm boxes may have been helpful, especially in the case of your experiments.",
"A description of existing models that fall under your conceptual framework might help as well.",
"In Practical Bayesian Optimization of Machine Learning Algorithms, Snoek, Larochelle and Adams propose optimizing with respect to expected improvement per second to balance computation cost and performance loss.",
"It might be interesting to see how this falls into your framework.",
"Experimental results were presented clearly and well illustrated the usefulness of the metacontroller.",
"I*m curious to see the results of using more metaexperts."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 6,
"praise": 3,
"presentation_and_reporting": 4,
"results_and_discussion": 3,
"suggestion_and_solution": 3,
"total": 9
} | 0.111111 | 0 | 0.333333 | 0.666667 | 0.333333 | 0.444444 | 0.333333 | 0.333333 | 9 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.555556 | 2.303037 | 0.252519 |
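The derived columns that follow each criteria_count block are arithmetic on the counts, and the rows here make the pattern checkable: each per-category value is that category's count divided by total, each has_* flag marks a nonzero count, reward_value is the sum of the eight per-category fractions, and reward_value_length_adjusted is reward_value minus length_penalty. The Python sketch below reproduces the row above; it is an illustration, not the dataset's actual build code. The length_penalty formula is not shown anywhere in the dump, so it is taken as a given input, and the function name derive_columns is an invented placeholder.

```python
CATEGORIES = [
    "criticism", "example", "importance_and_relevance",
    "materials_and_methods", "praise", "presentation_and_reporting",
    "results_and_discussion", "suggestion_and_solution",
]

def derive_columns(criteria_count, length_penalty):
    """Recompute a row's derived columns from its criteria_count dict."""
    total = criteria_count["total"]  # number of review sentences
    row = {}
    for cat in CATEGORIES:
        # Fraction of sentences carrying this label; a sentence can carry
        # several labels, so these fractions may sum to more than 1.
        row[cat] = criteria_count[cat] / total
        row["has_" + cat] = int(criteria_count[cat] > 0)
    row["reward_value"] = sum(row[cat] for cat in CATEGORIES)
    row["reward_value_length_adjusted"] = row["reward_value"] - length_penalty
    return row

# For the row above, counts (1, 0, 3, 6, 3, 4, 3, 3) over total 9 with
# length_penalty 0.252519 reproduce 0.111111, ..., 0.333333, the
# reward_value 2.555556, and the adjusted value 2.303037.
```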
Metacontrol for Adaptive Imagination-Based Optimization | Many machine learning systems are built to solve the hardest examples of a particular task, which often makes them large and expensive to run---especially with respect to the easier examples, which might require much less computation. For an agent with a limited computational budget, this *one-size-fits-all* approach may result in the agent wasting valuable computation on easy examples, while not spending enough on hard examples. Rather than learning a single, fixed policy for solving all instances of a task, we introduce a metacontroller which learns to optimize a sequence of *imagined* internal simulations over predictive models of the world in order to construct a more informed, and more economical, solution. The metacontroller component is a model-free reinforcement learning agent, which decides both how many iterations of the optimization procedure to run, as well as which model to consult on each iteration. The models (which we call *experts*) can be state transition models, action-value functions, or any other mechanism that provides information useful for solving the task, and can be learned on-policy or off-policy in parallel with the metacontroller. When the metacontroller, controller, and experts were trained with *interaction networks* (Battaglia et al., 2016) as expert models, our approach was able to solve a challenging decision-making problem under complex non-linear dynamics. The metacontroller learned to adapt the amount of computation it performed to the difficulty of the task, and learned how to choose which experts to consult by factoring in both their reliability and individual computational resource costs. This allowed the metacontroller to achieve a lower overall cost (task loss plus computational cost) than more traditional fixed policy approaches. These results demonstrate that our approach is a powerful framework for using rich forward models for efficient model-based reinforcement learning. | This paper introduces an approach to reinforcement learning and control wherein, rather than training a single controller to perform a task, a metacontroller with access to a base-level controller and a number of accessory « experts » is utilized. The job of the metacontroller is to decide how many times to call the controller and the experts, and which expert to invoke at which iteration. (The controller is a bit special in that in addition to being provided the current state, it is given a summary of the history of previous calls to itself and previous experts.) The sequence of controls and expert advice is embedded into a fixed-size vector through an LSTM. The method is tested on an N-body control task, where it is shown that there are benefits to multiple iterations (« pondering ») even for simple experts, and that the metacontroller can deliver accuracy and computational cost benefits over fixed-iteration controls. The paper is in general well written, and reasonably easy to follow. As the authors note, the topic of metareasoning has been studied to some extent in AI, but its use as a differentiable and fully trainable component within an RL system appears new. At this stage, it is difficult to evaluate the impact of this kind of approach: the overall model architecture is intriguing and probably merits publication, but whether and how this will scale to other domains remains the subject of future work. The experimental validation is interesting and well carried out, but remains of limited scope. 
Moreover, given such a complex architecture, there should be a discussion of the training difficulties and convergence issues, if any. Here are a few specific comments, questions and suggestions: 1) in Figure 1A, the meaning of the graphical language should be explained. For instance, there are arrows of different thickness and line style — do these mean different things? 2) in Figure 3, the caption should better explain the contents of the figure. For example, what do the colours of the different lines refer to? Also, in the top row, there are dots and error bars that are given, but this is explained only in the « bottom row » part. This makes understanding this figure difficult. 3) in Figure 4, the shaded area represents a 95% confidence interval on the regression line; in addition, it would be helpful to give a standard error on the regression slope (to verify that it excludes zero, i.e. the slope is significant), as well as a fraction of explained variance (R^2). 4) in Figure 5, the fraction of samples using the MLP expert does not appear to decrease monotonically with the increasing cost of the MLP expert (i.e. the bottom left part of the right plot, with a few red-shaded boxes). Why is that? Is there lots of variance in these fractions from experiment to experiment? 5) the supplementary materials are very helpful. Thank you for all these details. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMetacontrol for Adaptive Imagination-Based Optimization\nMany machine learning systems are built to solve the hardest examples of a particular task, which often makes them large and expensive to run---especially with respect to the easier examples, which might require much less computation. For an agent with a limited computational budget, this *one-size-fits-all* approach may result in the agent wasting valuable computation on easy examples, while not spending enough on hard examples. Rather than learning a single, fixed policy for solving all instances of a task, we introduce a metacontroller which learns to optimize a sequence of *imagined* internal simulations over predictive models of the world in order to construct a more informed, and more economical, solution. The metacontroller component is a model-free reinforcement learning agent, which decides both how many iterations of the optimization procedure to run, as well as which model to consult on each iteration. The models (which we call *experts*) can be state transition models, action-value functions, or any other mechanism that provides information useful for solving the task, and can be learned on-policy or off-policy in parallel with the metacontroller. When the metacontroller, controller, and experts were trained with *interaction networks* (Battaglia et al., 2016) as expert models, our approach was able to solve a challenging decision-making problem under complex non-linear dynamics. The metacontroller learned to adapt the amount of computation it performed to the difficulty of the task, and learned how to choose which experts to consult by factoring in both their reliability and individual computational resource costs. This allowed the metacontroller to achieve a lower overall cost (task loss plus computational cost) than more traditional fixed policy approaches. These results demonstrate that our approach is a powerful framework for using rich forward models for efficient model-based reinforcement learning.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper introduces an approach to reinforcement learning and control wherein, rather than training a single controller to perform a task, a metacontroller with access to a base-level controller and a number of accessory « experts » is utilized. The job of the metacontroller is to decide how many times to call the controller and the experts, and which expert to invoke at which iteration. (The controller is a bit special in that in addition to being provided the current state, it is given a summary of the history of previous calls to itself and previous experts.) The sequence of controls and expert advice is embedded into a fixed-size vector through an LSTM. The method is tested on an N-body control task, where it is shown that there are benefits to multiple iterations (« pondering ») even for simple experts, and that the metacontroller can deliver accuracy and computational cost benefits over fixed-iteration controls. The paper is in general well written, and reasonably easy to follow. As the authors note, the topic of metareasoning has been studied to some extent in AI, but its use as a differentiable and fully trainable component within an RL system appears new. At this stage, it is difficult to evaluate the impact of this kind of approach: the overall model architecture is intriguing and probably merits publication, but whether and how this will scale to other domains remains the subject of future work. The experimental validation is interesting and well carried out, but remains of limited scope. Moreover, given such a complex architecture, there should be a discussion of the training difficulties and convergence issues, if any. Here are a few specific comments, questions and suggestions: 1) in Figure 1A, the meaning of the graphical language should be explained. For instance, there are arrows of different thickness and line style — do these mean different things? 2) in Figure 3, the caption should better explain the contents of the figure. For example, what do the colours of the different lines refer to? 
Also, in the top row, there are dots and error bars that are given, but this is explained only in the « bottom row » part. This makes understanding this figure difficult. 3) in Figure 4, the shaded area represents a 95% confidence interval on the regression line; in addition, it would be helpful to give a standard error on the regression slope (to verify that it excludes zero, i.e. the slope is significant), as well as a fraction of explained variance (R^2). 4) in Figure 5, the fraction of samples using the MLP expert does not appear to decrease monotonically with the increasing cost of the MLP expert (i.e. the bottom left part of the right plot, with a few red-shaded boxes). Why is that? Is there lots of variance in these fractions from experiment to experiment? 5) the supplementary materials are very helpful. Thank you for all these details.",
"role": "assistant"
}
]
| [
"This paper introduces an approach to reinforcement learning and control wherein, rather than training a single controller to perform a task, a metacontroller with access to a base-level controller and a number of accessory « experts » is utilized.",
"The job of the metacontroller is to decide how many times to call the controller and the experts, and which expert to invoke at which iteration.",
"(The controller is a bit special in that in addition to being provided the current state, it is given a summary of the history of previous calls to itself and previous experts.)",
"The sequence of controls and expert advice is embedded into a fixed-size vector through an LSTM.",
"The method is tested on an N-body control task, where it is shown that there are benefits to multiple iterations (« pondering ») even for simple experts, and that the metacontroller can deliver accuracy and computational cost benefits over fixed-iteration controls.",
"The paper is in general well written, and reasonably easy to follow.",
"As the authors note, the topic of metareasoning has been studied to some extent in AI, but its use as a differentiable and fully trainable component within an RL system appears new.",
"At this stage, it is difficult to evaluate the impact of this kind of approach: the overall model architecture is intriguing and probably merits publication, but whether and how this will scale to other domains remains the subject of future work.",
"The experimental validation is interesting and well carried out, but remains of limited scope.",
"Moreover, given such a complex architecture, there should be a discussion of the training difficulties and convergence issues, if any.",
"Here are a few specific comments, questions and suggestions: 1) in Figure 1A, the meaning of the graphical language should be explained.",
"For instance, there are arrows of different thickness and line style — do these mean different things?",
"2) in Figure 3, the caption should better explain the contents of the figure.",
"For example, what do the colours of the different lines refer to?",
"Also, in the top row, there are dots and error bars that are given, but this is explained only in the « bottom row » part.",
"This makes understanding this figure difficult.",
"3) in Figure 4, the shaded area represents a 95% confidence interval on the regression line; in addition, it would be helpful to give a standard error on the regression slope (to verify that it excludes zero, i.e.",
"the slope is significant), as well as a fraction of explained variance (R^2).",
"4) in Figure 5, the fraction of samples using the MLP expert does not appear to decrease monotonically with the increasing cost of the MLP expert (i.e.",
"the bottom left part of the right plot, with a few red-shaded boxes).",
"Why is that?",
"Is there lots of variance in these fractions from experiment to experiment?",
"5) the supplementary materials are very helpful.",
"Thank you for all these details."
] | {
"criticism": 4,
"example": 8,
"importance_and_relevance": 5,
"materials_and_methods": 13,
"praise": 4,
"presentation_and_reporting": 9,
"results_and_discussion": 4,
"suggestion_and_solution": 5,
"total": 24
} | 0.166667 | 0.333333 | 0.208333 | 0.541667 | 0.166667 | 0.375 | 0.166667 | 0.208333 | 24 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.166667 | 0.256993 | 1.909674 |
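One preprocessing detail worth flagging in the review_sentences field: entries such as the Figure 4 comment in the row above are split at "i.e.", which is what a naive punctuation-based sentence splitter produces. The dataset's actual preprocessing is not shown anywhere in the dump, so the following is only a guessed approximation that reproduces that artifact, not the real pipeline.

```python
import re

def split_review(review):
    """Approximate the review_sentences field with a naive splitter.

    Splitting after '.', '!' or '?' followed by whitespace reproduces
    the artifact visible above, where abbreviations such as "i.e."
    break one sentence across two list entries.
    """
    parts = re.split(r"(?<=[.!?])\s+", review.strip())
    return [p for p in parts if p]
```

Note that the criteria_count totals count these split fragments as separate sentences, so any reimplementation of the splitter has to match the original exactly for the totals, fractions, and reward values to line up.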
Metacontrol for Adaptive Imagination-Based Optimization | Many machine learning systems are built to solve the hardest examples of a particular task, which often makes them large and expensive to run---especially with respect to the easier examples, which might require much less computation. For an agent with a limited computational budget, this *one-size-fits-all* approach may result in the agent wasting valuable computation on easy examples, while not spending enough on hard examples. Rather than learning a single, fixed policy for solving all instances of a task, we introduce a metacontroller which learns to optimize a sequence of *imagined* internal simulations over predictive models of the world in order to construct a more informed, and more economical, solution. The metacontroller component is a model-free reinforcement learning agent, which decides both how many iterations of the optimization procedure to run, as well as which model to consult on each iteration. The models (which we call *experts*) can be state transition models, action-value functions, or any other mechanism that provides information useful for solving the task, and can be learned on-policy or off-policy in parallel with the metacontroller. When the metacontroller, controller, and experts were trained with *interaction networks* (Battaglia et al., 2016) as expert models, our approach was able to solve a challenging decision-making problem under complex non-linear dynamics. The metacontroller learned to adapt the amount of computation it performed to the difficulty of the task, and learned how to choose which experts to consult by factoring in both their reliability and individual computational resource costs. This allowed the metacontroller to achieve a lower overall cost (task loss plus computational cost) than more traditional fixed policy approaches. These results demonstrate that our approach is a powerful framework for using rich forward models for efficient model-based reinforcement learning. | Thank you for an interesting read on an approach to choose computational models based on the kind of examples given. Pros - As an idea, using a meta controller to decide the computational model and the number of steps to reach the conclusion is keeping in line with solving an important practical issue of increased computational times of a simple example. - The approach seems similar to an ensemble learning construct. But instead of random experts and a fixed computational complexity during testing time, the architecture is designed to estimate hyper-parameters like the number of ponder steps, which gives this approach a distinct advantage. Cons - Even though the metacontroller is designed to choose the best amongst the given experts, its complete capability has not been explored yet. It would be interesting to see the architecture handle more than 2 experts. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. 
Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMetacontrol for Adaptive Imagination-Based Optimization\nMany machine learning systems are built to solve the hardest examples of a particular task, which often makes them large and expensive to run---especially with respect to the easier examples, which might require much less computation. For an agent with a limited computational budget, this *one-size-fits-all* approach may result in the agent wasting valuable computation on easy examples, while not spending enough on hard examples. Rather than learning a single, fixed policy for solving all instances of a task, we introduce a metacontroller which learns to optimize a sequence of *imagined* internal simulations over predictive models of the world in order to construct a more informed, and more economical, solution. The metacontroller component is a model-free reinforcement learning agent, which decides both how many iterations of the optimization procedure to run, as well as which model to consult on each iteration. The models (which we call *experts*) can be state transition models, action-value functions, or any other mechanism that provides information useful for solving the task, and can be learned on-policy or off-policy in parallel with the metacontroller. When the metacontroller, controller, and experts were trained with *interaction networks* (Battaglia et al., 2016) as expert models, our approach was able to solve a challenging decision-making problem under complex non-linear dynamics. The metacontroller learned to adapt the amount of computation it performed to the difficulty of the task, and learned how to choose which experts to consult by factoring in both their reliability and individual computational resource costs. This allowed the metacontroller to achieve a lower overall cost (task loss plus computational cost) than more traditional fixed policy approaches. These results demonstrate that our approach is a powerful framework for using rich forward models for efficient model-based reinforcement learning.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThank you for an interesting read on an approach to choose computational models based on kind of examples given. Pros - As an idea, using a meta controller to decide the computational model and the number of steps to reach the conclusion is keeping in line with solving an important practical issue of increased computational times of a simple example. - The approach seems similar to an ensemble learning construct. But instead of random experts and a fixed computational complexity during testing time the architecture is designed to estimate hyper-parameters like number of ponder steps which gives this approach a distinct advantage. Cons - Even though the metacontroller is designed to choose the best amongst the given experts, its complete capability has not been explored yet. It would be interesting to see the architecture handle more than 2 experts.",
"role": "assistant"
}
]
| [
"Thank you for an interesting read on an approach to choose computational models based on kind of examples given.",
"Pros - As an idea, using a meta controller to decide the computational model and the number of steps to reach the conclusion is keeping in line with solving an important practical issue of increased computational times of a simple example.",
"- The approach seems similar to an ensemble learning construct.",
"But instead of random experts and a fixed computational complexity during testing time the architecture is designed to estimate hyper-parameters like number of ponder steps which gives this approach a distinct advantage.",
"Cons - Even though the metacontroller is designed to choose the best amongst the given experts, its complete capability has not been explored yet.",
"It would be interesting to see the architecture handle more than 2 experts."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 6,
"praise": 1,
"presentation_and_reporting": 0,
"results_and_discussion": 2,
"suggestion_and_solution": 1,
"total": 6
} | 0.166667 | 0 | 0.333333 | 1 | 0.166667 | 0 | 0.333333 | 0.166667 | 6 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.166667 | 1.393328 | 0.773339 |
Lossy Image Compression with Compressive Autoencoders | We propose a new approach to the problem of optimizing autoencoders for lossy image compression. New media formats, changing hardware technology, as well as diverse requirements and content types create a need for compression algorithms which are more flexible than existing codecs. Autoencoders have the potential to address this need, but are difficult to optimize directly due to the inherent non-differentiability of the compression loss. We here show that minimal changes to the loss are sufficient to train deep autoencoders competitive with JPEG 2000 and outperforming recently proposed approaches based on RNNs. Our network is furthermore computationally efficient thanks to a sub-pixel architecture, which makes it suitable for high-resolution images. This is in contrast to previous work on autoencoders for compression using coarser approximations, shallower architectures, computationally expensive methods, or focusing on small images. | This work proposes a new approach for image compression using auto encoders. The results are impressive, besting the state of the art in this field. Pros: + Very clear paper. It should be possible to replicate these results should one be inclined to do so. + The results, when compared to other work in this field, are very promising. I need to emphasize, and I think the authors should have emphasized this fact as well: this is very new technology and it should not be surprising it's not better than the state of the art in image compression. It's definitely better than other neural network approaches to compression, though. Cons: - The training procedure seems clunky. It requires multiple training stages, freezing weights, etc. - The motivation behind Figure 1 is a bit strange, as it's not clear what it's trying to illustrate, and may confuse readers (it talks about effects on JPEG, but the paper discusses a neural network architecture, not DCT quantization) | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLossy Image Compression with Compressive Autoencoders\nWe propose a new approach to the problem of optimizing autoencoders for lossy image compression. New media formats, changing hardware technology, as well as diverse requirements and content types create a need for compression algorithms which are more flexible than existing codecs. Autoencoders have the potential to address this need, but are difficult to optimize directly due to the inherent non-differentiabilty of the compression loss. We here show that minimal changes to the loss are sufficient to train deep autoencoders competitive with JPEG 2000 and outperforming recently proposed approaches based on RNNs. Our network is furthermore computationally efficient thanks to a sub-pixel architecture, which makes it suitable for high-resolution images. This is in contrast to previous work on autoencoders for compression using coarser approximations, shallower architectures, computationally expensive methods, or focusing on small images.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis work proposes a new approach for image compression using auto encoders. The results are impressive, besting the state of the art in this field. Pros: + Very clear paper. It should be possible to replicate these results should one be inclined to do so. + The results, when compared to other work in this field are very promising. I need to emphasize, and I think the authors should have emphasized this fact as well: this is very new technology and it should not be surprising it*s not better than the state of the art in image compression. It*s definitely better than other neural network approaches to compression, though. Cons: - The training procedure seems clunky. It requires multiple training stages, freezing weights, etc. - The motivation behind Figure 1 is a bit strange, as it*s not clear what it*s trying to illustrate, and may confuse readers (it talks about effects on JPEG, but the paper discusses a neural network architecture, not DCT quantization)",
"role": "assistant"
}
]
| [
"This work proposes a new approach for image compression using auto encoders.",
"The results are impressive, besting the state of the art in this field.",
"Pros: + Very clear paper.",
"It should be possible to replicate these results should one be inclined to do so.",
"+ The results, when compared to other work in this field are very promising.",
"I need to emphasize, and I think the authors should have emphasized this fact as well: this is very new technology and it should not be surprising it*s not better than the state of the art in image compression.",
"It*s definitely better than other neural network approaches to compression, though.",
"Cons: - The training procedure seems clunky.",
"It requires multiple training stages, freezing weights, etc.",
"- The motivation behind Figure 1 is a bit strange, as it*s not clear what it*s trying to illustrate, and may confuse readers (it talks about effects on JPEG, but the paper discusses a neural network architecture, not DCT quantization)"
] | {
"criticism": 3,
"example": 1,
"importance_and_relevance": 4,
"materials_and_methods": 4,
"praise": 2,
"presentation_and_reporting": 3,
"results_and_discussion": 3,
"suggestion_and_solution": 2,
"total": 10
} | 0.3 | 0.1 | 0.4 | 0.4 | 0.2 | 0.3 | 0.3 | 0.2 | 10 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.2 | 2.057958 | 0.142042 |
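Each row stores the same review three times: once flat, once inside the conversations JSON, and once as review_sentences, so a consumer of this dump needs only one canonical copy. Below is a minimal sketch for recovering the review from the conversations field, assuming that field arrives as a JSON string and keeps the two-turn structure and the <think>...</think> prefix seen in every row here.

```python
import json
import re

def review_from_conversations(conversations_json):
    """Extract the bare review text from a row's conversations field."""
    turns = json.loads(conversations_json)
    # Every row has one user turn (prompt plus title and abstract) and one
    # assistant turn whose content is "<think>...</think>" plus the review.
    assistant = next(t["content"] for t in turns if t["role"] == "assistant")
    return re.sub(r"^<think>.*?</think>\s*", "", assistant, flags=re.DOTALL)
```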
A recurrent neural network without chaos | We introduce an exceptionally simple gated recurrent neural network (RNN) that achieves performance comparable to well-known gated architectures, such as LSTMs and GRUs, on the word-level language modeling task. We prove that our model has simple, predictable and non-chaotic dynamics. This stands in stark contrast to more standard gated architectures, whose underlying dynamical systems exhibit chaotic behavior. | I think the authors provide an interesting direction for understanding and maybe constructing recurrent models that are easier to interpret. It is not clear where such direction will lead, but I think it could be an interesting starting point for future work, one that is worth exploring. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nA recurrent neural network without chaos\nWe introduce an exceptionally simple gated recurrent neural network (RNN) that achieves performance comparable to well-known gated architectures, such as LSTMs and GRUs, on the word-level language modeling task. We prove that our model has simple, predicable and non-chaotic dynamics. This stands in stark contrast to more standard gated architectures, whose underlying dynamical systems exhibit chaotic behavior.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nI think the authors provide an interesting direction for understanding and maybe constructing recurrent models that are easier to interpret. Is not clear where such direction will lead but I think it could be an interesting starting point for future work, one that worth exploring.",
"role": "assistant"
}
]
| [
"I think the authors provide an interesting direction for understanding and maybe constructing recurrent models that are easier to interpret.",
"Is not clear where such direction will lead but I think it could be an interesting starting point for future work, one that worth exploring."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 1,
"praise": 1,
"presentation_and_reporting": 0,
"results_and_discussion": 0,
"suggestion_and_solution": 1,
"total": 2
} | 0.5 | 0 | 1 | 0.5 | 0.5 | 0 | 0 | 0.5 | 2 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 3 | 1.090326 | 1.909674 |
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer | The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost. | Paper Strengths: -- Elegant use of MoE for expanding model capacity and enabling training large models necessary for exploiting very large datasets in a computationally feasible manner -- The effective batch size for training the MoE also increased drastically -- Interesting experimental results on the effects of increasing the number of MoEs, which is expected. Paper Weaknesses: --- there are many different ways of increasing model capacity to enable the exploitation of very large datasets; it would be very nice to discuss the use of MoE and other alternatives in terms of computational efficiency and other factors. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOutrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer\nThe capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nPaper Strengths: -- Elegant use of MoE for expanding model capacity and enabling training large models necessary for exploiting very large datasets in a computationally feasible manner -- The effective batch size for training the MoE drastically increased also -- Interesting experimental results on the effects of increasing the number of MoEs, which is expected. Paper Weaknesses: --- there are many different ways of increasing model capacity to enable the exploitation of very large datasets; it would be very nice to discuss the use of MoE and other alternatives in terms of computational efficiency and other factors.",
"role": "assistant"
}
]
| [
"Paper Strengths: -- Elegant use of MoE for expanding model capacity and enabling training large models necessary for exploiting very large datasets in a computationally feasible manner -- The effective batch size for training the MoE drastically increased also -- Interesting experimental results on the effects of increasing the number of MoEs, which is expected.",
"Paper Weaknesses: --- there are many different ways of increasing model capacity to enable the exploitation of very large datasets; it would be very nice to discuss the use of MoE and other alternatives in terms of computational efficiency and other factors."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 0,
"materials_and_methods": 2,
"praise": 1,
"presentation_and_reporting": 0,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 2
} | 0 | 0 | 0 | 1 | 0.5 | 0 | 0.5 | 0.5 | 2 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 2.5 | 0.590326 | 1.909674 |
Tree-structured decoding with doubly-recurrent neural networks | We propose a neural network architecture for generating tree-structured objects from encoded representations. The core of the method is a doubly-recurrent neural network that models separately the width and depth recurrences across the tree, and combines them inside each cell to generate an output. The topology of the tree is explicitly modeled, allowing the network to predict both content and topology of the tree when decoding. That is, given only an encoded vector representation, the network is able to simultaneously generate a tree from it and predict labels for the nodes. We test this architecture in an encoder-decoder framework, where we train a network to encode a sentence as a vector, and then generate a tree structure from it. The experimental results show the effectiveness of this architecture at recovering latent tree structure in sequences and at mapping sentences to simple functional programs. | This paper proposes a variant of a recurrent neural network that has two orthogonal temporal dimensions that can be used as a decoder to generate tree structures (including the topology) in an encoder-decoder setting. The architecture is well motivated and I can see several applications (in addition to what's presented in the paper) that need to generate tree structures given unstructured data. One weakness of the paper is the limitation of experiments. The IFTTT dataset seems to be an interesting, appropriate application, and there is also a synthetic dataset; however, it would be more interesting to see more natural language applications with syntactic tree structures. Still, I consider the experiments sufficient as a first step to showcase a novel architecture. A strength is that the authors experiment with different design decisions when building the topology predictor components of the architecture, about when / how to decide to terminate, as opposed to making a single arbitrary choice. I see future applications of this architecture and it seems to have interesting directions for future work, so I suggest its acceptance as a conference contribution. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTree-structured decoding with doubly-recurrent neural networks\nWe propose a neural network architecture for generating tree-structured objects from encoded representations. The core of the method is a doubly-recurrent neural network that models separately the width and depth recurrences across the tree, and combines them inside each cell to generate an output. The topology of the tree is explicitly modeled, allowing the network to predict both content and topology of the tree when decoding. That is, given only an encoded vector representation, the network is able to simultaneously generate a tree from it and predict labels for the nodes. We test this architecture in an encoder-decoder framework, where we train a network to encode a sentence as a vector, and then generate a tree structure from it. The experimental results show the effectiveness of this architecture at recovering latent tree structure in sequences and at mapping sentences to simple functional programs.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a variant of a recurrent neural network that has two orthogonal temporal dimensions that can be used as a decoder to generate tree structures (including the topology) in an encoder-decoder setting. The architecture is well motivated and I can see several applications (in addition to what*s presented in the paper) that need to generate tree structures given an unstructured data. One weakness of the paper is the limitation of experiments. IFTTT dataset seems to be an interesting appropriate application, and there is also a synthetic dataset, however it would be more interesting to see more natural language applications with syntactic tree structures. Still, I consider the experiments sufficient as a first step to showcase a novel architecture. A strength is that the authors experiment with different design decisions when building the topology predictor components of the architecture, about when / how to decide to terminate, as opposed to making a single arbitrary choice. I see future applications of this architecture and it seems to have interesting directions for future work so I suggest its acceptance as a conference contribution.",
"role": "assistant"
}
]
| [
"This paper proposes a variant of a recurrent neural network that has two orthogonal temporal dimensions that can be used as a decoder to generate tree structures (including the topology) in an encoder-decoder setting.",
"The architecture is well motivated and I can see several applications (in addition to what*s presented in the paper) that need to generate tree structures given an unstructured data.",
"One weakness of the paper is the limitation of experiments.",
"IFTTT dataset seems to be an interesting appropriate application, and there is also a synthetic dataset, however it would be more interesting to see more natural language applications with syntactic tree structures.",
"Still, I consider the experiments sufficient as a first step to showcase a novel architecture.",
"A strength is that the authors experiment with different design decisions when building the topology predictor components of the architecture, about when / how to decide to terminate, as opposed to making a single arbitrary choice.",
"I see future applications of this architecture and it seems to have interesting directions for future work so I suggest its acceptance as a conference contribution."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 4,
"materials_and_methods": 6,
"praise": 3,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 3,
"total": 7
} | 0.142857 | 0 | 0.571429 | 0.857143 | 0.428571 | 0.142857 | 0.142857 | 0.428571 | 7 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.714286 | 2.146118 | 0.568167 |
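The record above describes a doubly-recurrent decoder that combines a depth (ancestral) and a width (fraternal) recurrence and predicts tree topology alongside node labels. The sketch below is a minimal illustration of that cell structure, not the paper's model: the plain-tanh recurrences, the hidden size H, the label vocabulary V, and all weight names are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
H, V = 8, 5  # hidden size and label vocabulary, chosen arbitrarily
Wa, Ua = rng.normal(0, 0.1, (2, H, H))   # depth-recurrence weights
Wf, Uf = rng.normal(0, 0.1, (2, H, H))   # width-recurrence weights
Wo = rng.normal(0, 0.1, (V, 2 * H))      # label readout over both states
wd, ww = rng.normal(0, 0.1, (2, H))      # topology (stop/continue) heads

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def drnn_cell(h_parent, h_sibling, x):
    ha = np.tanh(Wa @ h_parent + Ua @ x)    # ancestral state: parent -> child
    hf = np.tanh(Wf @ h_sibling + Uf @ x)   # fraternal state: sibling -> sibling
    logits = Wo @ np.concatenate([ha, hf])  # node-label scores
    p_child = sigmoid(wd @ ha)              # topology: produce children?
    p_sibling = sigmoid(ww @ hf)            # topology: produce a next sibling?
    return logits, p_child, p_sibling, ha, hf

encoded = rng.normal(size=H)  # stand-in for the encoded sentence vector
logits, p_c, p_s, _, _ = drnn_cell(encoded, np.zeros(H), np.zeros(H))
print(int(np.argmax(logits)), round(float(p_c), 3), round(float(p_s), 3))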
Introspection:Accelerating Neural Network Training By Learning Weight Evolution | Neural Networks are function approximators that have achieved state-of-the-art accuracy in numerous machine learning tasks. In spite of their great success in terms of accuracy, their large training time makes it difficult to use them for various tasks. In this paper, we explore the idea of learning weight evolution pattern from a simple network for accelerating training of novel neural networks. We use a neural network to learn the training pattern from MNIST classification and utilize it to accelerate training of neural networks used for CIFAR-10 and ImageNet classification. Our method has a low memory footprint and is computationally efficient. This method can also be used with other optimizers to give faster convergence. The results indicate a general trend in the weight evolution during training of neural networks. | The paper reads well and the idea is new. Sadly, many details needed for replicating the results (such as layer sizes of the CNNs, learning rates) are missing. The training of the introspection network could have been described in more detail. Also, I think that a model which is closer to the current state-of-the-art should have been used in the ImageNet experiments. That would have made the results more convincing. Due to the novelty of the idea, I recommend the paper. I would increase the rating if an updated draft addresses the mentioned issues. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nIntrospection:Accelerating Neural Network Training By Learning Weight Evolution\nNeural Networks are function approximators that have achieved state-of-the-art accuracy in numerous machine learning tasks. In spite of their great success in terms of accuracy, their large training time makes it difficult to use them for various tasks. In this paper, we explore the idea of learning weight evolution pattern from a simple network for accelerating training of novel neural networks. We use a neural network to learn the training pattern from MNIST classification and utilize it to accelerate training of neural networks used for CIFAR-10 and ImageNet classification. Our method has a low memory footprint and is computationally efficient. This method can also be used with other optimizers to give faster convergence. The results indicate a general trend in the weight evolution during training of neural networks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper reads well and the idea is new. Sadly, many details needed for replicating the results (such as layer sizes of the CNNs, learning rates) are missing. The training of the introspection network could have been described in more detail. Also, I think that a model, which is closer to the current state-of-the-art should have been used in the ImageNet experiments. That would have made the results more convincing. Due to the novelty of the idea, I recommend the paper. I would increase the rating if an updated draft addresses the mentioned issues.",
"role": "assistant"
}
]
| [
"The paper reads well and the idea is new.",
"Sadly, many details needed for replicating the results (such as layer sizes of the CNNs, learning rates) are missing.",
"The training of the introspection network could have been described in more detail.",
"Also, I think that a model, which is closer to the current state-of-the-art should have been used in the ImageNet experiments.",
"That would have made the results more convincing.",
"Due to the novelty of the idea, I recommend the paper.",
"I would increase the rating if an updated draft addresses the mentioned issues."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 3,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 2,
"suggestion_and_solution": 5,
"total": 7
} | 0.142857 | 0 | 0.142857 | 0.428571 | 0.285714 | 0.142857 | 0.285714 | 0.714286 | 7 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.142857 | 1.57469 | 0.568167 |
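As a toy illustration of the idea reviewed above (a small network that watches a weight's past values and jumps it forward in training), the sketch below fits a tiny regressor on synthetic weight trajectories. The saturating-curve trajectories, checkpoint times, and layer sizes are all assumptions made up for the example; nothing here reproduces the paper's MNIST-trained introspection network.
import numpy as np

rng = np.random.default_rng(1)

def weight_histories(n):
    # Synthetic saturating weight trajectories sampled at steps 0, 2, 4;
    # the regression target is the same weight's value at step 10.
    a, b = rng.normal(size=(2, n))
    t = np.array([0.0, 2.0, 4.0])
    hist = a[:, None] * (1 - np.exp(-(b[:, None] ** 2) * t))
    target = a * (1 - np.exp(-(b ** 2) * 10.0))
    return hist, target

X, y = weight_histories(2048)
W1, b1, w2 = rng.normal(0, 0.3, (3, 16)), np.zeros(16), rng.normal(0, 0.3, 16)
for _ in range(2000):                    # plain full-batch gradient descent on MSE
    h = np.maximum(X @ W1 + b1, 0)       # one ReLU hidden layer
    g = 2 * (h @ w2 - y) / len(y)
    gh = np.outer(g, w2) * (h > 0)
    W1 -= 0.05 * X.T @ gh
    b1 -= 0.05 * gh.sum(axis=0)
    w2 -= 0.05 * h.T @ g
pred = np.maximum(X @ W1 + b1, 0) @ w2   # 'introspected' jump-ahead weights
print("training MSE:", round(float(np.mean((pred - y) ** 2)), 4))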
Lie-Access Neural Turing Machines | External neural memory structures have recently become a popular tool for algorithmic deep learning (Graves et al. 2014; Weston et al. 2014). These models generally utilize differentiable versions of traditional discrete memory-access structures (random access, stacks, tapes) to provide the storage necessary for computational tasks. In this work, we argue that these neural memory systems lack specific structure important for relative indexing, and propose an alternative model, Lie-access memory, that is explicitly designed for the neural setting. In this paradigm, memory is accessed using a continuous head in a key-space manifold. The head is moved via Lie group actions, such as shifts or rotations, generated by a controller, and memory access is performed by linear smoothing in key space. We argue that Lie groups provide a natural generalization of discrete memory structures, such as Turing machines, as they provide inverse and identity operators while maintaining differentiability. To experiment with this approach, we implement a simplified Lie-access neural Turing machine (LANTM) with different Lie groups. We find that this approach is able to perform well on a range of algorithmic tasks. | The paper introduces a novel memory mechanism for NTMs based on differentiable Lie groups. This allows placing memory elements as points on a manifold, while still allowing training with backpropagation. It's a more general version of the NTM memory, and possibly allows for training more efficient addressing schemes. Pros: - novel and interesting idea for memory access - nicely written Cons: - need to manually specify the Lie group to use (it would be better if network could learn the best way of accessing memory) - not clear if this really works better than standard NTM (compared only to simplified version) - not clear if this is useful in practice (no comparison on real tasks) | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLie-Access Neural Turing Machines\nExternal neural memory structures have recently become a popular tool for algorithmic deep learning (Graves et al. 2014; Weston et al. 2014). These models generally utilize differentiable versions of traditional discrete memory-access structures (random access, stacks, tapes) to provide the storage necessary for computational tasks. In this work, we argue that these neural memory systems lack specific structure important for relative indexing, and propose an alternative model, Lie-access memory, that is explicitly designed for the neural setting. In this paradigm, memory is accessed using a continuous head in a key-space manifold. The head is moved via Lie group actions, such as shifts or rotations, generated by a controller, and memory access is performed by linear smoothing in key space. We argue that Lie groups provide a natural generalization of discrete memory structures, such as Turing machines, as they provide inverse and identity operators while maintaining differentiability. To experiment with this approach, we implement a simplified Lie-access neural Turing machine (LANTM) with different Lie groups. We find that this approach is able to perform well on a range of algorithmic tasks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper introduces a novel memory mechanism for NTMs based on differentiable Lie groups. This allows to place memory elements as points on a manifold, while still allowing training with backpropagation. It*s a more general version of the NTM memory, and possibly allows for training a more efficient addressing schemes. Pros: - novel and interesting idea for memory access - nicely written Cons: - need to manually specify the Lie group to use (it would be better if network could learn the best way of accessing memory) - not clear if this really works better than standard NTM (compared only to simplified version) - not clear if this is useful in practice (no comparison on real tasks)",
"role": "assistant"
}
]
| [
"The paper introduces a novel memory mechanism for NTMs based on differentiable Lie groups.",
"This allows to place memory elements as points on a manifold, while still allowing training with backpropagation.",
"It*s a more general version of the NTM memory, and possibly allows for training a more efficient addressing schemes.",
"Pros: - novel and interesting idea for memory access - nicely written Cons: - need to manually specify the Lie group to use (it would be better if network could learn the best way of accessing memory) - not clear if this really works better than standard NTM (compared only to simplified version) - not clear if this is useful in practice (no comparison on real tasks)"
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 0,
"materials_and_methods": 4,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 0,
"suggestion_and_solution": 2,
"total": 4
} | 0.25 | 0 | 0 | 1 | 0.25 | 0.5 | 0 | 0.5 | 4 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 2.5 | 1.221623 | 1.278377 |
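To make the review's summary concrete, here is a minimal sketch of Lie-access reading: slots sit at points in a 2-D key space, the head is a continuous point moved by invertible group actions (a shift, plus the rotation the abstract mentions), and a read is a softmax smoothing over key distances. The key layout, temperature, and smoothing kernel are illustrative assumptions, not the LANTM's actual parameterization.
import numpy as np

keys = np.array([[0., 0.], [1., 0.], [2., 0.], [3., 0.]])  # slot positions
values = np.eye(4)                                          # stored vectors

def read(head, temperature=8.0):
    d2 = ((keys - head) ** 2).sum(axis=1)
    w = np.exp(-temperature * d2)   # smooth weighting by key-space distance
    return (w / w.sum()) @ values

def shift(head, delta):             # Lie action: translation, inverse is -delta
    return head + delta

def rotate(head, theta):            # Lie action: rotation about the origin
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]]) @ head

head = np.array([0., 0.])
print(np.round(read(head), 3))                  # reads slot 0
head = shift(head, np.array([1., 0.]))          # 'move right' action
print(np.round(read(head), 3))                  # now reads slot 1
head = shift(head, np.array([-1., 0.]))         # inverse action returns exactly
print(np.round(read(head), 3))                  # back to slot 0
print(np.round(rotate(np.array([1., 0.]), np.pi / 2), 3))  # rotation generator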
Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep’s computation on the previous timestep’s output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks. | This paper points out that you can take an LSTM and make the gates only a function of the last few inputs - h_t = f(x_t, x_{t-1}, ..., x_{t-T}) - instead of the standard - h_t = f(x_t, h_{t-1}) -, and that if you do so the networks can run faster and work better. You're moving compute from a serial stream to a parallel stream and also making the serial stream more parallel. Unfortunately, this simple, effective and interesting concept is somewhat obscured by confusing language. - I would encourage the authors to improve the explanation of the model. - Another improvement might be to explicitly go over some of the big Oh calculations, or give an example of exactly where the speed improvements are coming from. - Otherwise the experiments seem adequate and I enjoyed this paper. This could be a high value contribution and become a standard neural network component if it can be replicated and if it turns out to work reliably in multiple settings. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nQuasi-Recurrent Neural Networks\nRecurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep’s computation on the previous timestep’s output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper points out that you can take an LSTM and make the gates only a function of the last few inputs - h_t = f(x_t, x_{t-1}, ...x_{t-T}) - instead of the standard - h_t = f(x_t, h_{t-1}) -, and that if you do so the networks can run faster and work better. You*re moving compute from a serial stream to a parallel stream and also making the serial stream more parallel. Unfortunately, this simple, effective and interesting concept is somewhat obscured by confusing language. - I would encourage the authors to improve the explanation of the model. - Another improvement might be to explicitly go over some of the big Oh calculations, or give an example of exactly where the speed improvements are coming from. - Otherwise the experiments seem adequate and I enjoyed this paper. This could be a high value contribution and become a standard neural network component if it can be replicated and if it turns out to work reliably in multiple settings.",
"role": "assistant"
}
]
| [
"This paper points out that you can take an LSTM and make the gates only a function of the last few inputs - h_t = f(x_t, x_{t-1}, ...x_{t-T}) - instead of the standard - h_t = f(x_t, h_{t-1}) -, and that if you do so the networks can run faster and work better.",
"You*re moving compute from a serial stream to a parallel stream and also making the serial stream more parallel.",
"Unfortunately, this simple, effective and interesting concept is somewhat obscured by confusing language.",
"- I would encourage the authors to improve the explanation of the model.",
"- Another improvement might be to explicitly go over some of the big Oh calculations, or give an example of exactly where the speed improvements are coming from.",
"- Otherwise the experiments seem adequate and I enjoyed this paper.",
"This could be a high value contribution and become a standard neural network component if it can be replicated and if it turns out to work reliably in multiple settings."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 6,
"praise": 2,
"presentation_and_reporting": 2,
"results_and_discussion": 1,
"suggestion_and_solution": 4,
"total": 7
} | 0.142857 | 0 | 0.285714 | 0.857143 | 0.285714 | 0.285714 | 0.142857 | 0.571429 | 7 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.571429 | 2.003261 | 0.568167 |
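The review's point, that the gates are computed from a short window of inputs rather than from the previous hidden state, is easy to see in code. Below is a minimal sketch of the QRNN pattern with a width-2 causal convolution and elementwise 'fo'-style pooling; the shapes, random weights, and pooling variant are assumptions for illustration, not the paper's trained configuration.
import numpy as np

rng = np.random.default_rng(2)
T, D, H, k = 6, 4, 3, 2          # timesteps, input dim, hidden dim, filter width
X = rng.normal(size=(T, D))
Wz, Wf, Wo = rng.normal(0, 0.5, (3, k * D, H))

def conv_feats(X, W):
    Xpad = np.vstack([np.zeros((k - 1, D)), X])        # causal padding
    windows = np.stack([Xpad[t:t + k].ravel() for t in range(T)])
    return windows @ W                                 # (T, H), parallel over t

Z = np.tanh(conv_feats(X, Wz))                         # candidate values
F = 1 / (1 + np.exp(-conv_feats(X, Wf)))               # forget gates
O = 1 / (1 + np.exp(-conv_feats(X, Wo)))               # output gates

c = np.zeros(H)
H_out = []
for t in range(T):               # the only sequential part, and it is elementwise
    c = F[t] * c + (1 - F[t]) * Z[t]
    H_out.append(O[t] * c)
print(np.round(np.stack(H_out), 3))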
Multi-view Recurrent Neural Acoustic Word Embeddings | Recent work has begun exploring neural acoustic word embeddings–fixed dimensional vector representations of arbitrary-length speech segments corresponding to words. Such embeddings are applicable to speech retrieval and recognition tasks, where reasoning about whole words may make it possible to avoid ambiguous sub-word representations. The main idea is to map acoustic sequences to fixed-dimensional vectors such that examples of the same word are mapped to similar vectors, while different-word examples are mapped to very different vectors. In this work we take a multi-view approach to learning acoustic word embeddings, in which we jointly learn to embed acoustic sequences and their corresponding character sequences. We use deep bidirectional LSTM embedding models and multi-view contrastive losses. We study the effect of different loss variants, including fixed-margin and cost-sensitive losses. Our acoustic word embeddings improve over previous approaches for the task of word discrimination. We also present results on other tasks that are enabled by the multi-view approach, including cross-view word discrimination and word similarity. | This paper proposes an approach to learning word vector representations for character sequences and acoustic spans jointly. The paper is clearly written and both the approach and experiments seem reasonable in terms of execution. The motivation and tasks feel a bit synthetic as it requires acoustic spans for words that have already been segmented from continuous speech -- a major assumption. The evaluation tasks feel a bit synthetic overall, and in particular when evaluating character-based comparisons it seems there should also be phoneme-based comparisons. There's a lot of discussion of character edit distance relative to acoustic span similarity. It seems very natural to also include phoneme string edit distance in this discussion and experiments. This is especially true of the word similarity test. Rather than only looking at Levenshtein edit distance of characters, you should evaluate edit distance of the phone strings relative to the acoustic embedding distances. Beyond the evaluation task the paper would be more interesting if you compared character embeddings with phone string embeddings. I believe the loss function could remain identical; it's just swapping out characters for phones as the symbol set. Finally, in this topic the discussion and experiments should look at homophones, as it is not obvious what the network would learn to handle these. The vocabulary size and training data amount make this really a toy problem. Although there are many pairs constructed, most of those pairs will be easy distinctions. The experiments and conclusions would be far stronger with a larger vocabulary and word segment data set with subsampling all pairs perhaps biased towards more difficult or similar pairs. It seems this approach is unable to address the task of keyword spotting in longer spoken utterances. If that's the case, please add some discussion as to why you are solving the problem of word embeddings given existing word segmentations. The motivating example of using this approach to retrieve words seems flawed if a recognizer must be used to segment words beforehand. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact.
I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMulti-view Recurrent Neural Acoustic Word Embeddings\nRecent work has begun exploring neural acoustic word embeddings–fixed dimensional vector representations of arbitrary-length speech segments corresponding to words. Such embeddings are applicable to speech retrieval and recognition tasks, where reasoning about whole words may make it possible to avoid ambiguous sub-word representations. The main idea is to map acoustic sequences to fixed-dimensional vectors such that examples of the same word are mapped to similar vectors, while different-word examples are mapped to very different vectors. In this work we take a multi-view approach to learning acoustic word embeddings, in which we jointly learn to embed acoustic sequences and their corresponding character sequences. We use deep bidirectional LSTM embedding models and multi-view contrastive losses. We study the effect of different loss variants, including fixed-margin and cost-sensitive losses. Our acoustic word embeddings improve over previous approaches for the task of word discrimination. We also present results on other tasks that are enabled by the multi-view approach, including cross-view word discrimination and word similarity.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes an approach to learning word vector representations for character sequences and acoustic spans jointly. The paper is clearly written and both the approach and experiments seem reasonable in terms of execution. The motivation and tasks feel a bit synthetic as it requires acoustics spans for words that have already been segmented from continuous speech - - a major assumption. The evaluation tasks feel a bit synthetic overall and in particular when evaluating character based comparisons it seems there should also be phoneme based comparisons. There*s a lot of discussion of character edit distance relative to acoustic span similarity. It seems very natural to also include phoneme string edit distance in this discussion and experiments. This is especially true of the word similarity test. Rather than only looking at levenshtein edit distance of characters you should evaluate edit distance of the phone strings relative to the acoustic embedding distances. Beyond the evaluation task the paper would be more interesting if you compared character embeddings with phone string embeddings. I believe the last function could remain identical it*s just swapping out characters for phones as the symbol set. finally in this topic the discussion and experiments should look at homophones As if not obvious what the network would learn to handle these. the vocabulary size and training data amount make this really a toy problem. although there are many pairs constructed most of those pairs will be easy distinctions. the experiments and conclusions would be far stronger with a larger vocabulary and word segment data set with subsampling all pairs perhaps biased towards more difficult or similar pairs. it seems this approach is unable to address the task of keyword spotting in longer spoken utterances. If that*s the case please add some discussion as to why you are solving the problem of word embeddings given existing word segmentations. The motivating example of using this approach to retrieve words seems flawed if a recognizer must be used to segment words beforehand",
"role": "assistant"
}
]
| [
"This paper proposes an approach to learning word vector representations for character sequences and acoustic spans jointly.",
"The paper is clearly written and both the approach and experiments seem reasonable in terms of execution.",
"The motivation and tasks feel a bit synthetic as it requires acoustics spans for words that have already been segmented from continuous speech - - a major assumption.",
"The evaluation tasks feel a bit synthetic overall and in particular when evaluating character based comparisons it seems there should also be phoneme based comparisons.",
"There*s a lot of discussion of character edit distance relative to acoustic span similarity.",
"It seems very natural to also include phoneme string edit distance in this discussion and experiments.",
"This is especially true of the word similarity test.",
"Rather than only looking at levenshtein edit distance of characters you should evaluate edit distance of the phone strings relative to the acoustic embedding distances.",
"Beyond the evaluation task the paper would be more interesting if you compared character embeddings with phone string embeddings.",
"I believe the last function could remain identical it*s just swapping out characters for phones as the symbol set.",
"finally in this topic the discussion and experiments should look at homophones As if not obvious what the network would learn to handle these.",
"the vocabulary size and training data amount make this really a toy problem.",
"although there are many pairs constructed most of those pairs will be easy distinctions.",
"the experiments and conclusions would be far stronger with a larger vocabulary and word segment data set with subsampling all pairs perhaps biased towards more difficult or similar pairs.",
"it seems this approach is unable to address the task of keyword spotting in longer spoken utterances.",
"If that*s the case please add some discussion as to why you are solving the problem of word embeddings given existing word segmentations.",
"The motivating example of using this approach to retrieve words seems flawed if a recognizer must be used to segment words beforehand"
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 9,
"praise": 1,
"presentation_and_reporting": 14,
"results_and_discussion": 5,
"suggestion_and_solution": 5,
"total": 17
} | 0.117647 | 0 | 0.058824 | 0.529412 | 0.058824 | 0.823529 | 0.294118 | 0.294118 | 17 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.176471 | 1.923952 | 0.252519 |
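A small sketch of the fixed-margin contrastive objective the abstract mentions: an acoustic embedding of a word is pushed closer to the character embedding of the same word than to that of a different word. The random vectors stand in for the bidirectional-LSTM outputs, and the Euclidean distance and 0.5 margin are assumptions.
import numpy as np

rng = np.random.default_rng(3)

def contrastive_loss(acoustic, chars_pos, chars_neg, margin=0.5):
    d_pos = np.linalg.norm(acoustic - chars_pos)   # same-word cross-view distance
    d_neg = np.linalg.norm(acoustic - chars_neg)   # different-word distance
    return max(0.0, margin + d_pos - d_neg)        # fixed-margin hinge

a = rng.normal(size=16)                  # embedding of a spoken word segment
c_same = a + 0.05 * rng.normal(size=16)  # character embedding, same word
c_other = rng.normal(size=16)            # character embedding, other word
print(round(contrastive_loss(a, c_same, c_other), 3))  # ~0: already separated
print(round(contrastive_loss(a, c_other, c_same), 3))  # > 0: violates the margin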
Learning Features of Music From Scratch | This paper introduces a new large-scale music dataset, MusicNet, to serve as a source of supervision and evaluation of machine learning methods for music research. MusicNet consists of hundreds of freely-licensed classical music recordings by 10 composers, written for 11 instruments, together with instrument/note annotations resulting in over 1 million temporal labels on 34 hours of chamber music performances under various studio and microphone conditions. The paper defines a multi-label classification task to predict notes in musical recordings, along with an evaluation protocol, and benchmarks several machine learning architectures for this task: i) learning from spectrogram features; ii) end-to-end learning with a neural net; iii) end-to-end learning with a convolutional neural net. These experiments show that end-to-end models trained for note prediction learn frequency selective filters as a low-level representation of audio. | This paper describes the creation of a corpus of freely-licensed classical music recordings along with corresponding MIDI-scores aligned to the audio. It also describes experiments in polyphonic transcription using various deep learning approaches, which show promising results. The paper is a little disorganised and somewhat contradictory in parts. For example, I find the first sentence in section 2 (MusicNet) would better be pushed one paragraph below so that the section be allowed to begin with a survey of the tools available to researchers in music. Also, the description for Table 3 should probably appear somewhere in the Methods section. Last example: the abstract/intro says the purpose is note prediction; later (4th paragraph of intro) there's a claim that the focus is 'learning low-level features of music...'. I find this slightly disorienting. Although others (Uehara et al., 2016, for example) have discussed collection platforms and corpora, this work is interesting because of its size and the approach for generating features. I'm interested in what the authors will do to expand the offerings in the corpus, both in terms of volume and diversity. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning Features of Music From Scratch\nThis paper introduces a new large-scale music dataset, MusicNet, to serve as a source of supervision and evaluation of machine learning methods for music research. MusicNet consists of hundreds of freely-licensed classical music recordings by 10 composers, written for 11 instruments, together with instrument/note annotations resulting in over 1 million temporal labels on 34 hours of chamber music performances under various studio and microphone conditions. The paper defines a multi-label classification task to predict notes in musical recordings, along with an evaluation protocol, and benchmarks several machine learning architectures for this task: i) learning from spectrogram features; ii) end-to-end learning with a neural net; iii) end-to-end learning with a convolutional neural net. These experiments show that end-to-end models trained for note prediction learn frequency selective filters as a low-level representation of audio.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper describes the creation of a corpus of freely-licensed classical music recordings along with corresponding MIDI-scores aligned to the audio. It also describes experiments in polyphonic transcription using various deep learning approaches, which show promising results. The paper is a little disorganised and somewhat contradictory in parts. For example, I find the first sentence in section 2 (MusicNet) would better be pushed one paragraph below so that the section be allowed to begin with a survey of the tools available to researchers in music. Also, the description for Table 3 should probably appear somewhere in the Methods section. Last example: the abstract/intro says the purpose is note prediction; later (4th paragraph of intro) there*s a claim that the focus is *learning low-level features of music....* I find this slightly disorienting. Although others (Uehara et al., 2016, for example) have discussed collection platforms and corpora, this work is interesting because of its size and the approach for generating features. I*m interested in what the authors will to do expand the offerings in the corpus, both in terms of volume and diversity.",
"role": "assistant"
}
]
| [
"This paper describes the creation of a corpus of freely-licensed classical music recordings along with corresponding MIDI-scores aligned to the audio.",
"It also describes experiments in polyphonic transcription using various deep learning approaches, which show promising results.",
"The paper is a little disorganised and somewhat contradictory in parts.",
"For example, I find the first sentence in section 2 (MusicNet) would better be pushed one paragraph below so that the section be allowed to begin with a survey of the tools available to researchers in music.",
"Also, the description for Table 3 should probably appear somewhere in the Methods section.",
"Last example: the abstract/intro says the purpose is note prediction; later (4th paragraph of intro) there*s a claim that the focus is *learning low-level features of music....* I find this slightly disorienting.",
"Although others (Uehara et al., 2016, for example) have discussed collection platforms and corpora, this work is interesting because of its size and the approach for generating features.",
"I*m interested in what the authors will to do expand the offerings in the corpus, both in terms of volume and diversity."
] | {
"criticism": 2,
"example": 2,
"importance_and_relevance": 2,
"materials_and_methods": 4,
"praise": 2,
"presentation_and_reporting": 2,
"results_and_discussion": 1,
"suggestion_and_solution": 2,
"total": 8
} | 0.25 | 0.25 | 0.25 | 0.5 | 0.25 | 0.25 | 0.125 | 0.25 | 8 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.125 | 1.730439 | 0.394561 |
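The abstract frames transcription as multi-label note prediction, with spectrogram features as one baseline. The sketch below builds a synthetic two-note frame and thresholds per-note spectral energy as a stand-in for independent sigmoid outputs; the 40-note pitch grid, window length, and 0.5 threshold are invented for the example.
import numpy as np

sr, n = 16000, 2048
t = np.arange(n) / sr
freqs = 440.0 * 2 ** ((np.arange(40) - 9) / 12)   # 40 candidate note pitches
audio = np.sin(2 * np.pi * freqs[9] * t) + np.sin(2 * np.pi * freqs[13] * t)

spectrum = np.abs(np.fft.rfft(audio * np.hanning(n)))   # spectrogram-style features
bins = np.rint(freqs * n / sr).astype(int)
scores = spectrum[bins] / spectrum.max()                # per-note energy scores

labels = scores > 0.5             # independent per-note thresholding
print(np.nonzero(labels)[0])      # recovers notes 9 and 13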
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks. | The paper addresses the problem of detecting if an example is misclassified or out-of-distribution. This is a very important topic and the study provides a good baseline. Although it misses strong novel methods for the task, the study contributes to the community. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nA Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks\nWe consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper address the problem of detecting if an example is misclassified or out-of-distribution. This is an very important topic and the study provides a good baseline. Although it misses strong novel methods for the task, the study contributes to the community.",
"role": "assistant"
}
]
| [
"The paper address the problem of detecting if an example is misclassified or out-of-distribution.",
"This is an very important topic and the study provides a good baseline.",
"Although it misses strong novel methods for the task, the study contributes to the community."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 3,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 0,
"suggestion_and_solution": 0,
"total": 3
} | 0.333333 | 0 | 0.666667 | 1 | 0.666667 | 0.333333 | 0 | 0 | 3 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 3 | 1.421757 | 1.578243 |
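The baseline in this record is simple enough to state in a few lines: score each prediction by its maximum softmax probability and flag low scores as likely errors or out-of-distribution inputs. The logits and the 0.7 threshold below are illustrative stand-ins for a trained classifier's outputs and a tuned operating point.
import numpy as np

def softmax(z):
    z = z - z.max()          # stabilized softmax
    e = np.exp(z)
    return e / e.sum()

def msp_score(logits):
    return float(softmax(logits).max())   # high -> likely in-distribution, correct

in_dist = np.array([6.0, 1.0, 0.5, 0.2])   # confident prediction
ood = np.array([1.1, 1.0, 0.9, 1.05])      # near-uniform prediction

threshold = 0.7                            # assumed operating point
for name, logits in [("in-dist", in_dist), ("ood-like", ood)]:
    s = msp_score(logits)
    print(name, round(s, 3), "flag" if s < threshold else "keep")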
Sigma Delta Quantized Networks | Deep neural networks can be obscenely wasteful. When processing video, a convolutional network expends a fixed amount of computation for each frame with no regard to the similarity between neighbouring frames. As a result, it ends up repeatedly doing very similar computations. To put an end to such waste, we introduce Sigma-Delta networks. With each new input, each layer in this network sends a discretized form of its change in activation to the next layer. Thus the amount of computation that the network does scales with the amount of change in the input and layer activations, rather than the size of the network. We introduce an optimization method for converting any pre-trained deep network into an optimally efficient Sigma-Delta network, and show that our algorithm, if run on the appropriate hardware, could cut at least an order of magnitude from the computational cost of processing video data. | This paper presented a method of improving the efficiency of deep networks acting on a sequence of correlated inputs, by only performing the computations required to capture changes between adjacent inputs. The paper was clearly written, the approach is clever, and it's neat to see a practical algorithm driven by what is essentially a spiking network. The benefits of this approach are still more theoretical than practical -- it seems unlikely to be worthwhile to do this on current hardware. I strongly suspect that if deep networks were trained with an appropriate sparse slowness penalty, the reduction in computation would be much larger. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSigma Delta Quantized Networks\nDeep neural networks can be obscenely wasteful. When processing video, a convolutional network expends a fixed amount of computation for each frame with no regard to the similarity between neighbouring frames. As a result, it ends up repeatedly doing very similar computations. To put an end to such waste, we introduce Sigma-Delta networks. With each new input, each layer in this network sends a discretized form of its change in activation to the next layer. Thus the amount of computation that the network does scales with the amount of change in the input and layer activations, rather than the size of the network. We introduce an optimization method for converting any pre-trained deep network into an optimally efficient Sigma-Delta network, and show that our algorithm, if run on the appropriate hardware, could cut at least an order of magnitude from the computational cost of processing video data.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presented a method of improving the efficiency of deep networks acting on a sequence of correlated inputs, by only performing the computations required to capture changes between adjacent inputs. The paper was clearly written, the approach is clever, and it*s neat to see a practical algorithm driven by what is essentially a spiking network. The benefits of this approach are still more theoretical than practical -- it seems unlikely to be worthwhile to do this on current hardware. I strongly suspect that if deep networks were trained with an appropriate sparse slowness penalty, the reduction in computation would be much larger.",
"role": "assistant"
}
]
| [
"This paper presented a method of improving the efficiency of deep networks acting on a sequence of correlated inputs, by only performing the computations required to capture changes between adjacent inputs.",
"The paper was clearly written, the approach is clever, and it*s neat to see a practical algorithm driven by what is essentially a spiking network.",
"The benefits of this approach are still more theoretical than practical -- it seems unlikely to be worthwhile to do this on current hardware.",
"I strongly suspect that if deep networks were trained with an appropriate sparse slowness penalty, the reduction in computation would be much larger."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 4,
"praise": 1,
"presentation_and_reporting": 1,
"results_and_discussion": 2,
"suggestion_and_solution": 0,
"total": 4
} | 0.25 | 0 | 0.5 | 1 | 0.25 | 0.25 | 0.5 | 0 | 4 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 2.75 | 1.471623 | 1.278377 |
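The sigma-delta mechanism described in the row above can be made concrete with a short sketch: at each step, a layer quantizes the change between its current input and what it has already transmitted, so downstream work scales with the number of nonzero deltas. This is a minimal NumPy illustration under assumed names and an assumed step size, not the paper's implementation or its conversion method.

```python
import numpy as np

class SigmaDeltaLayer:
    """Toy sigma-delta wrapper around a linear map: transmits quantized
    changes in activation instead of recomputing from full activations."""

    def __init__(self, weight, step=0.05):
        self.weight = weight   # (in_dim, out_dim) linear weights
        self.step = step       # quantization step size (assumed)
        self.accum = None      # reconstruction of the input transmitted so far
        self.out = None        # running output, updated incrementally

    def forward(self, x):
        if self.accum is None:                          # first frame
            self.accum = np.zeros_like(x)
            self.out = np.zeros(self.weight.shape[1])
        delta = np.round((x - self.accum) / self.step)  # integer "spike counts"
        self.accum += delta * self.step                 # rounding error carries over
        nz = np.flatnonzero(delta)                      # only changed units fire
        # Work scales with the number of nonzero deltas, not with in_dim.
        self.out += (delta[nz] * self.step) @ self.weight[nz]
        return self.out, len(nz)
```

On similar consecutive frames most deltas are zero, so the per-frame cost proxy `len(nz)` shrinks; that is the efficiency argument the review weighs against what current hardware can actually exploit.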
A STRUCTURED SELF-ATTENTIVE SENTENCE EMBEDDING | This paper proposes a new model for extracting an interpretable sentence embedding by introducing self-attention. Instead of using a vector, we use a 2-D matrix to represent the embedding, with each row of the matrix attending on a different part of the sentence. We also propose a self-attention mechanism and a special regularization term for the model. As a side effect, the embedding comes with an easy way of visualizing what specific parts of the sentence are encoded into the embedding. We evaluate our model on 3 different tasks: author profiling, sentiment classification and textual entailment. Results show that our model yields a significant performance gain compared to other sentence embedding methods in all of the 3 tasks. | This paper proposes a method for representing sentences as a 2d matrix by utilizing a self-attentive mechanism on the hidden states of a bi-directional LSTM encoder. This work differs from prior work mainly in the 2d structure of embedding, which the authors use to produce heat-map visualizations of input sentences and to generate good performance on several downstream tasks. There is a substantial amount of prior work which the authors do not appropriately address, some of which is listed in previous comments. The main novelty of this work is in the 2d structure of embeddings, and as such, I would have liked to see this structure investigated in much more depth. Specifically, a couple important relevant experiments would have been: * How do the performance and visualizations change as the number of attention vectors (r) varies? * For a fixed parameter budget, how important is using multiple attention vectors versus, say, using a larger hidden state or embedding size? I would recommend changing some of the presentation in the penalization term section. Specifically, the statement that “the best way to evaluate the diversity is definitely the Kullback-Leibler divergence between any 2 of the summation weight vectors” runs somewhat counter to the authors' comments about this topic below. In Fig. (2), I did not find the visualizations to provide particularly compelling evidence that the multiple attention vectors were doing much of interest beyond a single attention vector, even with penalization. To me this seems like a necessary component to support the main claims of this paper. Overall, while I found the architecture interesting, I am not convinced that the model's main innovation -- the 2d structure of the embedding matrix -- is actually doing anything important or meaningful beyond what is being accomplished by similar attentive embedding models already present in the literature. Further experiments demonstrating this effect would be necessary for me to give this paper my full endorsement. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work?
Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nA STRUCTURED SELF-ATTENTIVE SENTENCE EMBEDDING\nThis paper proposes a new model for extracting an interpretable sentence embedding by introducing self-attention. Instead of using a vector, we use a 2-D matrix to represent the embedding, with each row of the matrix attending on a different part of the sentence. We also propose a self-attention mechanism and a special regularization term for the model. As a side effect, the embedding comes with an easy way of visualizing what specific parts of the sentence are encoded into the embedding. We evaluate our model on 3 different tasks: author profiling, sentiment classification and textual entailment. Results show that our model yields a significant performance gain compared to other sentence embedding methods in all of the 3 tasks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a method for representing sentences as a 2d matrix by utilizing a self-attentive mechanism on the hidden states of a bi-directional LSTM encoder. This work differs from prior work mainly in the 2d structure of embedding, which the authors use to produce heat-map visualizations of input sentences and to generate good performance on several downstream tasks. There is a substantial amount of prior work which the authors do not appropriately address, some of which is listed in previous comments. The main novelty of this work is in the 2d structure of embeddings, and as such, I would have liked to see this structure investigated in much more depth. Specifically, a couple important relevant experiments would have been: * How do the performance and visualizations change as the number of attention vectors (r) varies? * For a fixed parameter budget, how important is using multiple attention vectors versus, say, using a larger hidden state or embedding size? I would recommend changing some of the presentation in the penalization term section. Specifically, the statement that *the best way to evaluate the diversity is definitely the Kullback Leibler divergence between any 2 of the summation weight vectors* runs somewhat counter to the authors* comments about this topic below. In Fig. (2), I did not find the visualizations to provide particularly compelling evidence that the multiple attention vectors were doing much of interest beyond a single attention vector, even with penalization. To me this seems like a necessary component to support the main claims of this paper. Overall, while I found the architecture interesting, I am not convinced that the model*s main innovation -- the 2d structure of the embedding matrix -- is actually doing anything important or meaningful beyond what is being accomplished by similar attentive embedding models already present in the literature. Further experiments demonstrating this effect would be necessary for me to give this paper my full endorsement.",
"role": "assistant"
}
]
| [
"This paper proposes a method for representing sentences as a 2d matrix by utilizing a self-attentive mechanism on the hidden states of a bi-directional LSTM encoder.",
"This work differs from prior work mainly in the 2d structure of embedding, which the authors use to produce heat-map visualizations of input sentences and to generate good performance on several downstream tasks.",
"There is a substantial amount of prior work which the authors do not appropriately address, some of which is listed in previous comments.",
"The main novelty of this work is in the 2d structure of embeddings, and as such, I would have liked to see this structure investigated in much more depth.",
"Specifically, a couple important relevant experiments would have been: * How do the performance and visualizations change as the number of attention vectors (r) varies?",
"* For a fixed parameter budget, how important is using multiple attention vectors versus, say, using a larger hidden state or embedding size?",
"I would recommend changing some of the presentation in the penalization term section.",
"Specifically, the statement that *the best way to evaluate the diversity is definitely the Kullback Leibler divergence between any 2 of the summation weight vectors* runs somewhat counter to the authors* comments about this topic below.",
"In Fig.",
"(2), I did not find the visualizations to provide particularly compelling evidence that the multiple attention vectors were doing much of interest beyond a single attention vector, even with penalization.",
"To me this seems like a necessary component to support the main claims of this paper.",
"Overall, while I found the architecture interesting, I am not convinced that the model*s main innovation -- the 2d structure of the embedding matrix -- is actually doing anything important or meaningful beyond what is being accomplished by similar attentive embedding models already present in the literature.",
"Further experiments demonstrating this effect would be necessary for me to give this paper my full endorsement."
] | {
"criticism": 4,
"example": 1,
"importance_and_relevance": 3,
"materials_and_methods": 7,
"praise": 2,
"presentation_and_reporting": 3,
"results_and_discussion": 4,
"suggestion_and_solution": 4,
"total": 13
} | 0.307692 | 0.076923 | 0.230769 | 0.538462 | 0.153846 | 0.230769 | 0.307692 | 0.307692 | 13 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.153846 | 2.153846 | 0 |
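The model in this row admits a compact sketch of its core computation: an annotation matrix A = softmax(W2 tanh(W1 H^T)) over the BiLSTM states H, the matrix embedding M = AH, and the Frobenius-norm penalty that encourages the r attention rows to differ. Shapes and weight names below are assumptions for illustration.

```python
import torch

def structured_self_attention(H, W1, W2):
    """H: (n, d) BiLSTM hidden states for one sentence; W1: (da, d); W2: (r, da).
    Returns the (r, d) matrix embedding M, the (r, n) attention map A, and the
    diversity penalty discussed in the review."""
    A = torch.softmax(W2 @ torch.tanh(W1 @ H.T), dim=1)  # (r, n) annotation matrix
    M = A @ H                                            # (r, d) sentence embedding
    I = torch.eye(A.shape[0])
    penalty = torch.linalg.norm(A @ A.T - I) ** 2        # ||A A^T - I||_F^2
    return M, A, penalty
```

During training, `penalty` would be added to the task loss with a small coefficient; the reviewer's question about varying r can be probed directly by changing the first dimension of W2.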
A STRUCTURED SELF-ATTENTIVE SENTENCE EMBEDDING | This paper proposes a new model for extracting an interpretable sentence embedding by introducing self-attention. Instead of using a vector, we use a 2-D matrix to represent the embedding, with each row of the matrix attending on a different part of the sentence. We also propose a self-attention mechanism and a special regularization term for the model. As a side effect, the embedding comes with an easy way of visualizing what specific parts of the sentence are encoded into the embedding. We evaluate our model on 3 different tasks: author profiling, sentiment classification and textual entailment. Results show that our model yields a significant performance gain compared to other sentence embedding methods in all of the 3 tasks. | This paper introduces a sentence encoding model (for use within larger text understanding models) that can extract a matrix-valued sentence representation by way of within-sentence attention. The new model lends itself to (slightly) more informative visualizations than could be gotten otherwise, and beats reasonable baselines on three datasets. The paper is reasonably clear, I see no major technical issues, and the proposed model is novel and effective. It could plausibly be relevant to sequence modeling tasks beyond NLP. I recommend acceptance. There is one fairly serious writing issue that I'd like to see fixed, though: The abstract, introduction, and related work sections are all heavily skewed towards unsupervised learning. The paper doesn't appear to be doing unsupervised learning, and the ideas are no more nor less suited to unsupervised learning than any other mainstream ideas in the sentence encoding literature. Details: - You should be clearer about how you expect these embeddings to be used, since that will be of certain interest to anyone attempting to use the results of this work. In particular, how you should convert the matrix representation into a vector for downstream tasks that require one. Some of the content of your reply to my comment could be reasonably added to the paper. - A graphical representation of the structure of the model would be helpful. - The LSTMN (Cheng et al., EMNLP '16) is similar enough to this work that an explicit comparison would be helpful. Again, incorporating your reply to my comment into the paper would be more than adequate. - Jiwei Li et al. (Visualizing and Understanding Neural Models in NLP, NAACL '15) present an alternative way of visualizing the influence of words on sentence encodings without using cross-sentence attention. A brief explicit comparison would be nice here. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading.
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nA STRUCTURED SELF-ATTENTIVE SENTENCE EMBEDDING\nThis paper proposes a new model for extracting an interpretable sentence embedding by introducing self-attention. Instead of using a vector, we use a 2-D matrix to represent the embedding, with each row of the matrix attending on a different part of the sentence. We also propose a self-attention mechanism and a special regularization term for the model. As a side effect, the embedding comes with an easy way of visualizing what specific parts of the sentence are encoded into the embedding. We evaluate our model on 3 different tasks: author profiling, sentiment classification and textual entailment. Results show that our model yields a significant performance gain compared to other sentence embedding methods in all of the 3 tasks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper introduces a sentence encoding model (for use within larger text understanding models) that can extract a matrix-valued sentence representation by way of within-sentence attention. The new model lends itself to (slightly) more informative visualizations than could be gotten otherwise, and beats reasonable baselines on three datasets. The paper is reasonably clear, I see no major technical issues, and the proposed model is novel and effective. It could plausibly be relevant to sequence modeling tasks beyond NLP. I recommend acceptance. There is one fairly serious writing issue that I*d like to see fixed, though: The abstract, introduction, and related work sections are all heavily skewed towards unsupervised learning. The paper doesn*t appear to be doing unsupervised learning, and the ideas are no more nor less suited to unsupervised learning than any other mainstream ideas in the sentence encoding literature. Details: - You should be clearer about how you expect these embeddings to be used, since that will be of certain interest to anyone attempting to use the results of this work. In particular, how you should convert the matrix representation into a vector for downstream tasks that require one. Some of the content of your reply to my comment could be reasonably added to the paper. - A graphical representation of the structure of the model would be helpful. - The LSTMN (Cheng et al., EMNLP *16) is similar enough to this work that an explicit comparison would be helpful. Again, incorporating your reply to my comment into the paper would be more than adequate. - Jiwei Li et al. (Visualizing and Understanding Neural Models in NLP, NAACL *15) present an alternative way of visualizing the influence of words on sentence encodings without using cross-sentence attention. A brief explicit comparison would be nice here.",
"role": "assistant"
}
]
| [
"This paper introduces a sentence encoding model (for use within larger text understanding models) that can extract a matrix-valued sentence representation by way of within-sentence attention.",
"The new model lends itself to (slightly) more informative visualizations than could be gotten otherwise, and beats reasonable baselines on three datasets.",
"The paper is reasonably clear, I see no major technical issues, and the proposed model is novel and effective.",
"It could plausibly be relevant to sequence modeling tasks beyond NLP.",
"I recommend acceptance.",
"There is one fairly serious writing issue that I*d like to see fixed, though: The abstract, introduction, and related work sections are all heavily skewed towards unsupervised learning.",
"The paper doesn*t appear to be doing unsupervised learning, and the ideas are no more nor less suited to unsupervised learning than any other mainstream ideas in the sentence encoding literature.",
"Details: - You should be clearer about how you expect these embeddings to be used, since that will be of certain interest to anyone attempting to use the results of this work.",
"In particular, how you should convert the matrix representation into a vector for downstream tasks that require one.",
"Some of the content of your reply to my comment could be reasonably added to the paper.",
"- A graphical representation of the structure of the model would be helpful.",
"- The LSTMN (Cheng et al., EMNLP *16) is similar enough to this work that an explicit comparison would be helpful.",
"Again, incorporating your reply to my comment into the paper would be more than adequate.",
"- Jiwei Li et al.",
"(Visualizing and Understanding Neural Models in NLP, NAACL *15) present an alternative way of visualizing the influence of words on sentence encodings without using cross-sentence attention.",
"A brief explicit comparison would be nice here."
] | {
"criticism": 2,
"example": 1,
"importance_and_relevance": 3,
"materials_and_methods": 7,
"praise": 3,
"presentation_and_reporting": 10,
"results_and_discussion": 3,
"suggestion_and_solution": 8,
"total": 16
} | 0.125 | 0.0625 | 0.1875 | 0.4375 | 0.1875 | 0.625 | 0.1875 | 0.5 | 16 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.3125 | 2.170458 | 0.142042 |
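The review's question about converting the matrix representation into a vector can be illustrated with one plausible downstream recipe (an assumption, not prescribed by the abstract): flatten the (r, d) matrix for a task head, and collapse the attention map into per-token weights for the visualizations mentioned above.

```python
import torch

def use_matrix_embedding(M, A, classifier):
    """M: (r, d) matrix embedding; A: (r, n) attention map; classifier: a task
    head such as torch.nn.Linear(r * d, n_classes). Hypothetical usage."""
    v = M.reshape(-1)            # (r*d,) vector for tasks that need one
    logits = classifier(v)       # downstream prediction
    heat = A.sum(dim=0)          # total attention mass per token
    heat = heat / heat.sum()     # normalized weights to plot over words
    return logits, heat
```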
Regularizing CNNs with Locally Constrained Decorrelations | Regularization is key for deep learning since it allows training more complex models while keeping lower levels of overfitting. However, the most prevalent regularizations do not leverage all the capacity of the models since they rely on reducing the effective number of parameters. Feature decorrelation is an alternative for using the full capacity of the models but the overfitting reduction margins are too narrow given the overhead it introduces. In this paper, we show that regularizing negatively correlated features is an obstacle for effective decorrelation and present OrthoReg, a novel regularization technique that locally enforces feature orthogonality. As a result, imposing locality constraints in feature decorrelation removes interferences between negatively correlated feature weights, allowing the regularizer to reach higher decorrelation bounds, and reducing the overfitting more effectively. In particular, we show that the models regularized with OrthoReg have higher accuracy bounds even when batch normalization and dropout are present. Moreover, since our regularization is directly performed on the weights, it is especially suitable for fully convolutional neural networks, where the weight space is constant compared to the feature map space. As a result, we are able to reduce the overfitting of state-of-the-art CNNs on CIFAR-10, CIFAR-100, and SVHN. | The author proposed a simple yet effective technique for regularizing neural networks. The results obtained are quite good, and the technique proves effective even when applied to state-of-the-art topologies, which is welcome because some regularization techniques tend to be demonstrated only on easy tasks or on initial configurations whose results are still far from the best known results. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRegularizing CNNs with Locally Constrained Decorrelations\nRegularization is key for deep learning since it allows training more complex models while keeping lower levels of overfitting. However, the most prevalent regularizations do not leverage all the capacity of the models since they rely on reducing the effective number of parameters. Feature decorrelation is an alternative for using the full capacity of the models but the overfitting reduction margins are too narrow given the overhead it introduces. In this paper, we show that regularizing negatively correlated features is an obstacle for effective decorrelation and present OrthoReg, a novel regularization technique that locally enforces feature orthogonality. As a result, imposing locality constraints in feature decorrelation removes interferences between negatively correlated feature weights, allowing the regularizer to reach higher decorrelation bounds, and reducing the overfitting more effectively. In particular, we show that the models regularized with OrthoReg have higher accuracy bounds even when batch normalization and dropout are present. Moreover, since our regularization is directly performed on the weights, it is especially suitable for fully convolutional neural networks, where the weight space is constant compared to the feature map space. As a result, we are able to reduce the overfitting of state-of-the-art CNNs on CIFAR-10, CIFAR-100, and SVHN.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe author proposed a simple but yet effective technique in order to regularized neural networks. The results obtained are quite good and the technique shows to be effective when it it applied even on state of the art topologies, that is welcome because some regularization techniques used to be applied in easy task or on a initial configuration which results are still far from the best known results.",
"role": "assistant"
}
]
| [
"The author proposed a simple but yet effective technique in order to regularized neural networks.",
"The results obtained are quite good and the technique shows to be effective when it it applied even on state of the art topologies, that is welcome because some regularization techniques used to be applied in easy task or on a initial configuration which results are still far from the best known results."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 0,
"materials_and_methods": 2,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 1,
"suggestion_and_solution": 0,
"total": 2
} | 0 | 0 | 0 | 1 | 1 | 0 | 0.5 | 0 | 2 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 2.5 | 0.590326 | 1.909674 |
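A minimal sketch of the kind of penalty the abstract describes: decorrelate weight features while only pushing apart positively correlated pairs, leaving negative correlations alone. The ReLU gating and coefficient below are simplifying assumptions, not OrthoReg's exact squashing function.

```python
import torch

def orthoreg_penalty(W, eps=1e-8):
    """W: (out_features, in_features) weight matrix whose rows are features.
    Penalizes positive pairwise cosine similarity between feature vectors."""
    Wn = W / (W.norm(dim=1, keepdim=True) + eps)   # unit-norm feature vectors
    cos = Wn @ Wn.T                                # pairwise cosine similarities
    off = cos - torch.eye(W.shape[0])              # remove self-similarity diagonal
    return 0.5 * (torch.relu(off) ** 2).sum()      # penalize positive correlations only

# Hypothetical use in a training step, flattening conv filters to rows:
# loss = task_loss + 1e-3 * orthoreg_penalty(conv.weight.flatten(1))
```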
Regularizing CNNs with Locally Constrained Decorrelations | Regularization is key for deep learning since it allows training more complex models while keeping lower levels of overfitting. However, the most prevalent regularizations do not leverage all the capacity of the models since they rely on reducing the effective number of parameters. Feature decorrelation is an alternative for using the full capacity of the models but the overfitting reduction margins are too narrow given the overhead it introduces. In this paper, we show that regularizing negatively correlated features is an obstacle for effective decorrelation and present OrthoReg, a novel regularization technique that locally enforces feature orthogonality. As a result, imposing locality constraints in feature decorrelation removes interferences between negatively correlated feature weights, allowing the regularizer to reach higher decorrelation bounds, and reducing the overfitting more effectively. In particular, we show that the models regularized with OrthoReg have higher accuracy bounds even when batch normalization and dropout are present. Moreover, since our regularization is directly performed on the weights, it is especially suitable for fully convolutional neural networks, where the weight space is constant compared to the feature map space. As a result, we are able to reduce the overfitting of state-of-the-art CNNs on CIFAR-10, CIFAR-100, and SVHN. | Encouraging orthogonality in weight features has been reported useful for deep networks in many previous works. The authors present an explicit regularization cost to achieve de-correlation among weight features in a layer and encourage orthogonality. Further, they also show why and how negative correlations can and should be avoided for better de-correlation. Orthogonal weight features achieve better generalization in the case of a large number of trainable parameters and little training data, which usually results in over-fitting. As also mentioned by the authors, biases help in de-correlation of feature responses even in the presence of correlated features (weights). Regularization techniques like OrthoReg can be more helpful in training deeper and leaner networks, where the representational capacity of each layer is low, and also generalize better. Although the improvement in performance is not significant, the direction of research and the observations made are promising. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important.
I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRegularizing CNNs with Locally Constrained Decorrelations\nRegularization is key for deep learning since it allows training more complex models while keeping lower levels of overfitting. However, the most prevalent regularizations do not leverage all the capacity of the models since they rely on reducing the effective number of parameters. Feature decorrelation is an alternative for using the full capacity of the models but the overfitting reduction margins are too narrow given the overhead it introduces. In this paper, we show that regularizing negatively correlated features is an obstacle for effective decorrelation and present OrthoReg, a novel regularization technique that locally enforces feature orthogonality. As a result, imposing locality constraints in feature decorrelation removes interferences between negatively correlated feature weights, allowing the regularizer to reach higher decorrelation bounds, and reducing the overfitting more effectively. In particular, we show that the models regularized with OrthoReg have higher accuracy bounds even when batch normalization and dropout are present. Moreover, since our regularization is directly performed on the weights, it is especially suitable for fully convolutional neural networks, where the weight space is constant compared to the feature map space. As a result, we are able to reduce the overfitting of state-of-the-art CNNs on CIFAR-10, CIFAR-100, and SVHN.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nEncouraging orthogonality in weight features has been reported useful for deep networks in many previous works. The authors present a explicit regularization cost to achieve de-correlation among weight features in a layer and encourage orthogonality. Further, they also show why and how negative correlations can and should be avoided for better de-correlation. Orthogonal weight features achieve better generalization in case of large number of trainable parameters and less training data, which usually results in over-fitting. As also mentioned by the authors biases help in de-correlation of feature responses even in the presence of correlated features (weights). Regularization techniques like OrthoReg can be more helpful in training deeper and leaner networks, where the representational capacity of each layer is low, and also generalize better. Although the improvement in performances is not significant the direction of research and the observations made are promising.",
"role": "assistant"
}
]
| [
"Encouraging orthogonality in weight features has been reported useful for deep networks in many previous works.",
"The authors present a explicit regularization cost to achieve de-correlation among weight features in a layer and encourage orthogonality.",
"Further, they also show why and how negative correlations can and should be avoided for better de-correlation.",
"Orthogonal weight features achieve better generalization in case of large number of trainable parameters and less training data, which usually results in over-fitting.",
"As also mentioned by the authors biases help in de-correlation of feature responses even in the presence of correlated features (weights).",
"Regularization techniques like OrthoReg can be more helpful in training deeper and leaner networks, where the representational capacity of each layer is low, and also generalize better.",
"Although the improvement in performances is not significant the direction of research and the observations made are promising."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 6,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 3,
"suggestion_and_solution": 2,
"total": 7
} | 0 | 0 | 0.285714 | 0.857143 | 0.285714 | 0 | 0.428571 | 0.285714 | 7 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.142857 | 1.57469 | 0.568167 |
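Since the abstract stresses that the regularization is performed directly on the weights, one reading (an assumption, for illustration only) is to interleave ordinary task updates with a separate gradient step on the decorrelation cost alone, reusing the `orthoreg_penalty` sketch above.

```python
import torch

def decorrelation_step(W, lr=0.01):
    """Takes one gradient step that only reduces the orthogonality penalty,
    independent of the task loss. Step size and scheduling are assumptions."""
    W = W.detach().clone().requires_grad_(True)
    orthoreg_penalty(W).backward()        # gradient of the decorrelation cost
    with torch.no_grad():
        W -= lr * W.grad                  # move feature vectors toward orthogonality
    return W.detach()
```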
Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator's objective, which is ideal but infeasible in practice, and using the current value of the discriminator, which is often unstable and leads to poor solutions. We show how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator. | The paper introduces a technique for stabilizing the training of Generative Adversarial Networks by unrolling the inner (discriminator) optimization in the GAN loss function several steps and optimizing the generator with respect to the final state of this optimization process. The experimental evidence that this actually helps is very compelling: the 2d example shows a toy problem where this technique helps substantially, the LSTM MNIST generator example shows that the procedure helps with stabilizing the training of an unusual architecture of generator, and the image generation experiment, while not being definitive, is very convincing. For future work it would be interesting to see whether a method with smaller memory requirements could be devised based on similar principles. I strongly recommend accepting this paper. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nUnrolled Generative Adversarial Networks\nWe introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator*s objective, which is ideal but infeasible in practice, and using the current value of the discriminator, which is often unstable and leads to poor solutions. We show how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper introduces a technique for stabilizing the training of Generative Adversrial Networks by unrolling the inner (discriminator) optimization in the GAN loss function several steps and optimizing the generator with respect to the final state of this optimization process. The experimental evidence that this actually helps is very compelling: the 2d example shows a toy problem where this technique helps substantially, the LSTM MNIST generator example shows that the procedure helps with stabilizing the training of an unusual architecture of generator, and the image generation experiment, while not being definitive, is very convincing. For future work it would be interesting to see whether a method with smaller memory requirements could be devised based on similar principles. I strongly recommend to accept this paper.",
"role": "assistant"
}
]
| [
"The paper introduces a technique for stabilizing the training of Generative Adversrial Networks by unrolling the inner (discriminator) optimization in the GAN loss function several steps and optimizing the generator with respect to the final state of this optimization process.",
"The experimental evidence that this actually helps is very compelling: the 2d example shows a toy problem where this technique helps substantially, the LSTM MNIST generator example shows that the procedure helps with stabilizing the training of an unusual architecture of generator, and the image generation experiment, while not being definitive, is very convincing.",
"For future work it would be interesting to see whether a method with smaller memory requirements could be devised based on similar principles.",
"I strongly recommend to accept this paper."
] | {
"criticism": 1,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 3,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 0,
"suggestion_and_solution": 2,
"total": 4
} | 0.25 | 0.25 | 0.25 | 0.75 | 0.5 | 0 | 0 | 0.5 | 4 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 2.5 | 1.221623 | 1.278377 |
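The unrolling the abstract and review describe can be sketched as follows: copy the discriminator, run k inner SGD steps on the copy, and score the generator against the copy's final state. The sketch below stops gradients through the inner updates (a first-order approximation); exact unrolling would also backpropagate through the k steps, which is the source of the memory cost the review raises. The BCE loss form and all names are assumptions.

```python
import copy
import torch

def unrolled_generator_loss(G, D, real, z, k=5, d_lr=1e-3):
    """G, D: torch.nn.Module generator and discriminator (D outputs logits);
    real: a batch of data; z: a batch of latent samples."""
    bce = torch.nn.BCEWithLogitsLoss()
    D_k = copy.deepcopy(D)                            # surrogate to unroll
    opt = torch.optim.SGD(D_k.parameters(), lr=d_lr)
    for _ in range(k):                                # inner discriminator steps
        d_loss = bce(D_k(real), torch.ones(real.shape[0], 1)) \
               + bce(D_k(G(z).detach()), torch.zeros(z.shape[0], 1))
        opt.zero_grad()
        d_loss.backward()
        opt.step()
    # Generator objective against the unrolled discriminator's final state.
    return bce(D_k(G(z)), torch.ones(z.shape[0], 1))
```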
TopicRNN: A Recurrent Neural Network with Long-Range Semantic Dependency | In this paper, we propose TopicRNN, a recurrent neural network (RNN)-based language model designed to directly capture the global semantic meaning relating words in a document via latent topics. Because of their sequential nature, RNNs are good at capturing the local structure of a word sequence – both semantic and syntactic – but might face difficulty remembering long-range dependencies. Intuitively, these long-range dependencies are of semantic nature. In contrast, latent topic models are able to capture the global underlying semantic structure of a document but do not account for word ordering. The proposed TopicRNN model integrates the merits of RNNs and latent topic models: it captures local (syntactic) dependencies using an RNN and global (semantic) dependencies using latent topics. Unlike previous work on contextual RNN language modeling, our model is learned end-to-end. Empirical results on word prediction show that TopicRNN outperforms existing contextual RNN baselines. In addition, TopicRNN can be used as an unsupervised feature extractor for documents. We do this for sentiment analysis on the IMDB movie review dataset and report an error rate of 6.28%. This is comparable to the state-of-the-art 5.91% resulting from a semi-supervised approach. Finally, TopicRNN also yields sensible topics, making it a useful alternative to document models such as latent Dirichlet allocation. | This paper introduces a model that blends ideas from generative topic models with those from recurrent neural network language models. The authors evaluate the proposed approach on a document level classification benchmark as well as a language modeling benchmark and it seems to work well. There is also some analysis as to topics learned by the model and its ability to generate text. Overall the paper is clearly written and with the code promised by the authors others should be able to re-implement the approach. I have 2 potentially major questions I would ask the authors to address: 1 - LDA topic models make an exchangeability (bag of words) assumption. The discussion of the generative story for TopicRNN should explicitly discuss whether this assumption is also made. On the surface it appears it is since y_t is sampled using only the document topic vector and h_t but we know that in practice h_t comes from a recurrent model that observes y_t-1. It is not clear how this clean exposition of the generative model relates to what is actually done. In the Generating sequential text section it’s clear the topic model can’t generate words without using y_1 - t-1 but this seems inconsistent with the generative model specification. This needs to be shown in the paper and made clear to have a complete paper. 2 - The topic model only allows for linear interactions of the topic vector theta. It seems like this might be required to keep the generative model tractable but seems like a very poor assumption. We would expect the topic representation to have rich interactions with a language model to create nonlinear adjustments to word probabilities for a document. Please add discussion as to why this modeling choice exists and if possible how future work could modify that assumption (or explain why it’s not such a bad assumption as one might imagine). Figure 2’s colors are very difficult to distinguish. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about.
This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTopicRNN: A Recurrent Neural Network with Long-Range Semantic Dependency\nIn this paper, we propose TopicRNN, a recurrent neural network (RNN)-based language model designed to directly capture the global semantic meaning relating words in a document via latent topics. Because of their sequential nature, RNNs are good at capturing the local structure of a word sequence – both semantic and syntactic – but might face difficulty remembering long-range dependencies. Intuitively, these long-range dependencies are of semantic nature. In contrast, latent topic models are able to capture the global underlying semantic structure of a document but do not account for word ordering. The proposed TopicRNN model integrates the merits of RNNs and latent topic models: it captures local (syntactic) dependencies using an RNN and global (semantic) dependencies using latent topics. Unlike previous work on contextual RNN language modeling, our model is learned end-to-end. Empirical results on word prediction show that TopicRNN outperforms existing contextual RNN baselines. In addition, TopicRNN can be used as an unsupervised feature extractor for documents. We do this for sentiment analysis on the IMDB movie review dataset and report an error rate of 6.28%. This is comparable to the state-of-the-art 5.91% resulting from a semi-supervised approach. Finally, TopicRNN also yields sensible topics, making it a useful alternative to document models such as latent Dirichlet allocation.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper introduces a model that blends ideas from generative topic models with those from recurrent neural network language models. The authors evaluate the proposed approach on a document level classification benchmark as well as a language modeling benchmark and it seems to work well. There is also some analysis as to topics learned by the model and its ability to generate text. Overall the paper is clearly written and with the code promised by the authors others should be able to re-implement the approach. I have 2 potentially major questions I would ask the authors to address: 1 - LDA topic models make an exchangability (bag of words) assumption. The discussion of the generative story for TopicRNN should explicitly discuss whether this assumption is also made. On the surface it appears it is since y_t is sampled using only the document topic vector and h_t but we know that in practice h_t comes from a recurrent model that observes y_t-1. Not clear how this clean exposition of the generative model relates to what is actually done. In the Generating sequential text section it’s clear the topic model can’t generate words without using y_1 - t-1 but this seems inconsistent with the generative model specification. This needs to be shown in the paper and made clear to have a complete paper. 2 - The topic model only allows for linear interactions of the topic vector theta. It seems like this might be required to keep the generative model tractable but seems like a very poor assumption. We would expect the topic representation to have rich interactions with a language model to create nonlinear adjustments to word probabilities for a document. Please add discussion as to why this modeling choice exists and if possible how future work could modify that assumption (or explain why it’s not such a bad assumption as one might imagine) Figure 2 colors very difficult to distinguish.",
"role": "assistant"
}
]
| [
"This paper introduces a model that blends ideas from generative topic models with those from recurrent neural network language models.",
"The authors evaluate the proposed approach on a document level classification benchmark as well as a language modeling benchmark and it seems to work well.",
"There is also some analysis as to topics learned by the model and its ability to generate text.",
"Overall the paper is clearly written and with the code promised by the authors others should be able to re-implement the approach.",
"I have 2 potentially major questions I would ask the authors to address: 1 - LDA topic models make an exchangability (bag of words) assumption.",
"The discussion of the generative story for TopicRNN should explicitly discuss whether this assumption is also made.",
"On the surface it appears it is since y_t is sampled using only the document topic vector and h_t but we know that in practice h_t comes from a recurrent model that observes y_t-1.",
"Not clear how this clean exposition of the generative model relates to what is actually done.",
"In the Generating sequential text section it’s clear the topic model can’t generate words without using y_1 - t-1 but this seems inconsistent with the generative model specification.",
"This needs to be shown in the paper and made clear to have a complete paper.",
"2 - The topic model only allows for linear interactions of the topic vector theta.",
"It seems like this might be required to keep the generative model tractable but seems like a very poor assumption.",
"We would expect the topic representation to have rich interactions with a language model to create nonlinear adjustments to word probabilities for a document.",
"Please add discussion as to why this modeling choice exists and if possible how future work could modify that assumption (or explain why it’s not such a bad assumption as one might imagine) Figure 2 colors very difficult to distinguish."
] | {
"criticism": 4,
"example": 1,
"importance_and_relevance": 0,
"materials_and_methods": 12,
"praise": 2,
"presentation_and_reporting": 8,
"results_and_discussion": 2,
"suggestion_and_solution": 7,
"total": 14
} | 0.285714 | 0.071429 | 0 | 0.857143 | 0.142857 | 0.571429 | 0.142857 | 0.5 | 14 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 2.571429 | 2.555646 | 0.015782 |
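The review's second major question concerns how the topic vector theta enters the model: it contributes only an additive, linear bias to the word logits, gated off for stop words. The following is a minimal sketch of that linear interaction; the sizes and the names W, B, h_t, and l_t are illustrative assumptions, not taken from the paper.

```python
# Sketch (assumed, not the paper's code) of the linear topic-logit interaction
# questioned in the review: theta only shifts the logits additively.
import numpy as np

rng = np.random.default_rng(0)
V, H, K = 10, 8, 4                         # toy vocabulary, hidden, topic sizes

W = rng.normal(size=(V, H))                # RNN-state-to-logit projection
B = rng.normal(size=(V, K))                # topic-to-logit projection, linear in theta

h_t = rng.normal(size=H)                   # hypothetical RNN state at step t
theta = rng.dirichlet(np.ones(K))          # document topic proportions
l_t = 0                                    # stop-word indicator: 1 removes the topic term

logits = W @ h_t + (1 - l_t) * (B @ theta)  # purely additive interaction
p_t = np.exp(logits - logits.max())
p_t /= p_t.sum()                           # next-word distribution
print(p_t.round(3))
```

A nonlinear alternative of the kind the review asks about would replace the `B @ theta` term with a joint function of `h_t` and `theta`, at the cost of a less tractable generative story.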
Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data | We introduce Deep Variational Bayes Filters (DVBF), a new method for unsupervised learning and identification of latent Markovian state space models. Leveraging recent advances in Stochastic Gradient Variational Bayes, DVBF can overcome intractable inference distributions via variational inference. Thus, it can handle highly nonlinear input data with temporal and spatial dependencies such as image sequences without domain knowledge. Our experiments show that enabling backpropagation through transitions enforces state space assumptions and significantly improves information content of the latent embedding. This also enables realistic long-term prediction. | The paper proposes to use the very standard SVGB in a sequential setting like several previous works did. However, they proposes to have a clear state space constraints similar to Linear Gaussian Models: Markovian latent space and conditional independence of observed variables given the latent variables. However the model is in this case non-linear. These assumptions are well motivated by the goal of having meaningful latent variables. The experiments are interesting but I*m still not completely convinced by the regression results in Figure 3, namely that one could obtain the angle and velocity from the state but using a function more powerful than a linear function. Also, why isn*t the model from (Watter et al., 2015) not included ? After rereading I*m not sure I understand why the coordinates should be combined in a 3x3 checkerboard as said in Figure 5a. Then paper is well motivated and the resulting model is novel enough, the bouncing ball experiment is not quite convincing, especially in prediction, as the problem is fully determined by its initial velocity and position. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDeep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data\nWe introduce Deep Variational Bayes Filters (DVBF), a new method for unsupervised learning and identification of latent Markovian state space models. Leveraging recent advances in Stochastic Gradient Variational Bayes, DVBF can overcome intractable inference distributions via variational inference. Thus, it can handle highly nonlinear input data with temporal and spatial dependencies such as image sequences without domain knowledge. Our experiments show that enabling backpropagation through transitions enforces state space assumptions and significantly improves information content of the latent embedding. This also enables realistic long-term prediction.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes to use the very standard SVGB in a sequential setting like several previous works did. However, they proposes to have a clear state space constraints similar to Linear Gaussian Models: Markovian latent space and conditional independence of observed variables given the latent variables. However the model is in this case non-linear. These assumptions are well motivated by the goal of having meaningful latent variables. The experiments are interesting but I*m still not completely convinced by the regression results in Figure 3, namely that one could obtain the angle and velocity from the state but using a function more powerful than a linear function. Also, why isn*t the model from (Watter et al., 2015) not included ? After rereading I*m not sure I understand why the coordinates should be combined in a 3x3 checkerboard as said in Figure 5a. Then paper is well motivated and the resulting model is novel enough, the bouncing ball experiment is not quite convincing, especially in prediction, as the problem is fully determined by its initial velocity and position.",
"role": "assistant"
}
]
| [
"The paper proposes to use the very standard SVGB in a sequential setting like several previous works did.",
"However, they proposes to have a clear state space constraints similar to Linear Gaussian Models: Markovian latent space and conditional independence of observed variables given the latent variables.",
"However the model is in this case non-linear.",
"These assumptions are well motivated by the goal of having meaningful latent variables.",
"The experiments are interesting but I*m still not completely convinced by the regression results in Figure 3, namely that one could obtain the angle and velocity from the state but using a function more powerful than a linear function.",
"Also, why isn*t the model from (Watter et al., 2015) not included ?",
"After rereading I*m not sure I understand why the coordinates should be combined in a 3x3 checkerboard as said in Figure 5a.",
"Then paper is well motivated and the resulting model is novel enough, the bouncing ball experiment is not quite convincing, especially in prediction, as the problem is fully determined by its initial velocity and position."
] | {
"criticism": 4,
"example": 1,
"importance_and_relevance": 0,
"materials_and_methods": 6,
"praise": 3,
"presentation_and_reporting": 1,
"results_and_discussion": 3,
"suggestion_and_solution": 0,
"total": 8
} | 0.5 | 0.125 | 0 | 0.75 | 0.375 | 0.125 | 0.375 | 0 | 8 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 2.25 | 1.855439 | 0.394561 |
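The regression results in Figure 3 that the review questions are linear probes: a ground-truth quantity such as the angle or angular velocity is regressed onto the learned latent state by ordinary least squares. Below is a small sketch of that style of probe on synthetic data; the latent states, dimensions, and noise levels are made up for illustration.

```python
# Sketch of a linear probe: regress a ground-truth signal onto latent states
# with ordinary least squares and report R^2, in the style of Figure 3.
import numpy as np

rng = np.random.default_rng(1)
T = 500
omega = np.cumsum(rng.normal(scale=0.1, size=T))   # toy angular-velocity signal

# Hypothetical 3-d latent states: a random linear map of (omega, omega^2)
# plus noise, standing in for whatever the encoder would actually produce.
M = rng.normal(size=(3, 2))
Z = np.stack([omega, omega**2], axis=1) @ M.T + rng.normal(scale=0.05, size=(T, 3))

X = np.c_[Z, np.ones(T)]                           # add a bias column
coef, *_ = np.linalg.lstsq(X, omega, rcond=None)
pred = X @ coef
r2 = 1 - ((omega - pred) ** 2).sum() / ((omega - omega.mean()) ** 2).sum()
print(f"linear-probe R^2: {r2:.3f}")
```

Repeating the probe with a more powerful regressor, as the review suggests, would show how much information the latent state carries beyond what a linear readout can extract.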
Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this work, we present an efficient gradient estimator that replaces the non-differentiable sample from a categorical distribution with a differentiable sample from a novel Gumbel-Softmax distribution. This distribution has the essential property that it can be smoothly annealed into a categorical distribution. We show that our Gumbel-Softmax estimator outperforms state-of-the-art gradient estimators on structured output prediction and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classification. | This paper introduces a continuous relaxation of categorical distribution, namely the the Gumbel-Softmax distribution, such that generative models with categorical random variables can be trained using reparameterization (path-derivative) gradients. The method is shown to improve upon other methods in terms of the achieved log-likelihoods of the resulting models. The main contribution, namely the method itself, is simple yet nontrivial and worth publishing, and seems effective in experiments. The paper is well-written, and I applaud the details provided in the appendix. The main application seems to be semi-supervised situations where you really want categorical variables. - P1: *differentiable sampling mechanism for softmax*. *sampling* => *approximate sampling*, since it*s technically sampling from the Gumbal-softmax. - P3: *backpropagtion* - Section 4.1: Interesting experiments. - It would be interesting to report whether there is any discrepancy between the relaxed and non-relaxed models in terms of log-likelihood. Currently, only the likelihoods under the non-relaxed models are reported. - It is slightly discouraging that the temperature (a nuisance parameter) is used differently across experiments. It would be nice to give more details on whether you were succesful in learning the temperature, instead of annealing it; it would be interesting if that hyper-parameter could be eliminated. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. 
Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nCategorical Reparameterization with Gumbel-Softmax\nCategorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this work, we present an efficient gradient estimator that replaces the non-differentiable sample from a categorical distribution with a differentiable sample from a novel Gumbel-Softmax distribution. This distribution has the essential property that it can be smoothly annealed into a categorical distribution. We show that our Gumbel-Softmax estimator outperforms state-of-the-art gradient estimators on structured output prediction and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classification.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper introduces a continuous relaxation of categorical distribution, namely the the Gumbel-Softmax distribution, such that generative models with categorical random variables can be trained using reparameterization (path-derivative) gradients. The method is shown to improve upon other methods in terms of the achieved log-likelihoods of the resulting models. The main contribution, namely the method itself, is simple yet nontrivial and worth publishing, and seems effective in experiments. The paper is well-written, and I applaud the details provided in the appendix. The main application seems to be semi-supervised situations where you really want categorical variables. - P1: *differentiable sampling mechanism for softmax*. *sampling* => *approximate sampling*, since it*s technically sampling from the Gumbal-softmax. - P3: *backpropagtion* - Section 4.1: Interesting experiments. - It would be interesting to report whether there is any discrepancy between the relaxed and non-relaxed models in terms of log-likelihood. Currently, only the likelihoods under the non-relaxed models are reported. - It is slightly discouraging that the temperature (a nuisance parameter) is used differently across experiments. It would be nice to give more details on whether you were succesful in learning the temperature, instead of annealing it; it would be interesting if that hyper-parameter could be eliminated.",
"role": "assistant"
}
]
| [
"This paper introduces a continuous relaxation of categorical distribution, namely the the Gumbel-Softmax distribution, such that generative models with categorical random variables can be trained using reparameterization (path-derivative) gradients.",
"The method is shown to improve upon other methods in terms of the achieved log-likelihoods of the resulting models.",
"The main contribution, namely the method itself, is simple yet nontrivial and worth publishing, and seems effective in experiments.",
"The paper is well-written, and I applaud the details provided in the appendix.",
"The main application seems to be semi-supervised situations where you really want categorical variables.",
"- P1: *differentiable sampling mechanism for softmax*.",
"*sampling* => *approximate sampling*, since it*s technically sampling from the Gumbal-softmax.",
"- P3: *backpropagtion* - Section 4.1: Interesting experiments.",
"- It would be interesting to report whether there is any discrepancy between the relaxed and non-relaxed models in terms of log-likelihood.",
"Currently, only the likelihoods under the non-relaxed models are reported.",
"- It is slightly discouraging that the temperature (a nuisance parameter) is used differently across experiments.",
"It would be nice to give more details on whether you were succesful in learning the temperature, instead of annealing it; it would be interesting if that hyper-parameter could be eliminated."
] | {
"criticism": 2,
"example": 1,
"importance_and_relevance": 2,
"materials_and_methods": 11,
"praise": 3,
"presentation_and_reporting": 3,
"results_and_discussion": 3,
"suggestion_and_solution": 2,
"total": 12
} | 0.166667 | 0.083333 | 0.166667 | 0.916667 | 0.25 | 0.25 | 0.25 | 0.166667 | 12 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.25 | 2.234218 | 0.015782 |
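For concreteness, the sampling mechanism the review and abstract describe fits in a few lines: add Gumbel noise to the logits, divide by a temperature, and apply softmax; lowering the temperature moves the relaxed sample toward a one-hot categorical sample. The temperatures below are illustrative, not the paper's schedule.

```python
# Minimal sketch of Gumbel-Softmax sampling; tau values are illustrative.
import numpy as np

def gumbel_softmax(logits, tau, rng):
    u = rng.uniform(size=logits.shape)
    g = -np.log(-np.log(u))            # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = np.exp(y - y.max())
    return y / y.sum()                 # relaxed (approximate) one-hot sample

rng = np.random.default_rng(2)
logits = np.log(np.array([0.2, 0.5, 0.3]))
for tau in (5.0, 1.0, 0.1):            # annealing toward a categorical sample
    print(tau, gumbel_softmax(logits, tau, rng).round(3))
```

At tau = 0.1 the samples are nearly one-hot, which is the smooth-annealing property the abstract claims; it also makes the reviewer's “approximate sampling” wording precise.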
Latent Sequence Decompositions | Sequence-to-sequence models rely on a fixed decomposition of the target sequences into a sequence of tokens that may be words, word-pieces or characters. The choice of these tokens and the decomposition of the target sequences into a sequence of tokens is often static, and independent of the input, output data domains. This can potentially lead to a sub-optimal choice of token dictionaries, as the decomposition is not informed by the particular problem being solved. In this paper we present Latent Sequence Decompositions (LSD), a framework in which the decomposition of sequences into constituent tokens is learnt during the training of the model. The decomposition depends both on the input sequence and on the output sequence. In LSD, during training, the model samples decompositions incrementally, from left to right by locally sampling between valid extensions. We experiment with the Wall Street Journal speech recognition task. Our LSD model achieves 12.9% WER compared to a character baseline of 14.8% WER. When combined with a convolutional network on the encoder, we achieve a WER of 9.6%. | Interesting paper which proposes jointly learning automatic segmentation of words to sub words and their acoustic models. Although the training handles the word segmentation as hidden variable which depends also on the acoustic representations, during the decoding only maximum approximation is used. The authors present nice improvements over character based results, however they did not compare results with word segmentation which does not assume the dependency on acoustic. Obviously, only text based segmentation would result in two (but simpler) independent tasks. In order to extract such segmentation several publicly open tools are available and should be cited. Some of those tools can also exploit the unigram probabilities of the words to perform their segmentations. It looks that the improvements come from the longer acoustical units - longer acoustical constraints which could lead to less confused search -, pointing towards full word models. In another way, less tokens are more probable due to less multiplication of probabilities. As a thought experiment for an extreme case: if all the possible segmentations would be possible (mixture of all word fragments, characters, and full-words), would the proposed model use word fragments at all? (WSJ is a closed vocabulary task). It would be good to show that the sub word model could outperform even a full-word model (no segmentation). Your model estimates p(z_t|x,z<t; heta), but during training both p(z_t|x,y,z<t; heta) and p(z_t|x,z<t; heta) are needed. Their connections and calculations, in general Section 2 should be more detailed. The experimental setup should be presented in a replicable way. The authors use one sample of Z during the gradient calculation. Would not it be more consistent then to use the maximum instead of sampling? Or is even getting the maximum is computationally too expensive? Considering the left-to-right approximation, this might be possible and more consistent with your decoding, please address this issue in details. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. 
Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLatent Sequence Decompositions\nSequence-to-sequence models rely on a fixed decomposition of the target sequences into a sequence of tokens that may be words, word-pieces or characters. The choice of these tokens and the decomposition of the target sequences into a sequence of tokens is often static, and independent of the input, output data domains. This can potentially lead to a sub-optimal choice of token dictionaries, as the decomposition is not informed by the particular problem being solved. In this paper we present Latent Sequence Decompositions (LSD), a framework in which the decomposition of sequences into constituent tokens is learnt during the training of the model. The decomposition depends both on the input sequence and on the output sequence. In LSD, during training, the model samples decompositions incrementally, from left to right by locally sampling between valid extensions. We experiment with the Wall Street Journal speech recognition task. Our LSD model achieves 12.9% WER compared to a character baseline of 14.8% WER. When combined with a convolutional network on the encoder, we achieve a WER of 9.6%.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nInteresting paper which proposes jointly learning automatic segmentation of words to sub words and their acoustic models. Although the training handles the word segmentation as hidden variable which depends also on the acoustic representations, during the decoding only maximum approximation is used. The authors present nice improvements over character based results, however they did not compare results with word segmentation which does not assume the dependency on acoustic. Obviously, only text based segmentation would result in two (but simpler) independent tasks. In order to extract such segmentation several publicly open tools are available and should be cited. Some of those tools can also exploit the unigram probabilities of the words to perform their segmentations. It looks that the improvements come from the longer acoustical units - longer acoustical constraints which could lead to less confused search -, pointing towards full word models. In another way, less tokens are more probable due to less multiplication of probabilities. As a thought experiment for an extreme case: if all the possible segmentations would be possible (mixture of all word fragments, characters, and full-words), would the proposed model use word fragments at all? (WSJ is a closed vocabulary task). It would be good to show that the sub word model could outperform even a full-word model (no segmentation). Your model estimates p(z_t|x,z<t;\theta), but during training both p(z_t|x,y,z<t;\theta) and p(z_t|x,z<t;\theta) are needed. Their connections and calculations, in general Section 2 should be more detailed. The experimental setup should be presented in a replicable way. The authors use one sample of Z during the gradient calculation. Would not it be more consistent then to use the maximum instead of sampling? Or is even getting the maximum is computationally too expensive? Considering the left-to-right approximation, this might be possible and more consistent with your decoding, please address this issue in details.",
"role": "assistant"
}
]
| [
"Interesting paper which proposes jointly learning automatic segmentation of words to sub words and their acoustic models.",
"Although the training handles the word segmentation as hidden variable which depends also on the acoustic representations, during the decoding only maximum approximation is used.",
"The authors present nice improvements over character based results, however they did not compare results with word segmentation which does not assume the dependency on acoustic.",
"Obviously, only text based segmentation would result in two (but simpler) independent tasks.",
"In order to extract such segmentation several publicly open tools are available and should be cited.",
"Some of those tools can also exploit the unigram probabilities of the words to perform their segmentations.",
"It looks that the improvements come from the longer acoustical units - longer acoustical constraints which could lead to less confused search -, pointing towards full word models.",
"In another way, less tokens are more probable due to less multiplication of probabilities.",
"As a thought experiment for an extreme case: if all the possible segmentations would be possible (mixture of all word fragments, characters, and full-words), would the proposed model use word fragments at all?",
"(WSJ is a closed vocabulary task).",
"It would be good to show that the sub word model could outperform even a full-word model (no segmentation).",
"Your model estimates p(z_t|x,z<t;\theta), but during training both p(z_t|x,y,z<t;\theta) and p(z_t|x,z<t;\theta) are needed.",
"Their connections and calculations, in general Section 2 should be more detailed.",
"The experimental setup should be presented in a replicable way.",
"The authors use one sample of Z during the gradient calculation.",
"Would not it be more consistent then to use the maximum instead of sampling?",
"Or is even getting the maximum is computationally too expensive?",
"Considering the left-to-right approximation, this might be possible and more consistent with your decoding, please address this issue in details."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 14,
"praise": 2,
"presentation_and_reporting": 12,
"results_and_discussion": 3,
"suggestion_and_solution": 7,
"total": 18
} | 0.055556 | 0 | 0.055556 | 0.777778 | 0.111111 | 0.666667 | 0.166667 | 0.388889 | 18 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.222222 | 1.827661 | 0.394561 |
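The hidden variable at issue in this review is a left-to-right segmentation of the target string over a token inventory. A toy sketch of sampling one such decomposition follows; the inventory and the uniform choice are placeholders for the model's learned p(z_t | x, z_<t), not the authors' code.

```python
# Toy sketch: sample one left-to-right decomposition of a target string.
# A uniform choice over valid extensions stands in for the learned model.
import random

TOKENS = {"c", "a", "t", "ca", "at", "cat"}   # hypothetical token inventory

def sample_decomposition(word, rng):
    pos, z = 0, []
    while pos < len(word):
        valid = [t for t in sorted(TOKENS) if word.startswith(t, pos)]
        tok = rng.choice(valid)               # locally sample a valid extension
        z.append(tok)
        pos += len(tok)
    return z

rng = random.Random(3)
for _ in range(3):
    print(sample_decomposition("cat", rng))
```

Because single characters are in the inventory, a valid extension always exists, which is what makes the left-to-right sampling during training well defined.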
Latent Sequence Decompositions | Sequence-to-sequence models rely on a fixed decomposition of the target sequences into a sequence of tokens that may be words, word-pieces or characters. The choice of these tokens and the decomposition of the target sequences into a sequence of tokens is often static, and independent of the input, output data domains. This can potentially lead to a sub-optimal choice of token dictionaries, as the decomposition is not informed by the particular problem being solved. In this paper we present Latent Sequence Decompositions (LSD), a framework in which the decomposition of sequences into constituent tokens is learnt during the training of the model. The decomposition depends both on the input sequence and on the output sequence. In LSD, during training, the model samples decompositions incrementally, from left to right by locally sampling between valid extensions. We experiment with the Wall Street Journal speech recognition task. Our LSD model achieves 12.9% WER compared to a character baseline of 14.8% WER. When combined with a convolutional network on the encoder, we achieve a WER of 9.6%. | This submission proposes to learn the word decomposition, or word to sub-word sequence mapping jointly with the attention based sequence-to-sequence model. A particular feature of this approach is that the decomposition is not static, instead, it also conditions on the acoustic input, and the mapping is probabilistic, i.e., one word may map to multiple sub-word sequences. The authors argue that the dynamic decomposition approach can more naturally reflect the acoustic pattern. Interestingly, the motivation behind this approach is analogous to learning the pronunciation mixture model for HMM based speech recognition, where the probabilistic mapping from a word to its pronunciations also conditions on the acoustic input, e.g., I. McGraw, I. Badr, and J. Glass, *Learning lexicons form speech using a pronunciation mixture model,* in IEEE Transactions on Audio, Speech, and Language Processing, 2013 L. Lu, A. Ghoshal, S. Renals, *Acoustic data-driven pronunciation lexicon for large vocabulary speech recognition*, in Proc. ASRU R. Singh, B. Raj, and R. Stern, *Automatic generation of subword units for speech recognition systems,* in IEEE Transactions on Speech and Audio Processing, 2002 It would be interesting to put this work in the context by linking it to some previous works in the HMM framework. Overall, the paper is well written, and it is theoretically convincing. The experimental study could be more solid, e.g., it is reasonable to have a word-level baseline, as the proposed approach lies in between the character-level and word-level systems. the vocabulary size of the WSJ si284 dataset is 20K at maximum, which is not very large for the softmax layer, and it is a closed vocabulary task. I guess the word-level system may be also competitive to the numbers reported in this paper. Furthermore, can you explain what is the computational bottleneck of the proposed approach? You downsampled the data by the factor of 4 using an RNN, and it still took around 5 days to converge. To me, it is a bit expensive, especially given that you only take one sample when computing the gradient. Table 2 is a little bit misleading, as CTC with language model and seq2seq with a language model model from Bahdanau et al. is much closer to the best number reported in this Table 2, while you may only get a very small improvement using a language model. 
Finally, *O(5) days to converge* sounds a bit odd to me. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLatent Sequence Decompositions\nSequence-to-sequence models rely on a fixed decomposition of the target sequences into a sequence of tokens that may be words, word-pieces or characters. The choice of these tokens and the decomposition of the target sequences into a sequence of tokens is often static, and independent of the input, output data domains. This can potentially lead to a sub-optimal choice of token dictionaries, as the decomposition is not informed by the particular problem being solved. In this paper we present Latent Sequence Decompositions (LSD), a framework in which the decomposition of sequences into constituent tokens is learnt during the training of the model. The decomposition depends both on the input sequence and on the output sequence. In LSD, during training, the model samples decompositions incrementally, from left to right by locally sampling between valid extensions. We experiment with the Wall Street Journal speech recognition task. Our LSD model achieves 12.9% WER compared to a character baseline of 14.8% WER. When combined with a convolutional network on the encoder, we achieve a WER of 9.6%.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis submission proposes to learn the word decomposition, or word to sub-word sequence mapping jointly with the attention based sequence-to-sequence model. A particular feature of this approach is that the decomposition is not static, instead, it also conditions on the acoustic input, and the mapping is probabilistic, i.e., one word may map to multiple sub-word sequences. The authors argue that the dynamic decomposition approach can more naturally reflect the acoustic pattern. Interestingly, the motivation behind this approach is analogous to learning the pronunciation mixture model for HMM based speech recognition, where the probabilistic mapping from a word to its pronunciations also conditions on the acoustic input, e.g., I. McGraw, I. Badr, and J. Glass, *Learning lexicons form speech using a pronunciation mixture model,* in IEEE Transactions on Audio, Speech, and Language Processing, 2013 L. Lu, A. Ghoshal, S. Renals, *Acoustic data-driven pronunciation lexicon for large vocabulary speech recognition*, in Proc. ASRU R. Singh, B. Raj, and R. Stern, *Automatic generation of subword units for speech recognition systems,* in IEEE Transactions on Speech and Audio Processing, 2002 It would be interesting to put this work in the context by linking it to some previous works in the HMM framework. Overall, the paper is well written, and it is theoretically convincing. The experimental study could be more solid, e.g., it is reasonable to have a word-level baseline, as the proposed approach lies in between the character-level and word-level systems. the vocabulary size of the WSJ si284 dataset is 20K at maximum, which is not very large for the softmax layer, and it is a closed vocabulary task. I guess the word-level system may be also competitive to the numbers reported in this paper. Furthermore, can you explain what is the computational bottleneck of the proposed approach? You downsampled the data by the factor of 4 using an RNN, and it still took around 5 days to converge. To me, it is a bit expensive, especially given that you only take one sample when computing the gradient. 
Table 2 is a little bit misleading, as CTC with a language model and seq2seq with a language model from Bahdanau et al. are much closer to the best number reported in this Table 2, while you may only get a very small improvement using a language model. Finally, “O(5) days to converge” sounds a bit odd to me.",
"role": "assistant"
}
]
| [
"This submission proposes to learn the word decomposition, or word to sub-word sequence mapping jointly with the attention based sequence-to-sequence model.",
"A particular feature of this approach is that the decomposition is not static, instead, it also conditions on the acoustic input, and the mapping is probabilistic, i.e., one word may map to multiple sub-word sequences.",
"The authors argue that the dynamic decomposition approach can more naturally reflect the acoustic pattern.",
"Interestingly, the motivation behind this approach is analogous to learning the pronunciation mixture model for HMM based speech recognition, where the probabilistic mapping from a word to its pronunciations also conditions on the acoustic input, e.g., I. McGraw, I. Badr, and J.",
"Glass, *Learning lexicons form speech using a pronunciation mixture model,* in IEEE Transactions on Audio, Speech, and Language Processing, 2013 L. Lu, A. Ghoshal, S. Renals, *Acoustic data-driven pronunciation lexicon for large vocabulary speech recognition*, in Proc.",
"ASRU R. Singh, B. Raj, and R. Stern, *Automatic generation of subword units for speech recognition systems,* in IEEE Transactions on Speech and Audio Processing, 2002 It would be interesting to put this work in the context by linking it to some previous works in the HMM framework.",
"Overall, the paper is well written, and it is theoretically convincing.",
"The experimental study could be more solid, e.g., it is reasonable to have a word-level baseline, as the proposed approach lies in between the character-level and word-level systems.",
"the vocabulary size of the WSJ si284 dataset is 20K at maximum, which is not very large for the softmax layer, and it is a closed vocabulary task.",
"I guess the word-level system may be also competitive to the numbers reported in this paper.",
"Furthermore, can you explain what is the computational bottleneck of the proposed approach?",
"You downsampled the data by the factor of 4 using an RNN, and it still took around 5 days to converge.",
"To me, it is a bit expensive, especially given that you only take one sample when computing the gradient.",
"Table 2 is a little bit misleading, as CTC with language model and seq2seq with a language model model from Bahdanau et al.",
"is much closer to the best number reported in this Table 2, while you may only get a very small improvement using a language model.",
"Finally, *O(5) days to converge* sounds a bit odd to me."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 14,
"praise": 1,
"presentation_and_reporting": 8,
"results_and_discussion": 3,
"suggestion_and_solution": 4,
"total": 16
} | 0.1875 | 0 | 0.125 | 0.875 | 0.0625 | 0.5 | 0.1875 | 0.25 | 16 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.1875 | 2.045458 | 0.142042 |
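The review's concern about the maximum approximation can be made concrete: training should credit the marginal probability over all decompositions of the target, while decoding keeps only the single most probable one. A toy example with made-up segmentation probabilities:

```python
# Hypothetical p(z | x) over the four segmentations of "cat"; the numbers
# are invented purely to contrast the marginal with the max approximation.
p = {"c a t": 0.10, "ca t": 0.30, "c at": 0.20, "cat": 0.40}

marginal = sum(p.values())        # what the training objective sums over
best = max(p, key=p.get)          # what max-approximation decoding keeps
print(f"marginal = {marginal:.2f}; argmax z = {best!r} with p = {p[best]:.2f}")
```

Sampling a single z per gradient step, as the authors do, is a cheap stochastic stand-in for this sum, which is why the review asks whether taking the maximum instead would be more consistent with decoding.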
Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as ‘PGQL’, for policy gradient and Q-learning. We also establish an equivalency between action-value fitting techniques and actor-critic algorithms, showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate improved data efficiency and stability of PGQL. In particular, we tested PGQL on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning. | This is a very nicely written paper which unifies some value-based and policy-based (regularized policy gradient) methods, by pointing out connections between the value function and policy which have not been established before. The theoretical results are new and insightful, and will likely be useful in the RL field much beyond the specific algorithm being proposed in the paper. This being said, the paper does exploit the theory to produce a unified version of Q-learning and policy gradient, which proves to work on par or better than the state-of-art algorithms on the Atari suite. The empirical section is very well explained in terms of what optimization were done. One minor comment I had was related to the stationary distribution used for a policy - there are subtleties here between using a discounted vs non-discounted distribution which are not crucial in the tabular case, but will need to be addressed in the long run in the function approximation case. This being said, there is no major problem for the current version of the paper. Overall, the paper is definitely worthy of acceptance, and will likely influence a broad swath of RL, as it opens the door to further theoretical results as well as algorithm development. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. 
I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nCombining policy gradient and Q-learning\nPolicy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as ‘PGQL’, for policy gradient and Q-learning. We also establish an equivalency between action-value fitting techniques and actor-critic algorithms, showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate improved data efficiency and stability of PGQL. In particular, we tested PGQL on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis is a very nicely written paper which unifies some value-based and policy-based (regularized policy gradient) methods, by pointing out connections between the value function and policy which have not been established before. The theoretical results are new and insightful, and will likely be useful in the RL field much beyond the specific algorithm being proposed in the paper. This being said, the paper does exploit the theory to produce a unified version of Q-learning and policy gradient, which proves to work on par or better than the state-of-art algorithms on the Atari suite. The empirical section is very well explained in terms of what optimization were done. One minor comment I had was related to the stationary distribution used for a policy - there are subtleties here between using a discounted vs non-discounted distribution which are not crucial in the tabular case, but will need to be addressed in the long run in the function approximation case. This being said, there is no major problem for the current version of the paper. Overall, the paper is definitely worthy of acceptance, and will likely influence a broad swath of RL, as it opens the door to further theoretical results as well as algorithm development.",
"role": "assistant"
}
]
| [
"This is a very nicely written paper which unifies some value-based and policy-based (regularized policy gradient) methods, by pointing out connections between the value function and policy which have not been established before.",
"The theoretical results are new and insightful, and will likely be useful in the RL field much beyond the specific algorithm being proposed in the paper.",
"This being said, the paper does exploit the theory to produce a unified version of Q-learning and policy gradient, which proves to work on par or better than the state-of-art algorithms on the Atari suite.",
"The empirical section is very well explained in terms of what optimization were done.",
"One minor comment I had was related to the stationary distribution used for a policy - there are subtleties here between using a discounted vs non-discounted distribution which are not crucial in the tabular case, but will need to be addressed in the long run in the function approximation case.",
"This being said, there is no major problem for the current version of the paper.",
"Overall, the paper is definitely worthy of acceptance, and will likely influence a broad swath of RL, as it opens the door to further theoretical results as well as algorithm development."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 6,
"praise": 4,
"presentation_and_reporting": 3,
"results_and_discussion": 3,
"suggestion_and_solution": 1,
"total": 7
} | 0.142857 | 0 | 0.285714 | 0.857143 | 0.571429 | 0.428571 | 0.428571 | 0.142857 | 7 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.857143 | 2.288975 | 0.568167 |
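The fixed-point connection both PGQL reviews highlight can be checked numerically: the entropy-regularized optimum is a Boltzmann policy over the Q-values, so scaled log-probabilities recover Q up to a per-state constant. A small sketch with toy numbers; alpha and Q are illustrative, not taken from the paper.

```python
# Sketch: a Boltzmann policy over Q-values, and recovery of Q (up to a
# per-state constant) from the policy's action preferences.
import numpy as np

alpha = 0.5                              # entropy-regularization weight
Q = np.array([1.0, 2.0, 0.5])            # toy action values for one state

pi = np.exp(Q / alpha)
pi /= pi.sum()                           # Boltzmann policy, the regularized optimum

Q_recovered = alpha * np.log(pi)         # equals Q minus a per-state constant
assert np.allclose(Q_recovered - Q_recovered.max(), Q - Q.max())
print(pi.round(3), (Q_recovered - Q_recovered.max()).round(3))
```

These recovered values are the estimates to which, per the abstract, the Q-learning updates are applied.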
Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as ‘PGQL’, for policy gradient and Q-learning. We also establish an equivalency between action-value fitting techniques and actor-critic algorithms, showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate improved data efficiency and stability of PGQL. In particular, we tested PGQL on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning. | This paper shows how policy gradient and Q-Learning may be combined, improving learning as demonstrated in particular in the Arcade Learning Environment. The core idea is to note that entropy-regularized policy gradient leads to a Boltzmann policy based on Q values, thus linking pi and Q together and allowing both policy gradient and Q-Learning updates to be applied. I think this is a very interesting paper, not just for its results and the proposed algorithm (dubbed PGQ), but mostly because of the links it draws between several techniques, which I found quite insightful. That being said, I also believe it could have done a better job at clearly exposing these links: I found it somewhat difficult to follow, and it took me a while to wrap my head around it, even though the underlying concepts are not that complex. In particular: - The notation \tilde{Q}^\pi is introduced in a way that is not very clear, as “an estimate of the Q-values”, while eq. 5 is an exact equality (no estimation). - It is not clear to me what section 3.2 is bringing exactly; I wonder if it could just be removed to expand some other sections with more explanations. - The links to dueling networks (Wang et al., 2016) are in my opinion not explicit and detailed enough (in 3.3 & 4.1): as far as I can tell the proposed architecture ends up being very similar to such networks, and thus it would be worth telling more about it (also, in the experiments my understanding is that the “variant of asynchronous deep Q-learning” being used is essentially such a dueling network, but it is not clearly stated). I also believe it should be mentioned that PGQ can also be seen as combining Q-Learning with n-step expected Sarsa using a dueling network: this kind of example helps better understand the links between methods. - Overall I wish section 3.3 was clearer, as it draws some very interesting links, but it is hard to see where this is all going when reading the paper for the first time. One confusing point is the relationship with section 3.2, which assumes a critic outputting Q values while in 3.3 the critic outputs V. 
I hope the authors can try and improve the readability of the paper in a final version, as well as clarify the points raised in pre-review questions (in particular related to experimental details, the derivation of eq. 4, and the issue of the discounted distribution of states). Minor remarks: - The function r(s, a) used in the Bellman equation in section 2 is not formally defined. It looks a bit weird because the expectation is on s* and b* but r(s, a) does not depend on them (so either it should be moved out of the expectation, or the expectation should also be over the reward, depending on how r is defined) - The definition of the Boltzmann policy at end of 2.1 is a bit confusing since there is a sum over *a* of a quantity that does not depend (clearly) on *a* - I believe 4.3 is for the tabular case but this is not clearly stated - Any idea why in Fig. 1 the 3 algorithms do not all converge to the same policy? In such a simple toy setting I would expect it to be the case. Typos: - *we refer to the classic text Sutton & Barto (1998)* => missing *by*? - *Online policy gradient typically require an estimate of the action-values function* => requires & value - *the agent generates experience from interacting the environment* => with the environment - in eq. 12 (first line) there is a comma to remove near the end, just before the dlog pi - *allowing us the spread the influence of a reward* => to spread - *in the off-policy case tabular case* => remove the first case - *The green line is Q-learning where at the step an update is performed* => at each step - In Fig. 2 it says A2C instead of A3C NB: I did not have time to carefully read Appendix A | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nCombining policy gradient and Q-learning\nPolicy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as ‘PGQL’, for policy gradient and Q-learning. We also establish an equivalency between action-value fitting techniques and actor-critic algorithms, showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate improved data efficiency and stability of PGQL. In particular, we tested PGQL on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper shows how policy gradient and Q-Learning may be combined together, improving learning as demonstrated in particular in the Atari Learning Environment. The core idea is to note that entropy-regularized policy gradient leads to a Boltzmann policy based on Q values, thus linking pi & Q together and allowing both policy gradient and Q-Learning updates to be applied. I think this is a very interesting paper, not just for its results and the proposed algorithm (dubbed PGQ), but mostly because of the links it draws between several techniques, which I found quite insightful. That being said, I also believe it could have done a better job at clearly exposing these links: I found it somewhat difficult to follow, and it took me a while to wrap my head around it, even though the underlying concepts are not that complex. In particular: - The notation \tilde{Q}^pi is introduced in a way that is not very clear, as *an estimate of the Q-values* while eq. 5 is an exact equality (no estimation) - It is not clear to me what section 3.2 is bringing exactly, I wonder if it could just be removed to expand some other sections with more explanations. - The links to dueling networks (Wang et al, 2016) are in my opinion not explicit and detailed enough (in 3.3 & 4.1): as far as I can tell the proposed architecture ends up being very similar to such networks and thus it would be worth telling more about it (also in experiments my understanding is that the *variant of asynchronous deep Q-learning* being used is essentially such a dueling network, but it is not clearly stated). I also believe it should be mentioned that PGQ can also be seen as combining Q-Learning with n-step expected Sarsa using a dueling network: this kind of example helps better understand the links between methods - Overall I wish section 3.3 was clearer, as it draws some very interesting links, but it is hard to see where this is all going when reading the paper for the first time. One confusing point is w.r.t. to the relationship with section 3.2, that assumes a critic outputting Q values while in 3.3 the critic outputs V. 
The “mu” distribution also comes somewhat out of nowhere. I hope the authors can try and improve the readability of the paper in a final version, as well as clarify the points raised in pre-review questions (in particular related to experimental details, the derivation of eq. 4, and the issue of the discounted distribution of states). Minor remarks: - The function r(s, a) used in the Bellman equation in section 2 is not formally defined. It looks a bit weird because the expectation is on s' and b' but r(s, a) does not depend on them (so either it should be moved out of the expectation, or the expectation should also be over the reward, depending on how r is defined) - The definition of the Boltzmann policy at the end of 2.1 is a bit confusing since there is a sum over “a” of a quantity that does not depend (clearly) on “a” - I believe 4.3 is for the tabular case but this is not clearly stated - Any idea why in Fig. 1 the 3 algorithms do not all converge to the same policy? In such a simple toy setting I would expect it to be the case. Typos: - “we refer to the classic text Sutton & Barto (1998)” => missing “by”? - “Online policy gradient typically require an estimate of the action-values function” => requires & value - “the agent generates experience from interacting the environment” => with the environment - in eq. 12 (first line) there is a comma to remove near the end, just before the dlog pi - “allowing us the spread the influence of a reward” => to spread - “in the off-policy case tabular case” => remove the first case - “The green line is Q-learning where at the step an update is performed” => at each step - In Fig. 2 it says A2C instead of A3C NB: I did not have time to carefully read Appendix A",
"role": "assistant"
}
]
| [
"This paper shows how policy gradient and Q-Learning may be combined together, improving learning as demonstrated in particular in the Atari Learning Environment.",
"The core idea is to note that entropy-regularized policy gradient leads to a Boltzmann policy based on Q values, thus linking pi & Q together and allowing both policy gradient and Q-Learning updates to be applied.",
"I think this is a very interesting paper, not just for its results and the proposed algorithm (dubbed PGQ), but mostly because of the links it draws between several techniques, which I found quite insightful.",
"That being said, I also believe it could have done a better job at clearly exposing these links: I found it somewhat difficult to follow, and it took me a while to wrap my head around it, even though the underlying concepts are not that complex.",
"In particular: - The notation \tilde{Q}^pi is introduced in a way that is not very clear, as *an estimate of the Q-values* while eq.",
"5 is an exact equality (no estimation) - It is not clear to me what section 3.2 is bringing exactly, I wonder if it could just be removed to expand some other sections with more explanations.",
"- The links to dueling networks (Wang et al, 2016) are in my opinion not explicit and detailed enough (in 3.3 & 4.1): as far as I can tell the proposed architecture ends up being very similar to such networks and thus it would be worth telling more about it (also in experiments my understanding is that the *variant of asynchronous deep Q-learning* being used is essentially such a dueling network, but it is not clearly stated).",
"I also believe it should be mentioned that PGQ can also be seen as combining Q-Learning with n-step expected Sarsa using a dueling network: this kind of example helps better understand the links between methods - Overall I wish section 3.3 was clearer, as it draws some very interesting links, but it is hard to see where this is all going when reading the paper for the first time.",
"One confusing point is w.r.t.",
"to the relationship with section 3.2, that assumes a critic outputting Q values while in 3.3 the critic outputs V. The *mu* distribution also comes somewhat out of nowhere.",
"I hope the authors can try and improve the readability of the paper in a final version, as well as clarify the points raised in pre-review questions (in particular related to experimental details, the derivation of eq.",
"4, and the issue of the discounted distribution of states).",
"Minor remarks: - The function r(s, a) used in the Bellman equation in section 2 is not formally defined.",
"It looks a bit weird because the expectation is on s* and b* but r(s, a) does not depend on them (so either it should be moved out of the expectation, or the expectation should also be over the reward, depending on how r is defined) - The definition of the Boltzmann policy at end of 2.1 is a bit confusing since there is a sum over *a* of a quantity that does not depend (clearly) on *a* - I believe 4.3 is for the tabular case but this is not clearly stated - Any idea why in Fig.",
"1 the 3 algorithms do not all converge to the same policy?",
"In such a simple toy setting I would expect it to be the case.",
"Typos: - *we refer to the classic text Sutton & Barto (1998)* => missing *by*?",
"- *Online policy gradient typically require an estimate of the action-values function* => requires & value - *the agent generates experience from interacting the environment* => with the environment - in eq.",
"12 (first line) there is a comma to remove near the end, just before the dlog pi - *allowing us the spread the influence of a reward* => to spread - *in the off-policy case tabular case* => remove the first case - *The green line is Q-learning where at the step an update is performed* => at each step - In Fig.",
"2 it says A2C instead of A3C NB: I did not have time to carefully read Appendix A"
] | {
"criticism": 10,
"example": 5,
"importance_and_relevance": 1,
"materials_and_methods": 10,
"praise": 2,
"presentation_and_reporting": 9,
"results_and_discussion": 4,
"suggestion_and_solution": 5,
"total": 20
} | 0.5 | 0.25 | 0.05 | 0.5 | 0.1 | 0.45 | 0.2 | 0.25 | 20 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.3 | 1.526661 | 0.773339 |
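A minimal sketch of the identity at the heart of the PGQL record above: the fixed point of entropy-regularized policy gradient is a Boltzmann policy over Q-values, so Q-values can be recovered from the policy's action preferences and a value baseline, and then refined with Q-learning updates. The temperature `alpha` and the single-state, tabular setting are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

alpha = 0.1  # entropy regularization temperature (illustrative)

def boltzmann_policy(q, alpha=alpha):
    # pi(a) proportional to exp(Q(a) / alpha): the regularized fixed point
    z = q / alpha
    z = z - z.max()                # numerical stability
    p = np.exp(z)
    return p / p.sum()

def q_from_preferences(pi, v, alpha=alpha):
    # invert the fixed point: Q(a) = alpha * (log pi(a) + H(pi)) + V
    log_pi = np.log(pi)
    entropy = -(pi * log_pi).sum()
    return alpha * (log_pi + entropy) + v

q = np.array([1.0, 0.5, -0.2])
pi = boltzmann_policy(q)
v = (pi * q).sum()                 # value baseline consistent with pi and q
assert np.allclose(q_from_preferences(pi, v), q)   # exact round trip
```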
Density estimation using Real NVP | Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful invertible and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact sampling, exact inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation and latent variable manipulations. | This paper presents a clever way of training a generative model which allows for exact inference, sampling and log likelihood evaluation. The main idea here is to make the Jacobian that comes when using the change of variables formula (from data to latents) triangular - this makes the determinant easy to calculate and hence learning possible. The paper nicely presents this core idea and a way to achieve this - by choosing special “routings” between the latents and data such that part of the transformation is identity and part some complex function of the input (a deep net, for example) the resulting Jacobian has a tractable structure. This routing can be cascaded to achieve even more complex transformations. On the experimental side, the model is trained on several datasets and the results are quite convincing, both in sample quality and quantitative measures. I would be very happy to see if this model is useful with other types of tasks and if the resulting latent representation helps with classification or inference such as image restoration. In summary - the paper is nicely written, results are quite good and the model is interesting - I'm happy to recommend acceptance. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDensity estimation using Real NVP\nUnsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful invertible and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact sampling, exact inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation and latent variable manipulations.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents a clever way of training a generative model which allows for exact inference, sampling and log likelihood evaluation. The main idea here is to make the Jacobian that comes when using the change of variables formula (from data to latents) triangular - this makes the determinant easy to calculate and hence learning possible. The paper nicely presents this core idea and a way to achieve this - by choosing special *routings* between the latents and data such that part of the transformation is identity and part some complex function of the input (a deep net, for example) the resulting Jacobian has a tractable structure. This routing can be cascaded to achieve even more complex transformation. On the experimental side, the model is trained on several datasets and the results are quite convincing, both in sample quality and quantitive measures. I would be very happy to see if this model is useful with other types of tasks and if the resulting latent representation help with classification or inference such as image restoration. In summary - the paper is nicely written, results are quite good and the model is interesting - I*m happy to recommend acceptance.",
"role": "assistant"
}
]
| [
"This paper presents a clever way of training a generative model which allows for exact inference, sampling and log likelihood evaluation.",
"The main idea here is to make the Jacobian that comes when using the change of variables formula (from data to latents) triangular - this makes the determinant easy to calculate and hence learning possible.",
"The paper nicely presents this core idea and a way to achieve this - by choosing special *routings* between the latents and data such that part of the transformation is identity and part some complex function of the input (a deep net, for example) the resulting Jacobian has a tractable structure.",
"This routing can be cascaded to achieve even more complex transformation.",
"On the experimental side, the model is trained on several datasets and the results are quite convincing, both in sample quality and quantitive measures.",
"I would be very happy to see if this model is useful with other types of tasks and if the resulting latent representation help with classification or inference such as image restoration.",
"In summary - the paper is nicely written, results are quite good and the model is interesting - I*m happy to recommend acceptance."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 7,
"praise": 3,
"presentation_and_reporting": 1,
"results_and_discussion": 6,
"suggestion_and_solution": 2,
"total": 7
} | 0 | 0 | 0.428571 | 1 | 0.428571 | 0.142857 | 0.857143 | 0.285714 | 7 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 3.142857 | 2.57469 | 0.568167 |
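The “routings” praised in the record above are affine coupling layers: half the input passes through unchanged while the other half is scaled and shifted as a function of the first half, so the Jacobian is triangular and both its log-determinant and the inverse are cheap. A minimal sketch, with toy stand-ins for the deep scale and translation networks:

```python
import numpy as np

def s(x):                                    # scale network (toy stand-in)
    return np.tanh(x)

def t(x):                                    # translation network (toy stand-in)
    return 0.5 * x

def coupling_forward(x):
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    y1 = x1                                  # identity part of the "routing"
    y2 = x2 * np.exp(s(x1)) + t(x1)          # transformed part
    log_det = s(x1).sum(axis=-1)             # triangular Jacobian: cheap log|det|
    return np.concatenate([y1, y2], axis=-1), log_det

def coupling_inverse(y):
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    x2 = (y2 - t(y1)) * np.exp(-s(y1))       # exact inverse, no iteration needed
    return np.concatenate([y1, x2], axis=-1)

x = np.random.randn(4, 6)
y, log_det = coupling_forward(x)
assert np.allclose(coupling_inverse(y), x)   # exact invertibility
```

Cascading such layers while alternating which half is held fixed yields the more complex transformations the review mentions.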
Recurrent Batch Normalization | We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization. | This paper extends batch normalization successfully to RNNs where batch normalization has previously failed or done poorly. The experiments and datasets tackled show definitively the improvement that batch norm LSTMs provide over standard LSTMs. They also cover a variety of examples, including character level (PTB and Text8), word level (CNN question-answering task), and pixel level (MNIST and pMNIST). The supplied training curves also quite clearly show the potential improvements in training time which is an important metric for consideration. The experiment on pMNIST also solidly shows the advantage of batch norm in the recurrent setting for establishing long term dependencies. I additionally appreciated the gradient flow insight, specifically the impact of unit variance on tanh derivatives. Showing it not just for batch normalization but additionally for the “toy task” (Figure 1b) was hugely useful. Overall I find this paper a useful additional contribution to the usage of batch normalization and would be necessary information for successfully employing it in a recurrent setting. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRecurrent Batch Normalization\nWe propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper extends batch normalization successfully to RNNs where batch normalization has previously failed or done poorly. The experiments and datasets tackled show definitively the improvement that batch norm LSTMs provide over standard LSTMs. They also cover a variety of examples, including character level (PTB and Text8), word level (CNN question-answering task), and pixel level (MNIST and pMNIST). The supplied training curves also quite clearly show the potential improvements in training time which is an important metric for consideration. The experiment on pMNIST also solidly shows the advantage of batch norm in the recurrent setting for establishing long term dependencies. I additionally also appreciated the gradient flow insight, specifically the impact of unit variance on tanh derivatives. Showing it not just for batch normalization but additionally the *toy task* (Figure 1b) was hugely useful. Overall I find this paper a useful additional contribution to the usage of batch normalization and would be necessary information for successfully employing it in a recurrent setting.",
"role": "assistant"
}
]
| [
"This paper extends batch normalization successfully to RNNs where batch normalization has previously failed or done poorly.",
"The experiments and datasets tackled show definitively the improvement that batch norm LSTMs provide over standard LSTMs.",
"They also cover a variety of examples, including character level (PTB and Text8), word level (CNN question-answering task), and pixel level (MNIST and pMNIST).",
"The supplied training curves also quite clearly show the potential improvements in training time which is an important metric for consideration.",
"The experiment on pMNIST also solidly shows the advantage of batch norm in the recurrent setting for establishing long term dependencies.",
"I additionally also appreciated the gradient flow insight, specifically the impact of unit variance on tanh derivatives.",
"Showing it not just for batch normalization but additionally the *toy task* (Figure 1b) was hugely useful.",
"Overall I find this paper a useful additional contribution to the usage of batch normalization and would be necessary information for successfully employing it in a recurrent setting."
] | {
"criticism": 0,
"example": 2,
"importance_and_relevance": 1,
"materials_and_methods": 6,
"praise": 5,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 8
} | 0 | 0.25 | 0.125 | 0.75 | 0.625 | 0.125 | 0.125 | 0.125 | 8 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.125 | 1.730439 | 0.394561 |
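The reparameterization described in the record above normalizes the input-to-hidden and hidden-to-hidden terms of the LSTM pre-activation separately before the gates are applied. A rough single-step sketch follows; the plain-numpy batch norm and gate layout [i, f, o, g] are illustrative assumptions, not the paper's implementation (which also keeps separate statistics for every time step):

```python
import numpy as np

def batch_norm(x, gamma, eps=1e-5):
    mean = x.mean(axis=0, keepdims=True)            # statistics over the batch
    var = x.var(axis=0, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps)  # no beta: the bias b covers it

def bn_lstm_step(x_t, h_prev, c_prev, Wx, Wh, b, gamma_x, gamma_h):
    # BN(Wx x_t) + BN(Wh h_{t-1}) + b: the hidden-to-hidden term is normalized
    # too, which is the key departure from earlier input-only approaches.
    pre = batch_norm(x_t @ Wx, gamma_x) + batch_norm(h_prev @ Wh, gamma_h) + b
    i, f, o, g = np.split(pre, 4, axis=-1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

B, D, H = 8, 5, 4
rng = np.random.default_rng(0)
h, c = bn_lstm_step(rng.standard_normal((B, D)), np.zeros((B, H)), np.zeros((B, H)),
                    rng.standard_normal((D, 4 * H)), rng.standard_normal((H, 4 * H)),
                    np.zeros(4 * H), np.ones(4 * H), np.ones(4 * H))
```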
SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14\% and 16.21\%, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. Our source code is available at url{https://github.com/loshchil/SGDR} | This is an interesting investigation into learning rate schedules, bringing in the idea of restarts, often overlooked in deep learning. The paper does a thorough study on non-trivial datasets, and while the outcomes are not fully conclusive, the results are very good and the approach is novel enough to warrant publication. I thank the authors for revising the paper based on my concerns. Typos: - “flesh” -> “flush” | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSGDR: Stochastic Gradient Descent with Warm Restarts\nRestart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14\\% and 16.21\\%, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. Our source code is available at url{https://github.com/loshchil/SGDR}",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis an interesting investigation into learning rate schedules, bringing in the idea of restarts, often overlooked in deep learning. The paper does a thorough study on non-trivial datasets, and while the outcomes are not fully conclusive, the results are very good and the approach is novel enough to warrant publication. I thank the authors for revising the paper based on my concerns. Typos: - “flesh” -> “flush”",
"role": "assistant"
}
]
| [
"This an interesting investigation into learning rate schedules, bringing in the idea of restarts, often overlooked in deep learning.",
"The paper does a thorough study on non-trivial datasets, and while the outcomes are not fully conclusive, the results are very good and the approach is novel enough to warrant publication.",
"I thank the authors for revising the paper based on my concerns.",
"Typos: - “flesh” -> “flush”"
] | {
"criticism": 1,
"example": 1,
"importance_and_relevance": 2,
"materials_and_methods": 1,
"praise": 3,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 4
} | 0.25 | 0.25 | 0.5 | 0.25 | 0.75 | 0.25 | 0.25 | 0.25 | 4 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.75 | 1.471623 | 1.278377 |
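Both SGDR records (this one and the next) refer to the same schedule: within a cycle of length T_i the learning rate follows a cosine from eta_max down to eta_min, eta = eta_min + (1/2)(eta_max - eta_min)(1 + cos(pi * T_cur / T_i)), and at each warm restart it is reset while the cycle length is multiplied by T_mult. A small sketch with illustrative parameter values:

```python
import math

def sgdr_lr(epoch, eta_min=1e-5, eta_max=0.1, T_0=10, T_mult=2):
    T_i, start = T_0, 0
    while epoch >= start + T_i:       # locate the current restart cycle
        start += T_i
        T_i *= T_mult
    T_cur = epoch - start
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * T_cur / T_i))

# restarts occur at epochs 10, 30, 70, ... for T_0=10, T_mult=2
schedule = [sgdr_lr(e) for e in range(100)]
```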
SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14\% and 16.21\%, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. Our source code is available at url{https://github.com/loshchil/SGDR} | This paper describes a way to speed up convergence through sudden increases of otherwise monotonically decreasing learning rates. Several techniques are presented in a clear way and a parameterized method is proposed and evaluated on the CIFAR task. The concept is easy to understand and the authors chose state-of-the-art models to show the performance of their algorithm. The relevance of these results goes beyond image classification. Pros: - Simple and effective method to improve convergence - Good evaluation on well known database Cons: - Connection of introduction and topic of the paper is a bit unclear - Fig 2, 4 and 5 are hard to read. Lines are out of bounds and maybe only the best setting for T_0 and T_mult would be clearer. The baseline also doesn't seem to converge. Remarks: A loss surface for T_0 against T_mult would be very helpful. Also understanding the relationship of network depth and the performance of this method would add value to this analysis. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSGDR: Stochastic Gradient Descent with Warm Restarts\nRestart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14\\% and 16.21\\%, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. Our source code is available at url{https://github.com/loshchil/SGDR}",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper describes a way to speed up convergence through sudden increases of otherwise monotonically decreasing learning rates. Several techniques are presented in a clear way and parameterized method is proposed and evaluated on the CIFAR task. The concept is easy to understand and the authors chose state-of-the-art models to show the performance of their algorithm. The relevance of these results goes beyond image classification. Pros: - Simple and effective method to improve convergence - Good evaluation on well known database Cons: - Connection of introduction and topic of the paper is a bit unclear - Fig 2, 4 and 5 are hard to read. Lines are out of bounds and maybe only the best setting for T_0 and T_mult would be clearer. The baseline also doesn*t seem to converge Remarks: An loss surface for T_0 against T_mult would be very helpful. Also understanding the relationship of network depth and the performance of this method would add value to this analysis.",
"role": "assistant"
}
]
| [
"This paper describes a way to speed up convergence through sudden increases of otherwise monotonically decreasing learning rates.",
"Several techniques are presented in a clear way and parameterized method is proposed and evaluated on the CIFAR task.",
"The concept is easy to understand and the authors chose state-of-the-art models to show the performance of their algorithm.",
"The relevance of these results goes beyond image classification.",
"Pros: - Simple and effective method to improve convergence - Good evaluation on well known database Cons: - Connection of introduction and topic of the paper is a bit unclear - Fig 2, 4 and 5 are hard to read.",
"Lines are out of bounds and maybe only the best setting for T_0 and T_mult would be clearer.",
"The baseline also doesn*t seem to converge Remarks: An loss surface for T_0 against T_mult would be very helpful.",
"Also understanding the relationship of network depth and the performance of this method would add value to this analysis."
] | {
"criticism": 1,
"example": 1,
"importance_and_relevance": 0,
"materials_and_methods": 5,
"praise": 3,
"presentation_and_reporting": 3,
"results_and_discussion": 3,
"suggestion_and_solution": 3,
"total": 8
} | 0.125 | 0.125 | 0 | 0.625 | 0.375 | 0.375 | 0.375 | 0.375 | 8 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 2.375 | 1.980439 | 0.394561 |
Reinforcement Learning through Asynchronous Advantage Actor-Critic on a GPU | We introduce a hybrid CPU/GPU version of the Asynchronous Advantage Actor-Critic (A3C) algorithm, currently the state-of-the-art method in reinforcement learning for various gaming tasks. We analyze its computational traits and concentrate on aspects critical to leveraging the GPU's computational power. We introduce a system of queues and a dynamic scheduling strategy, potentially helpful for other asynchronous algorithms as well. Our hybrid CPU/GPU version of A3C, based on TensorFlow, achieves a significant speed up compared to a CPU implementation; we make it publicly available to other researchers at https://github.com/NVlabs/GA3C. | This paper introduces a “GPU-friendly” variant of A3C which relaxes some synchronicity constraints in the original A3C algorithm to make it more friendly to a high-throughput GPU device. The analysis of the effects of this added latency is thorough. The systems analysis of the algorithm is extensive. One caveat is that the performance figures in Table 3 are hard to compare since the protocols vary so much. I understand that DeepMind didn't provide reproducible code for A3C, but I gather from the comment that the authors have re-implemented vanilla A3C as well, in which case it would be good to show what this reimplementation of A3C achieves in the same setting used by DM, and in the setting of the experiment conducted using GA3C (1 day). It would be good to clarify in the text that the experimental protocol differed (top 5 out of 50 vs single run), and clarify why the discrepancy exists, even if the answer is that the authors didn't have time / resources to reproduce the same protocol. A bit more care would go a long way to establishing that indeed, there is no price to pay for the approximations that were made. I applaud the authors for open-sourcing the code, especially since there is a relative shortage of properly tested open-source implementations in that general area, and getting these algorithms right is non-trivial. A disclaimer: having never implemented A3C myself, I have a low confidence in my ability to appropriately assess the algorithmic aspects of the work. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nReinforcement Learning through Asynchronous Advantage Actor-Critic on a GPU\nWe introduce a hybrid CPU/GPU version of the Asynchronous Advantage Actor-Critic (A3C) algorithm, currently the state-of-the-art method in reinforcement learning for various gaming tasks. We analyze its computational traits and concentrate on aspects critical to leveraging the GPU*s computational power. We introduce a system of queues and a dynamic scheduling strategy, potentially helpful for other asynchronous algorithms as well. Our hybrid CPU/GPU version of A3C, based on TensorFlow, achieves a significant speed up compared to a CPU implementation; we make it publicly available to other researchers at https://github.com/NVlabs/GA3C.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper introduces a *GPU-friendly* variant of A3C which relaxes some synchronicity constraints in the original A3C algorithm to make it more friendly to a high-throughput GPU device. The analysis of the effects of this added latency is thorough. The systems analysis of the algorithm is extensive. One caveat is that the performance figures in Table 3 are hard to compare since the protocols vary so much. I understand that DeepMind didn*t provide reproducible code for A3C, but I gather from the comment that the authors have re-implemented vanilla A3C as well, in which case it would be good to show what this reimplementation of A3C achieves in the same setting used by DM, and in the setting of the experiment conducted using GA3C (1 day). It would be good to clarify in the text that the experimental protocol differed (top 5 out of 50 vs single run), and clarify why the discrepancy, even if the answer is that the authors didn*t have time / resources to reproduce the same protocol. A bit more care would go a long way to establishing that indeed, there is no price to pay for the approximations that were made. I applaud the authors for open-sourcing the code, especially since there is a relative shortage of properly tested open-source implementations in that general area, and getting these algorithms right is non-trivial. A disclaimer: having never implemented A3C myself, I have a low confidence in my ability to appropriately assess of the algorithmic aspects of the work.",
"role": "assistant"
}
]
| [
"This paper introduces a *GPU-friendly* variant of A3C which relaxes some synchronicity constraints in the original A3C algorithm to make it more friendly to a high-throughput GPU device.",
"The analysis of the effects of this added latency is thorough.",
"The systems analysis of the algorithm is extensive.",
"One caveat is that the performance figures in Table 3 are hard to compare since the protocols vary so much.",
"I understand that DeepMind didn*t provide reproducible code for A3C, but I gather from the comment that the authors have re-implemented vanilla A3C as well, in which case it would be good to show what this reimplementation of A3C achieves in the same setting used by DM, and in the setting of the experiment conducted using GA3C (1 day).",
"It would be good to clarify in the text that the experimental protocol differed (top 5 out of 50 vs single run), and clarify why the discrepancy, even if the answer is that the authors didn*t have time / resources to reproduce the same protocol.",
"A bit more care would go a long way to establishing that indeed, there is no price to pay for the approximations that were made.",
"I applaud the authors for open-sourcing the code, especially since there is a relative shortage of properly tested open-source implementations in that general area, and getting these algorithms right is non-trivial.",
"A disclaimer: having never implemented A3C myself, I have a low confidence in my ability to appropriately assess of the algorithmic aspects of the work."
] | {
"criticism": 3,
"example": 1,
"importance_and_relevance": 0,
"materials_and_methods": 8,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 3,
"suggestion_and_solution": 3,
"total": 9
} | 0.333333 | 0.111111 | 0 | 0.888889 | 0.222222 | 0.111111 | 0.333333 | 0.333333 | 9 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 2.333333 | 2.080814 | 0.252519 |
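The “system of queues” in the GA3C abstract above replaces per-agent model copies with a shared GPU predictor: agents enqueue a state together with a reply channel, and a single predictor thread batches requests into one GPU call (a training queue feeding a trainer thread is handled analogously). A runnable toy sketch; DummyModel and the batch size are placeholders, not the released code:

```python
import queue
import threading

class DummyModel:
    def predict(self, states):                    # stand-in for a batched GPU call
        return [[0.5, 0.5] for _ in states]

prediction_q = queue.Queue(maxsize=128)           # items: (reply_queue, state)

def predictor(model, batch_size=32):
    while True:
        batch = [prediction_q.get()]              # block until work arrives
        while len(batch) < batch_size and not prediction_q.empty():
            batch.append(prediction_q.get())      # opportunistic batching
        replies, states = zip(*batch)
        for reply_q, pi in zip(replies, model.predict(list(states))):
            reply_q.put(pi)                       # hand each agent its policy

threading.Thread(target=predictor, args=(DummyModel(),), daemon=True).start()

reply = queue.Queue()
prediction_q.put((reply, [0.0, 1.0]))             # an agent requests a prediction
print(reply.get())                                # -> [0.5, 0.5]
```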
Automatic Rule Extraction from Long Short Term Memory Networks | Although deep learning models have proven effective at solving problems in natural language processing, the mechanism by which they come to their conclusions is often unclear. As a result, these models are generally treated as black boxes, yielding no insight of the underlying learned patterns. In this paper we consider Long Short Term Memory networks (LSTMs) and demonstrate a new approach for tracking the importance of a given input to the LSTM for a given output. By identifying consistently important patterns of words, we are able to distill state of the art LSTMs on sentiment analysis and question answering into a set of representative phrases. This representation is then quantitatively validated by using the extracted phrases to construct a simple, rule-based classifier which approximates the output of the LSTM. | This work proposes a pattern extraction method to both understand what a trained LSTM has learnt and to allow implementation of a hand-coded algorithm that performs similarly to the LSTM. Good results are shown on one dataset for one model architecture so it is unclear how well this approach will generalize, however, it seems it will be a useful way to understand and debug models. The questions in WikiMovies seem to be generated from templates and so this pattern matching approach will likely work well. However, from the experiments it's not clear if this will extend to other types of Q&A tasks where the answer may be free form text and not be a substring in the document. Is the model required to produce a continuous span over the original document? The approach also seems to have some deficiencies in how it handles word types such as numbers or entity names. This can be encoded in the embedding for the word but from the description of the algorithm, it seems that the approach requires an entity detector. Does this mean that the approach is unable to determine when it has reached an entity from the decomposition of the output of the LSTM? The results with “manual pattern matching”, where explicit year annotations are used, seem to show that the automatic method is unable to deal with word types. It would also be good to see an attention model as a baseline in addition to the gradient-based baseline. Minor comments: - P and Q seem to be undefined. - Some references seem to be bad, e.g. in section 5.1: “in 1” instead of “in table 1”. Similarly above section 7: “as shown in 3” and in section 7.1. - In the paragraph above section 6.3: “adam” -> “Adam”. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAutomatic Rule Extraction from Long Short Term Memory Networks\nAlthough deep learning models have proven effective at solving problems in natural language processing, the mechanism by which they come to their conclusions is often unclear. As a result, these models are generally treated as black boxes, yielding no insight of the underlying learned patterns. In this paper we consider Long Short Term Memory networks (LSTMs) and demonstrate a new approach for tracking the importance of a given input to the LSTM for a given output. By identifying consistently important patterns of words, we are able to distill state of the art LSTMs on sentiment analysis and question answering into a set of representative phrases. This representation is then quantitatively validated by using the extracted phrases to construct a simple, rule-based classifier which approximates the output of the LSTM.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis work proposes a pattern extraction method to both understand what a trained LSTM has learnt and to allow implementation of a hand-coded algorithm that performs similarly to the LSTM. Good results are shown on one dataset for one model architecture so it is unclear how well this approach will generalize, however, it seems it will be a useful way to understand and debug models. The questions in WikiMovies seem to be generated from templates and so this pattern matching approach will likely work well. However, from the experiments it*s not clear if this will extend to other types of Q&A tasks where the answer may be free form text and not be a substring in the document. Is the model required to produce a continuous span over the original document? The approach also seems to have some deficiencies in how it handles word types such as numbers or entity names. This can be encoded in the embedding for the word but from the description of the algorithm, it seems that the approach requires an entity detector. Does this mean that the approach is unable to determine when it has reached an entity from the decomposition of the output of the LSTM? The results where *manual pattern matching* where explicit year annotations are used, seem to show that the automatic method is unable to deal with word types. It would also be good to see an attention model as a baseline in addition to the gradient-based baseline. Minor comments: - P and Q seem to be undefined. - Some references seem to be bad, e.g. in section 5.1: *in 1* instead of *in table 1*. Similarly above section 7: *as shown in 3* and in section 7.1. - In the paragraph above section 6.3: *adam* -> *Adam*.",
"role": "assistant"
}
]
| [
"This work proposes a pattern extraction method to both understand what a trained LSTM has learnt and to allow implementation of a hand-coded algorithm that performs similarly to the LSTM.",
"Good results are shown on one dataset for one model architecture so it is unclear how well this approach will generalize, however, it seems it will be a useful way to understand and debug models.",
"The questions in WikiMovies seem to be generated from templates and so this pattern matching approach will likely work well.",
"However, from the experiments it*s not clear if this will extend to other types of Q&A tasks where the answer may be free form text and not be a substring in the document.",
"Is the model required to produce a continuous span over the original document?",
"The approach also seems to have some deficiencies in how it handles word types such as numbers or entity names.",
"This can be encoded in the embedding for the word but from the description of the algorithm, it seems that the approach requires an entity detector.",
"Does this mean that the approach is unable to determine when it has reached an entity from the decomposition of the output of the LSTM?",
"The results where *manual pattern matching* where explicit year annotations are used, seem to show that the automatic method is unable to deal with word types.",
"It would also be good to see an attention model as a baseline in addition to the gradient-based baseline.",
"Minor comments: - P and Q seem to be undefined.",
"- Some references seem to be bad, e.g.",
"in section 5.1: *in 1* instead of *in table 1*.",
"Similarly above section 7: *as shown in 3* and in section 7.1.",
"- In the paragraph above section 6.3: *adam* -> *Adam*."
] | {
"criticism": 6,
"example": 2,
"importance_and_relevance": 1,
"materials_and_methods": 9,
"praise": 2,
"presentation_and_reporting": 9,
"results_and_discussion": 2,
"suggestion_and_solution": 1,
"total": 15
} | 0.4 | 0.133333 | 0.066667 | 0.6 | 0.133333 | 0.6 | 0.133333 | 0.066667 | 15 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.133333 | 2.070204 | 0.06313 |
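
The review above contrasts the paper's decomposition approach with a gradient-based importance baseline. As a rough, hypothetical sketch of such a baseline (the model sizes and the final-state readout are illustrative choices of mine, not the paper's setup), one can score each input word by the gradient of the winning class score with respect to that word's embedding:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size, embed_dim, hidden_dim, num_classes = 100, 16, 32, 2

embed = nn.Embedding(vocab_size, embed_dim)
lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
clf = nn.Linear(hidden_dim, num_classes)

tokens = torch.randint(0, vocab_size, (1, 8))   # one toy sentence of 8 tokens
emb = embed(tokens).detach().requires_grad_(True)
out, _ = lstm(emb)
logits = clf(out[:, -1])                        # read out the final hidden state
logits[0, logits.argmax()].backward()           # gradient of the winning class

# Gradient-times-input importance, summed over embedding dimensions.
importance = (emb.grad * emb).sum(-1).abs().squeeze(0)
print(importance)                                # one score per input word
```

Words with consistently high scores across many examples are then candidates for the representative phrases the review discusses.
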
Deep Information Propagation | We study the behavior of untrained neural networks whose weights and biases are randomly distributed using mean field theory. We show the existence of depth scales that naturally limit the maximum depth of signal propagation through these random networks. Our main practical result is to show that random networks may be trained precisely when information can travel through them. Thus, the depth scales that we identify provide bounds on how deep a network may be trained for a specific choice of hyperparameters. As a corollary to this, we argue that in networks at the edge of chaos, one of these depth scales diverges. Thus arbitrarily deep networks may be trained only sufficiently close to criticality. We show that the presence of dropout destroys the order-to-chaos critical point and therefore strongly limits the maximum trainable depth for random networks. Finally, we develop a mean field theory for backpropagation and we show that the ordered and chaotic phases correspond to regions of vanishing and exploding gradient respectively. | The paper expands a recent mean-field approximation of deep random neural networks to study depth-dependent information propagation, its phase-dependence and the influence of drop-out. The paper is extremely well written, the mathematical analysis is thorough and numerical experiments are included that underscore the theoretical results. Overall the paper stands out as one of the few papers that thoroughly analyses training and performance of deep nets. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDeep Information Propagation\nWe study the behavior of untrained neural networks whose weights and biases are randomly distributed using mean field theory. We show the existence of depth scales that naturally limit the maximum depth of signal propagation through these random networks. Our main practical result is to show that random networks may be trained precisely when information can travel through them. Thus, the depth scales that we identify provide bounds on how deep a network may be trained for a specific choice of hyperparameters. As a corollary to this, we argue that in networks at the edge of chaos, one of these depth scales diverges. Thus arbitrarily deep networks may be trained only sufficiently close to criticality. We show that the presence of dropout destroys the order-to-chaos critical point and therefore strongly limits the maximum trainable depth for random networks. Finally, we develop a mean field theory for backpropagation and we show that the ordered and chaotic phases correspond to regions of vanishing and exploding gradient respectively.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper expands a recent mean-field approximation of deep random neural networks to study depth-dependent information propagation, its phase-dependence and the influence of drop-out. The paper is extremely well written, the mathematical analysis is thorough and numerical experiments are included that underscore the theoretical results. Overall the paper stands out as one of the few papers that thoroughly analyses training and performance of deep nets.",
"role": "assistant"
}
]
| [
"The paper expands a recent mean-field approximation of deep random neural networks to study depth-dependent information propagation, its phase-dependence and the influence of drop-out.",
"The paper is extremely well written, the mathematical analysis is thorough and numerical experiments are included that underscore the theoretical results.",
"Overall the paper stands out as one of the few papers that thoroughly analyses training and performance of deep nets."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 0,
"materials_and_methods": 3,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 0,
"total": 3
} | 0 | 0 | 0 | 1 | 0.666667 | 0.333333 | 0.333333 | 0 | 3 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 2.333333 | 0.75509 | 1.578243 |
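
The depth scales praised in the review come from iterating simple mean-field maps. A minimal sketch, assuming a tanh network and Monte Carlo estimation of the Gaussian expectation (the sigma_w, sigma_b values are illustrative), shows the pre-activation variance q converging to a fixed point within a few layers:

```python
import numpy as np

def variance_map(q, z, sigma_w=1.5, sigma_b=0.05):
    # q_{l+1} = sigma_w^2 * E_z[tanh(sqrt(q_l) * z)^2] + sigma_b^2, z ~ N(0, 1)
    return sigma_w**2 * np.mean(np.tanh(np.sqrt(q) * z) ** 2) + sigma_b**2

rng = np.random.default_rng(0)
z = rng.standard_normal(200_000)    # shared Monte Carlo draws for the expectation
q = 1.0
for layer in range(30):
    q = variance_map(q, z)
print(f"approximate fixed point q* = {q:.4f}")
```
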
Deep Information Propagation | We study the behavior of untrained neural networks whose weights and biases are randomly distributed using mean field theory. We show the existence of depth scales that naturally limit the maximum depth of signal propagation through these random networks. Our main practical result is to show that random networks may be trained precisely when information can travel through them. Thus, the depth scales that we identify provide bounds on how deep a network may be trained for a specific choice of hyperparameters. As a corollary to this, we argue that in networks at the edge of chaos, one of these depth scales diverges. Thus arbitrarily deep networks may be trained only sufficiently close to criticality. We show that the presence of dropout destroys the order-to-chaos critical point and therefore strongly limits the maximum trainable depth for random networks. Finally, we develop a mean field theory for backpropagation and we show that the ordered and chaotic phases correspond to regions of vanishing and exploding gradient respectively. | This paper presents a mathematical analysis of how information is propagated through deep feed-forward neural networks, with novel analysis addressing the problem of vanishing and exploding gradients in the backward pass of backpropagation and the use of the dropout algorithm. The paper is clear and well-written, the analysis is thorough, and the experimental results showing agreement with the model are very nice. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDeep Information Propagation\nWe study the behavior of untrained neural networks whose weights and biases are randomly distributed using mean field theory. We show the existence of depth scales that naturally limit the maximum depth of signal propagation through these random networks. Our main practical result is to show that random networks may be trained precisely when information can travel through them. Thus, the depth scales that we identify provide bounds on how deep a network may be trained for a specific choice of hyperparameters. As a corollary to this, we argue that in networks at the edge of chaos, one of these depth scales diverges. Thus arbitrarily deep networks may be trained only sufficiently close to criticality. We show that the presence of dropout destroys the order-to-chaos critical point and therefore strongly limits the maximum trainable depth for random networks. Finally, we develop a mean field theory for backpropagation and we show that the ordered and chaotic phases correspond to regions of vanishing and exploding gradient respectively.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents a mathematical analysis of how information is propagated through deep feed-forward neural networks, with novel analysis addressing the problem of vanishing and exploding gradients in the backward pass of backpropagation and the use of the dropout algorithm. The paper is clear and well-written, the analysis is thorough, and the experimental results showing agreement with the model are very nice.",
"role": "assistant"
}
]
| [
"This paper presents a mathematical analysis of how information is propagated through deep feed-forward neural networks, with novel analysis addressing the problem of vanishing and exploding gradients in the backward pass of backpropagation and the use of the dropout algorithm.",
"The paper is clear and well-written, the analysis is thorough, and the experimental results showing agreement with the model are very nice."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 2,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 0,
"total": 2
} | 0 | 0 | 0.5 | 1 | 1 | 0.5 | 0.5 | 0 | 2 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 3.5 | 1.590326 | 1.909674 |
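
The ordered and chaotic phases this review refers to can be diagnosed with one more quantity: at the variance fixed point q*, the slope chi = sigma_w^2 * E[phi'(sqrt(q*) z)^2] governs how perturbations and gradients grow with depth (chi < 1: ordered, vanishing gradients; chi > 1: chaotic, exploding gradients; chi = 1: edge of chaos). A small sketch under the same tanh assumption as before:

```python
import numpy as np

def chi(sigma_w, sigma_b, n_iter=50, n_mc=200_000, seed=0):
    z = np.random.default_rng(seed).standard_normal(n_mc)
    q = 1.0
    for _ in range(n_iter):          # iterate to the variance fixed point q*
        q = sigma_w**2 * np.mean(np.tanh(np.sqrt(q) * z) ** 2) + sigma_b**2
    dphi = 1.0 - np.tanh(np.sqrt(q) * z) ** 2    # tanh'(x) = 1 - tanh(x)^2
    return sigma_w**2 * np.mean(dphi**2)

for sw in (0.5, 1.0, 2.0):
    print(f"sigma_w = {sw}: chi = {chi(sw, sigma_b=0.05):.3f}")
```
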
FractalNet: Ultra-Deep Neural Networks without Residuals | We introduce a design strategy for neural network macro-architecture based on self-similarity. Repeated application of a simple expansion rule generates deep networks whose structural layouts are precisely truncated fractals. These networks contain interacting subpaths of different lengths, but do not include any pass-through or residual connections; every internal signal is transformed by a filter and nonlinearity before being seen by subsequent layers. In experiments, fractal networks match the excellent performance of standard residual networks on both CIFAR and ImageNet classification tasks, thereby demonstrating that residual representations may not be fundamental to the success of extremely deep convolutional neural networks. Rather, the key may be the ability to transition, during training, from effectively shallow to deep. We note similarities with student-teacher behavior and develop drop-path, a natural extension of dropout, to regularize co-adaptation of subpaths in fractal architectures. Such regularization allows extraction of high-performance fixed-depth subnetworks. Additionally, fractal networks exhibit an anytime property: shallow subnetworks provide a quick answer, while deeper subnetworks, with higher latency, provide a more accurate answer. | This paper presents a strategy for building deep neural networks via rules for expansion and merging of sub-networks. pros: - the idea is novel - the approach is described clearly cons: - the experimental evaluation is not convincing, e.g. no improvement on SVHN - number of parameters should be mentioned for all models for fair comparison - the effect of drop-path seems to vanish with data augmentation | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nFractalNet: Ultra-Deep Neural Networks without Residuals\nWe introduce a design strategy for neural network macro-architecture based on self-similarity. Repeated application of a simple expansion rule generates deep networks whose structural layouts are precisely truncated fractals. These networks contain interacting subpaths of different lengths, but do not include any pass-through or residual connections; every internal signal is transformed by a filter and nonlinearity before being seen by subsequent layers. In experiments, fractal networks match the excellent performance of standard residual networks on both CIFAR and ImageNet classification tasks, thereby demonstrating that residual representations may not be fundamental to the success of extremely deep convolutional neural networks. Rather, the key may be the ability to transition, during training, from effectively shallow to deep. We note similarities with student-teacher behavior and develop drop-path, a natural extension of dropout, to regularize co-adaptation of subpaths in fractal architectures. Such regularization allows extraction of high-performance fixed-depth subnetworks. Additionally, fractal networks exhibit an anytime property: shallow subnetworks provide a quick answer, while deeper subnetworks, with higher latency, provide a more accurate answer.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents a strategy for building deep neural networks via rules for expansion and merging of sub-networks. pros: - the idea is novel - the approach is described clearly cons: - the experimental evaluation is not convincing, e.g. no improvement on SVHN - number of parameters should be mentioned for all models for fair comparison - the effect of drop-path seems to vanish with data augmentation",
"role": "assistant"
}
]
| [
"This paper presents a strategy for building deep neural networks via rules for expansion and merging of sub-networks.",
"pros: - the idea is novel - the approach is described clearly cons: - the experimental evaluation is not convincing, e.g.",
"no improvement on SVHN - number of parameters should be mentioned for all models for fair comparison - the effect of drop-path seems to vanish with data augmentation"
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 2,
"praise": 0,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 3
} | 0.333333 | 0 | 0.666667 | 0.666667 | 0 | 0.333333 | 0.333333 | 0.333333 | 3 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 2.666667 | 1.088424 | 1.578243 |
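
Drop-path, whose effect the review notes seems to vanish under data augmentation, can be pictured in a few lines. The sketch below is an illustrative simplification (function names are mine, and only a local variant that samples paths independently at each join is shown): a fractal join averages its surviving input paths and always keeps at least one path alive.

```python
import numpy as np

def drop_path_join(paths, p_drop=0.15, rng=np.random.default_rng(0), train=True):
    if not train:
        return np.mean(paths, axis=0)           # use every path at test time
    keep = rng.random(len(paths)) > p_drop      # drop each path independently
    if not keep.any():
        keep[rng.integers(len(paths))] = True   # never drop all paths
    survivors = [p for p, k in zip(paths, keep) if k]
    return np.mean(survivors, axis=0)

paths = [np.full(4, v) for v in (1.0, 2.0, 3.0)]  # outputs of three subpaths
print(drop_path_join(paths))
```
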
Revisiting Classifier Two-Sample Tests | The goal of two-sample tests is to assess whether two samples, drawn from distributions P and Q respectively, come from the same distribution. Perhaps intriguingly, one relatively unexplored method to build two-sample tests is the use of binary classifiers. In particular, construct a dataset by pairing the examples drawn from P with a positive label, and by pairing the examples drawn from Q with a negative label. If the null hypothesis 'P = Q' is true, then the classification accuracy of a binary classifier on a held-out subset of this dataset should remain near chance-level. As we will show, such Classifier Two-Sample Tests (C2ST) learn a suitable representation of the data on the fly, return test statistics in interpretable units, have a simple null distribution, and their predictive uncertainty allows us to interpret where P and Q differ. The goal of this paper is to establish the properties, performance, and uses of C2ST. First, we analyze their main theoretical properties. Second, we compare their performance against a variety of state-of-the-art alternatives. Third, we propose their use to evaluate the sample quality of generative models with intractable likelihoods, such as Generative Adversarial Networks (GANs). Fourth, we showcase the novel application of GANs together with C2ST for causal discovery. | The submission considers the setting of 2-sample testing from the perspective of evaluating a classifier. For a classifier between two samples from the same distribution, the distribution of the classification accuracy follows a simple form under the null hypothesis. As such, a straightforward threshold can be derived for any classifier. Finding a more powerful test then amounts to training a better classifier. One may then focus efforts, e.g. on deep neural networks, for which statistics such as the MMD may be very difficult to characterize. + The approach is sound and very general + The paper is timely in that deep learning has had huge impacts in classification and other prediction settings, but has not had as big an impact on statistical hypothesis testing as kernel methods have - The discussion of the relationship to kernel-MMD has not always been as realistic as it could have been. For example, the kernel-MMD can also be seen as a classifier based approach, so a more fair discussion could be provided. Also, the form of kernel-MMD used in the comparisons is a bit contradictory to the discussion as well * The linear kernel-MMD is used which is less powerful than the quadratic kernel-MMD (the authors have justified this from the perspective of computation time) * The kernel-MMD is argued against due to its unwieldy distribution under the null, but the linear time kernel-MMD (see also Zaremba et al., NIPS 2013) has a Gaussian distribution under the null. Arthur Gretton's comment from Dec 14 during the discussion period was very insightful and helpful. If these insights and additional experiments comparing the kernel-MMD to the classifier threshold on the blobs dataset could be included, that would be very helpful for understanding the paper. The open review format gives an excellent opportunity to assign proper credit for these experiments and insights by citing the comment. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRevisiting Classifier Two-Sample Tests\nThe goal of two-sample tests is to assess whether two samples, and , are drawn from the same distribution. Perhaps intriguingly, one relatively unexplored method to build two-sample tests is the use of binary classifiers. In particular, construct a dataset by pairing the examples in with a positive label, and by pairing the examples in with a negative label. If the null hypothesis `` ** is true, then the classification accuracy of a binary classifier on a held-out subset of this dataset should remain near chance-level. As we will show, such emph{Classifier Two-Sample Tests} (C2ST) learn a suitable representation of the data on the fly, return test statistics in interpretable units, have a simple null distribution, and their predictive uncertainty allow to interpret where and differ. The goal of this paper is to establish the properties, performance, and uses of C2ST. First, we analyze their main theoretical properties. Second, we compare their performance against a variety of state-of-the-art alternatives. Third, we propose their use to evaluate the sample quality of generative models with intractable likelihoods, such as Generative Adversarial Networks (GANs). Fourth, we showcase the novel application of GANs together with C2ST for causal discovery.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe submission considers the setting of 2-sample testing from the perspective of evaluating a classifier. For a classifier between two samples from the same distribution, the distribution of the classification accuracy follows a simple form under the null hypothesis. As such, a straightforward threshold can be derived for any classifier. Finding a more powerful test then amounts to training a better classifier. One may then focus efforts, e.g. on deep neural networks, for which statistics such as the MMD may be very difficult to characterize. + The approach is sound and very general + The paper is timely in that deep learning has had huge impacts in classification and other prediction settings, but has not had as big an impact on statistical hypothesis testing as kernel methods have - The discussion of the relationship to kernel-MMD has not always been as realistic as it could have been. For example, the kernel-MMD can also be seen as a classifier based approach, so a more fair discussion could be provided. Also, the form of kernel-MMD used in the comparisons is a bit contradictory to the discussion as well * The linear kernel-MMD is used which is less powerful than the quadradic kernel-MMD (the authors have justified this from the perspective of computation time) * The kernel-MMD is argued against due to its unwieldy distribution under the null, but the linear time kernel-MMD (see also Zaremba et al., NIPS 2013) has a Gaussian distribution under the null. Arthur Gretton*s comment from Dec 14 during the discussion period was very insightful and helpful. If these insights and additional experiments comparing the kernel-MMD to the classifier threshold on the blobs dataset could be included, that would be very helpful for understanding the paper. The open review format gives an excellent opportunity to assign proper credit for these experiments and insights by citing the comment.",
"role": "assistant"
}
]
| [
"The submission considers the setting of 2-sample testing from the perspective of evaluating a classifier.",
"For a classifier between two samples from the same distribution, the distribution of the classification accuracy follows a simple form under the null hypothesis.",
"As such, a straightforward threshold can be derived for any classifier.",
"Finding a more powerful test then amounts to training a better classifier.",
"One may then focus efforts, e.g.",
"on deep neural networks, for which statistics such as the MMD may be very difficult to characterize.",
"+ The approach is sound and very general + The paper is timely in that deep learning has had huge impacts in classification and other prediction settings, but has not had as big an impact on statistical hypothesis testing as kernel methods have - The discussion of the relationship to kernel-MMD has not always been as realistic as it could have been.",
"For example, the kernel-MMD can also be seen as a classifier based approach, so a more fair discussion could be provided.",
"Also, the form of kernel-MMD used in the comparisons is a bit contradictory to the discussion as well * The linear kernel-MMD is used which is less powerful than the quadradic kernel-MMD (the authors have justified this from the perspective of computation time) * The kernel-MMD is argued against due to its unwieldy distribution under the null, but the linear time kernel-MMD (see also Zaremba et al., NIPS 2013) has a Gaussian distribution under the null.",
"Arthur Gretton*s comment from Dec 14 during the discussion period was very insightful and helpful.",
"If these insights and additional experiments comparing the kernel-MMD to the classifier threshold on the blobs dataset could be included, that would be very helpful for understanding the paper.",
"The open review format gives an excellent opportunity to assign proper credit for these experiments and insights by citing the comment."
] | {
"criticism": 2,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 10,
"praise": 3,
"presentation_and_reporting": 4,
"results_and_discussion": 5,
"suggestion_and_solution": 3,
"total": 12
} | 0.166667 | 0.083333 | 0.083333 | 0.833333 | 0.25 | 0.333333 | 0.416667 | 0.25 | 12 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.416667 | 2.400884 | 0.015782 |
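
The "simple null distribution" that both the abstract and this review rely on is easy to make concrete: if the two samples really come from the same distribution, the held-out accuracy of any classifier is Binomial(n, 1/2)/n, approximately N(1/2, 1/(4n)). A short sketch of the resulting test (the alpha level and sample size below are illustrative):

```python
import numpy as np
from scipy.stats import norm

def c2st_p_value(accuracy, n_test):
    # One-sided test: only unusually high accuracy rejects the null.
    z = (accuracy - 0.5) / np.sqrt(0.25 / n_test)
    return norm.sf(z)

n_test = 500
threshold = 0.5 + norm.ppf(0.95) * np.sqrt(0.25 / n_test)
print(f"reject at alpha = 0.05 if held-out accuracy > {threshold:.3f}")
print(f"p-value for accuracy 0.58: {c2st_p_value(0.58, n_test):.4f}")
```

Improving the test then reduces to training a better classifier, exactly as the review observes.
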
Energy-based Generative Adversarial Networks | We introduce the 'Energy-based Generative Adversarial Network' model (EBGAN) which views the discriminator as an energy function that attributes low energies to the regions near the data manifold and higher energies to other regions. Similar to the probabilistic GANs, a generator is seen as being trained to produce contrastive samples with minimal energies, while the discriminator is trained to assign high energies to these generated samples. Viewing the discriminator as an energy function allows to use a wide variety of architectures and loss functionals in addition to the usual binary classifier with logistic output. Among them, we show one instantiation of EBGAN framework as using an auto-encoder architecture, with the energy being the reconstruction error, in place of the discriminator. We show that this form of EBGAN exhibits more stable behavior than regular GANs during training. We also show that a single-scale architecture can be trained to generate high-resolution images. | This paper introduces an energy-based Generative Adversarial Network (GAN) and provides theoretical and empirical results modeling a number of image datasets (including large-scale versions of categories of ImageNet). As far as I know energy-based GANs (EBGAN) were introduced in Kim and Bengio (2016), but the proposed version makes a number of different design choices. First, it does away with the entropy regularization term that Kim and Bengio (2016) introduced to ensure that the GAN discriminator converged to an energy function proportional to the log density of the data (at optimum). This implies that the discriminator in the proposed scheme will become uniform at convergence as discussed in the theoretical section of the paper, however the introductory text seems to imply otherwise -- that one could recover a meaningful score function from the trained energy-function (discriminator). This should be clarified. Second, this version of the EBGAN setting includes two innovations: (1) the introduction of the hinge loss in the value function, and (2) the use of an auto-encoder parametrization for the energy function. These innovations are not empirically justified in any way - this is disappointing, as it would be really good to see empirical results supporting the arguments made in support of their introduction. The two significant contributions of this paper are the theoretical analysis of the energy-based GAN formalism (showing that the optimum corresponds to a Nash equilibrium) and the impressive empirical results on large images that set a new standard in what straight GAN-style models can achieve. The theoretical results seem solid to me and make a nice contribution. Regarding the quantitative results in Table 2, it seems not appropriate to bold the EBGAN line when it seems to be statistically indistinguishable from the Rasmus et al (2015) results. Though it is not mentioned here, the use of bold typically indicates the state-of-the-art. I think this paper could be much stronger if the two novel contributions to the energy-based GAN setting were more thoroughly explored with ablation experiments. That being said, I think this paper has already become a contribution others are building on (including at least two other ICLR submissions) and so I think it should be accepted for publication at ICLR. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEnergy-based Generative Adversarial Networks\nWe introduce the *Energy-based Generative Adversarial Network* model (EBGAN) which views the discriminator as an energy function that attributes low energies to the regions near the data manifold and higher energies to other regions. Similar to the probabilistic GANs, a generator is seen as being trained to produce contrastive samples with minimal energies, while the discriminator is trained to assign high energies to these generated samples. Viewing the discriminator as an energy function allows to use a wide variety of architectures and loss functionals in addition to the usual binary classifier with logistic output. Among them, we show one instantiation of EBGAN framework as using an auto-encoder architecture, with the energy being the reconstruction error, in place of the discriminator. We show that this form of EBGAN exhibits more stable behavior than regular GANs during training. We also show that a single-scale architecture can be trained to generate high-resolution images.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper introduces an energy-based Generative Adversarial Network (GAN) and provides theoretical and empirical results modeling a number of image datasets (including large-scale versions of categories of ImageNet). As far as I know energy-based GANs (EBGAN) were introduced in Kim and Bengio (2016), but the proposed version makes a number of different design choices. First, it does away with the entropy regularization term that Kim and Bengio (2016) introduced to ensure that the GAN discriminator converged to an energy function proportional to the log density of the data (at optimum). This implies that the discriminator in the proposed scheme will become uniform at convergence as discussed in the theoretical section of the paper, however the introductory text seems to imply otherwise -- that one could recover a meaningful score function from the trained energy-function (discriminator). This should be clarified. Second, this version of the EBGAN setting includes two innovations: (1) the introduction of the hinge loss in the value function, and (2) the use of an auto-encoder parametrization for the energy function. These innovations are not empirically justified in any way - this is disappointing, as it would be really good to see empirical results supporting the arguments made in support of their introduction. The two significant contributions of this paper are the theoretical analysis of the energy-baesd GAN formalism (showing that the optimum corresponds to a Nash equilibrium) and the impressive empirical results on large images that set a new standard in what straight GAN-style models can achieve. The theoretical results seem solid to me and make a nice contribution. Regarding the quantitative results in Table 2, it seems not appropriate to bold the EBGAN line when it seems to be statistically indistinguishable form the Rasmus et al (2015) results. Though it is not mentioned here, the use of bold typically indicates the state-of-the-art. I think this paper could be be much stronger if the two novel contributions to the energy-based GAN setting were more thoroughly explored with ablation experiments. 
That being said, I think this paper has already become a contribution other are building on (including at least two other ICLR submissions) and so I think it should be accepted for publication at ICLR.",
"role": "assistant"
}
]
| [
"This paper introduces an energy-based Generative Adversarial Network (GAN) and provides theoretical and empirical results modeling a number of image datasets (including large-scale versions of categories of ImageNet).",
"As far as I know energy-based GANs (EBGAN) were introduced in Kim and Bengio (2016), but the proposed version makes a number of different design choices.",
"First, it does away with the entropy regularization term that Kim and Bengio (2016) introduced to ensure that the GAN discriminator converged to an energy function proportional to the log density of the data (at optimum).",
"This implies that the discriminator in the proposed scheme will become uniform at convergence as discussed in the theoretical section of the paper, however the introductory text seems to imply otherwise -- that one could recover a meaningful score function from the trained energy-function (discriminator).",
"This should be clarified.",
"Second, this version of the EBGAN setting includes two innovations: (1) the introduction of the hinge loss in the value function, and (2) the use of an auto-encoder parametrization for the energy function.",
"These innovations are not empirically justified in any way - this is disappointing, as it would be really good to see empirical results supporting the arguments made in support of their introduction.",
"The two significant contributions of this paper are the theoretical analysis of the energy-baesd GAN formalism (showing that the optimum corresponds to a Nash equilibrium) and the impressive empirical results on large images that set a new standard in what straight GAN-style models can achieve.",
"The theoretical results seem solid to me and make a nice contribution.",
"Regarding the quantitative results in Table 2, it seems not appropriate to bold the EBGAN line when it seems to be statistically indistinguishable form the Rasmus et al (2015) results.",
"Though it is not mentioned here, the use of bold typically indicates the state-of-the-art.",
"I think this paper could be be much stronger if the two novel contributions to the energy-based GAN setting were more thoroughly explored with ablation experiments.",
"That being said, I think this paper has already become a contribution other are building on (including at least two other ICLR submissions) and so I think it should be accepted for publication at ICLR."
] | {
"criticism": 2,
"example": 1,
"importance_and_relevance": 4,
"materials_and_methods": 6,
"praise": 3,
"presentation_and_reporting": 4,
"results_and_discussion": 7,
"suggestion_and_solution": 4,
"total": 13
} | 0.153846 | 0.076923 | 0.307692 | 0.461538 | 0.230769 | 0.307692 | 0.538462 | 0.307692 | 13 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.384615 | 2.384615 | 0 |
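
The auto-encoder instantiation described in the abstract reduces to a pair of simple objectives. The sketch below is a guess at a minimal form (the margin value and the toy network are illustrative, not the paper's architecture): the discriminator's energy is its reconstruction error, real data is pushed to low energy, generated data is pushed above a margin via a hinge, and the generator minimizes the energy of its samples.

```python
import torch

def energy(D, x):
    # Reconstruction error of an auto-encoder discriminator D.
    return ((D(x) - x) ** 2).flatten(1).sum(dim=1)

def d_loss(D, x_real, x_fake, margin=10.0):
    return (energy(D, x_real) + torch.relu(margin - energy(D, x_fake))).mean()

def g_loss(D, x_fake):
    return energy(D, x_fake).mean()

D = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU(), torch.nn.Linear(4, 8))
x_real, x_fake = torch.randn(16, 8), torch.randn(16, 8)
print(d_loss(D, x_real, x_fake).item(), g_loss(D, x_fake).item())
```
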
Incremental Network Quantization: Towards Lossless CNNs with Low-precision Weights | This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization (a variable-length encoding: 1 bit for representing zero value, and the remaining 4 bits represent at most 16 different values for the powers of two), our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. We believe that our method sheds new insights on how to make deep CNNs to be applicable on mobile or embedded devices. The code will be made publicly available. | There is a great deal of ongoing interest in compressing neural network models. One line of work has focused on using low-precision representations of the model weights, even down to 1 or 2 bits. However, so far these approaches have been accompanied by a significant impact on accuracy. The paper proposes an iterative quantization scheme, in which the network weights are quantized in stages---the largest weights (in absolute value) are quantized and fixed, while unquantized weights can adapt to compensate for any resulting error. The experimental results show this is extremely effective, yielding models with 4 bit or 3 bit weights with essentially no reduction in accuracy. While at 2 bits the accuracy decreases slightly, the results are substantially better than those achieved with other quantization approaches. Overall this paper is clear, the technique is as far as I am aware novel, the experiments are thorough and the results are very compelling, so I recommend acceptance. The paper could use a second pass for writing style and grammar. Also, the description of the pruning-inspired partitioning strategy could be clarified somewhat... e.g., the chosen splitting ratio of 50% only seems to be referenced in a figure caption and not the main text. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nIncremental Network Quantization: Towards Lossless CNNs with Low-precision Weights\nThis paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization (a variable-length encoding: 1 bit for representing zero value, and the remaining 4 bits represent at most 16 different values for the powers of two), our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. We believe that our method sheds new insights on how to make deep CNNs to be applicable on mobile or embedded devices. The code will be made publicly available.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThere is a great deal of ongoing interest in compressing neural network models. One line of work has focused on using low-precision representations of the model weights, even down to 1 or 2 bits. However, so far these approaches have been accompanied by a significant impact on accuracy. The paper proposes an iterative quantization scheme, in which the network weights are quantized in stages---the largest weights (in absolute value) are quantized and fixed, while unquantized weights can adapt to compensate for any resulting error. The experimental results show this is extremely effective, yielding models with 4 bit or 3 bit weights with essentially no reduction in accuracy. While at 2 bits the accuracy decreases slightly, the results are substantially better than those achieved with other quantization approaches. Overall this paper is clear, the technique is as far as I am aware novel, the experiments are thorough and the results are very compelling, so I recommend acceptance. The paper could use another second pass for writing style and grammar. Also, the description of the pruning-inspired partitioning strategy could be clarified somewhat... e.g., the chosen splitting ratio of 50% only seems to be referenced in a figure caption and not the main text.",
"role": "assistant"
}
]
| [
"There is a great deal of ongoing interest in compressing neural network models.",
"One line of work has focused on using low-precision representations of the model weights, even down to 1 or 2 bits.",
"However, so far these approaches have been accompanied by a significant impact on accuracy.",
"The paper proposes an iterative quantization scheme, in which the network weights are quantized in stages---the largest weights (in absolute value) are quantized and fixed, while unquantized weights can adapt to compensate for any resulting error.",
"The experimental results show this is extremely effective, yielding models with 4 bit or 3 bit weights with essentially no reduction in accuracy.",
"While at 2 bits the accuracy decreases slightly, the results are substantially better than those achieved with other quantization approaches.",
"Overall this paper is clear, the technique is as far as I am aware novel, the experiments are thorough and the results are very compelling, so I recommend acceptance.",
"The paper could use another second pass for writing style and grammar.",
"Also, the description of the pruning-inspired partitioning strategy could be clarified somewhat...",
"e.g., the chosen splitting ratio of 50% only seems to be referenced in a figure caption and not the main text."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 9,
"praise": 3,
"presentation_and_reporting": 3,
"results_and_discussion": 4,
"suggestion_and_solution": 3,
"total": 10
} | 0 | 0 | 0.2 | 0.9 | 0.3 | 0.3 | 0.4 | 0.3 | 10 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.4 | 2.257958 | 0.142042 |
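The review above describes INQ's core loop: quantize the largest weights (in absolute value) to powers of two, freeze them, and let the remaining weights re-train to compensate. A minimal NumPy sketch of that loop follows; the 50% splitting ratio, the exponent range, and the stand-in gradient step are illustrative assumptions, not the paper's exact settings.

```python
# Illustrative sketch of incremental network quantization (INQ), assuming a
# 50% magnitude-based split per stage and a placeholder re-training step.
import numpy as np

def quantize_pow2(w, n_min=-6, n_max=-1):
    """Snap each weight to the nearest power of two in log space, or to zero."""
    sign, mag = np.sign(w), np.abs(w)
    exps = np.clip(np.round(np.log2(np.maximum(mag, 1e-12))), n_min, n_max)
    q = sign * 2.0 ** exps
    q[mag < 2.0 ** (n_min - 1)] = 0.0           # very small weights -> zero code
    return q

def inq_step(w, frozen, ratio=0.5):
    """Quantize-and-freeze the largest `ratio` fraction of still-free weights."""
    free = ~frozen
    k = int(ratio * free.sum())
    idx = np.argsort(-np.abs(w * free))[:k]     # largest free weights first
    w[idx] = quantize_pow2(w[idx])
    frozen[idx] = True
    return w, frozen

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=256)
frozen = np.zeros(256, dtype=bool)
for _ in range(4):                              # 50% -> 75% -> 87.5% -> 93.75%
    w, frozen = inq_step(w, frozen)
    grad = rng.normal(0.0, 0.01, size=w.shape)  # stand-in for real gradients
    w -= 0.1 * grad * ~frozen                   # re-train only unfrozen weights
w[~frozen] = quantize_pow2(w[~frozen])          # final stage: quantize the rest
```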
Incremental Network Quantization: Towards Lossless CNNs with Low-precision Weights | This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization (a variable-length encoding: 1 bit for representing zero value, and the remaining 4 bits represent at most 16 different values for the powers of two), our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. We believe that our method sheds new insights on how to make deep CNNs to be applicable on mobile or embedded devices. The code will be made publicly available. | The idea of this paper is reasonable - gradually go from original weights to compressed weights by compressing a part of them and fine-tuning the rest. Everything seems fine, results look good, and my questions have been addressed. To improve the paper: 1) It would be good to incorporate some of the answers into the paper, mainly the results with pruning + this method as that can be compared fairly to Han et al. and outperforms it. 2) It would be good to better explain the encoding method (my question 4) as it is not that clear from the paper (e.g. made me make a mistake in question 5 for the computation of n2). The *5 bits* is misleading as in fact what is used is variable length encoding (which is on average close to 5 bits) where: - 0 is represented with 1 bit, e.g. 0 - other values are represented with 5 bits, where the first bit is needed to distinguish from 0, and the remaining 4 bits represent the 16 different values for the powers of 2. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. 
The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nIncremental Network Quantization: Towards Lossless CNNs with Low-precision Weights\nThis paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization (a variable-length encoding: 1 bit for representing zero value, and the remaining 4 bits represent at most 16 different values for the powers of two), our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. We believe that our method sheds new insights on how to make deep CNNs to be applicable on mobile or embedded devices. The code will be made publicly available.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe idea of this paper is reasonable - gradually go from original weights to compressed weights by compressing a part of them and fine-tuning the rest. Everything seems fine, results look good, and my questions have been addressed. To improve the paper: 1) It would be good to incorporate some of the answers into the paper, mainly the results with pruning + this method as that can be compared fairly to Han et al. and outperforms it. 2) It would be good to better explain the encoding method (my question 4) as it is not that clear from the paper (e.g. made me make a mistake in question 5 for the computation of n2). The *5 bits* is misleading as in fact what is used is variable length encoding (which is on average close to 5 bits) where: - 0 is represented with 1 bit, e.g. 0 - other values are represented with 5 bits, where the first bit is needed to distinguish from 0, and the remaining 4 bits represent the 16 different values for the powers of 2.",
"role": "assistant"
}
]
| [
"The idea of this paper is reasonable - gradually go from original weights to compressed weights by compressing a part of them and fine-tuning the rest.",
"Everything seems fine, results look good, and my questions have been addressed.",
"To improve the paper: 1) It would be good to incorporate some of the answers into the paper, mainly the results with pruning + this method as that can be compared fairly to Han et al.",
"and outperforms it.",
"2) It would be good to better explain the encoding method (my question 4) as it is not that clear from the paper (e.g.",
"made me make a mistake in question 5 for the computation of n2).",
"The *5 bits* is misleading as in fact what is used is variable length encoding (which is on average close to 5 bits) where: - 0 is represented with 1 bit, e.g.",
"0 - other values are represented with 5 bits, where the first bit is needed to distinguish from 0, and the remaining 4 bits represent the 16 different values for the powers of 2."
] | {
"criticism": 3,
"example": 2,
"importance_and_relevance": 0,
"materials_and_methods": 5,
"praise": 2,
"presentation_and_reporting": 3,
"results_and_discussion": 2,
"suggestion_and_solution": 3,
"total": 8
} | 0.375 | 0.25 | 0 | 0.625 | 0.25 | 0.375 | 0.25 | 0.375 | 8 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 2.5 | 2.105439 | 0.394561 |
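The encoding spelled out at the end of this review (a 1-bit code for zero, otherwise a flag bit plus 4 bits indexing 16 powers of two) is easy to make concrete. The sketch below uses an assumed codebook of 16 signed powers of two; the actual value set in the paper is chosen per layer.

```python
# Variable-length weight encoding sketch: "0" encodes zero; "1" + 4 index bits
# encode one of 16 signed powers of two. CODEBOOK is an assumed example.
CODEBOOK = [s * 2.0 ** e for s in (+1, -1) for e in range(-7, 1)]   # 16 values

def encode(weights):
    bits = []
    for w in weights:
        if w == 0.0:
            bits.append("0")                      # 1-bit zero code
        else:
            i = CODEBOOK.index(w)                 # assumes w is already quantized
            bits.append("1" + format(i, "04b"))   # flag bit + 4 index bits
    return "".join(bits)

def decode(bitstring):
    out, i = [], 0
    while i < len(bitstring):
        if bitstring[i] == "0":                   # zero code
            out.append(0.0); i += 1
        else:                                     # 5-bit nonzero code
            out.append(CODEBOOK[int(bitstring[i + 1:i + 5], 2)]); i += 5
    return out

ws = [0.0, 0.25, -0.5, 0.0, 1.0]
assert decode(encode(ws)) == ws                   # round-trip check
```

With few zeros, the average cost sits just under 5 bits per weight, which matches the reviewer's reading of the paper's *5-bit* figure.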
Deep Multi-task Representation Learning: A Tensor Factorisation Approach | Most contemporary multi-task learning methods assume linear models. This setting is considered shallow in the era of deep learning. In this paper, we present a new deep multi-task representation learning framework that learns cross-task sharing structure at every layer in a deep network. Our approach is based on generalising the matrix factorisation techniques explicitly or implicitly used by many conventional MTL algorithms to tensor factorisation, to realise automatic learning of end-to-end knowledge sharing in deep networks. This is in contrast to existing deep learning approaches that need a user-defined multi-task sharing strategy. Our approach applies to both homogeneous and heterogeneous MTL. Experiments demonstrate the efficacy of our deep multi-task representation learning in terms of both higher accuracy and fewer design choices. | The paper proposed a nice framework leveraging Tucker and Tensor train low-rank tensor factorization to induce parameter sharing for multi-task learning. The framework is nice and appealing. However, MTL is a very well studied problem and the paper considers simple task for different classification, and it is not clear if we really need ``Deep Learning* for these simple datasets. A comparison with existing shallow MTL is necessary to show the benefits of the proposed methods (and in particular being deep) on the dataset. The authors ignore them on the basis of speculation and it is not clear if the proposed framework is really superior to simple regularizations like the nuclear norm. The idea of nuclear norm regularization can also be extended to deep learning as gradient descent are popular in all methods. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDeep Multi-task Representation Learning: A Tensor Factorisation Approach\nMost contemporary multi-task learning methods assume linear models. This setting is considered shallow in the era of deep learning. In this paper, we present a new deep multi-task representation learning framework that learns cross-task sharing structure at every layer in a deep network. Our approach is based on generalising the matrix factorisation techniques explicitly or implicitly used by many conventional MTL algorithms to tensor factorisation, to realise automatic learning of end-to-end knowledge sharing in deep networks. This is in contrast to existing deep learning approaches that need a user-defined multi-task sharing strategy. Our approach applies to both homogeneous and heterogeneous MTL. Experiments demonstrate the efficacy of our deep multi-task representation learning in terms of both higher accuracy and fewer design choices.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposed a nice framework leveraging Tucker and Tensor train low-rank tensor factorization to induce parameter sharing for multi-task learning. The framework is nice and appealing. However, MTL is a very well studied problem and the paper considers simple task for different classification, and it is not clear if we really need ``Deep Learning* for these simple datasets. A comparison with existing shallow MTL is necessary to show the benefits of the proposed methods (and in particular being deep) on the dataset. The authors ignore them on the basis of speculation and it is not clear if the proposed framework is really superior to simple regularizations like the nuclear norm. The idea of nuclear norm regularization can also be extended to deep learning as gradient descent are popular in all methods.",
"role": "assistant"
}
]
| [
"The paper proposed a nice framework leveraging Tucker and Tensor train low-rank tensor factorization to induce parameter sharing for multi-task learning.",
"The framework is nice and appealing.",
"However, MTL is a very well studied problem and the paper considers simple task for different classification, and it is not clear if we really need ``Deep Learning* for these simple datasets.",
"A comparison with existing shallow MTL is necessary to show the benefits of the proposed methods (and in particular being deep) on the dataset.",
"The authors ignore them on the basis of speculation and it is not clear if the proposed framework is really superior to simple regularizations like the nuclear norm.",
"The idea of nuclear norm regularization can also be extended to deep learning as gradient descent are popular in all methods."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 5,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 1,
"suggestion_and_solution": 2,
"total": 6
} | 0.333333 | 0 | 0.333333 | 0.833333 | 0.333333 | 0 | 0.166667 | 0.333333 | 6 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.333333 | 1.559994 | 0.773339 |
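To make the reviewed sharing mechanism concrete: per-task weight matrices are stacked into a 3-way tensor and replaced by a factorisation whose components are shared across tasks. The sketch below uses a plain rank-R SVD along the task mode as a stand-in for the Tucker and tensor-train factorisations the paper actually uses, so the dimensions and rank are illustrative assumptions.

```python
# Low-rank sharing across tasks: W[t] ~ sum_r task_factors[t, r] * shared[r].
import numpy as np

T, D_in, D_out, R = 8, 32, 16, 4                 # tasks, layer dims, shared rank
rng = np.random.default_rng(0)
W = rng.normal(size=(T, D_in, D_out))            # one weight matrix per task

unf = W.reshape(T, -1)                           # unfold along the task mode
U, s, Vt = np.linalg.svd(unf, full_matrices=False)
task_factors = U[:, :R] * s[:R]                  # (T, R): task-specific mixing
shared = Vt[:R].reshape(R, D_in, D_out)          # (R, D_in, D_out): shared basis

W_hat = np.einsum("tr,rio->tio", task_factors, shared)
print("relative error:", np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```

In the paper the factors are learned end-to-end by gradient descent rather than computed by SVD; the point here is only the shape of the sharing.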
Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with experience replay that is stable, sample efficient, and performs remarkably well on challenging environments, including the discrete 57-game Atari domain and several continuous control problems. To achieve this, the paper introduces several innovations, including truncated importance sampling with bias correction, stochastic dueling network architectures, and a new trust region policy optimization method. | The paper looks at several innovations for deep RL, and evaluates their effect on solving games in the Atari domain. The paper reads a bit like a laundry list of the researcher’s latest tricks. It is written clearly enough, but lacks a compelling message. I expect the work will be interesting to people already implementing deep RL methods, but will probably not get much attention from the broader community. The claims on p.1 suggest the approach is stable and sample efficience, and so I expected to see some theoretical analysis with respect to these properties. But this is an empirical claim; it would help to clarify that in the abstract. The proposed innovations are based on sound methods. It is particularly nice to see the same approach working for both discrete and continuous domains. The paper has reasonably complete empirical results. It would be nice to see confidence intervals on more of the plots. Also, the results don’t really tease apart the effect of each of the various innovations, so it’s harder to understand the impact of each piece and to really get intuition, for example about why ACER outperforms A3C. Also, it wasn’t clear to me why you only get matching results on discrete tasks, but get state-of-the-art on continuous tasks. The paper has good coverage of the related literature. It is nice to see this work draw more attention to Retrace, including the theoretical characterization in Sec.7. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSample Efficient Actor-Critic with Experience Replay\nThis paper presents an actor-critic deep reinforcement learning agent with experience replay that is stable, sample efficient, and performs remarkably well on challenging environments, including the discrete 57-game Atari domain and several continuous control problems. To achieve this, the paper introduces several innovations, including truncated importance sampling with bias correction, stochastic dueling network architectures, and a new trust region policy optimization method.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper looks at several innovations for deep RL, and evaluates their effect on solving games in the Atari domain. The paper reads a bit like a laundry list of the researcher’s latest tricks. It is written clearly enough, but lacks a compelling message. I expect the work will be interesting to people already implementing deep RL methods, but will probably not get much attention from the broader community. The claims on p.1 suggest the approach is stable and sample efficience, and so I expected to see some theoretical analysis with respect to these properties. But this is an empirical claim; it would help to clarify that in the abstract. The proposed innovations are based on sound methods. It is particularly nice to see the same approach working for both discrete and continuous domains. The paper has reasonably complete empirical results. It would be nice to see confidence intervals on more of the plots. Also, the results don’t really tease apart the effect of each of the various innovations, so it’s harder to understand the impact of each piece and to really get intuition, for example about why ACER outperforms A3C. Also, it wasn’t clear to me why you only get matching results on discrete tasks, but get state-of-the-art on continuous tasks. The paper has good coverage of the related literature. It is nice to see this work draw more attention to Retrace, including the theoretical characterization in Sec.7.",
"role": "assistant"
}
]
| [
"The paper looks at several innovations for deep RL, and evaluates their effect on solving games in the Atari domain.",
"The paper reads a bit like a laundry list of the researcher’s latest tricks.",
"It is written clearly enough, but lacks a compelling message.",
"I expect the work will be interesting to people already implementing deep RL methods, but will probably not get much attention from the broader community.",
"The claims on p.1 suggest the approach is stable and sample efficience, and so I expected to see some theoretical analysis with respect to these properties.",
"But this is an empirical claim; it would help to clarify that in the abstract.",
"The proposed innovations are based on sound methods.",
"It is particularly nice to see the same approach working for both discrete and continuous domains.",
"The paper has reasonably complete empirical results.",
"It would be nice to see confidence intervals on more of the plots.",
"Also, the results don’t really tease apart the effect of each of the various innovations, so it’s harder to understand the impact of each piece and to really get intuition, for example about why ACER outperforms A3C.",
"Also, it wasn’t clear to me why you only get matching results on discrete tasks, but get state-of-the-art on continuous tasks.",
"The paper has good coverage of the related literature.",
"It is nice to see this work draw more attention to Retrace, including the theoretical characterization in Sec.7."
] | {
"criticism": 4,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 8,
"praise": 6,
"presentation_and_reporting": 2,
"results_and_discussion": 5,
"suggestion_and_solution": 3,
"total": 14
} | 0.285714 | 0 | 0.214286 | 0.571429 | 0.428571 | 0.142857 | 0.357143 | 0.214286 | 14 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.214286 | 2.198503 | 0.015782 |
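One of the innovations named in this abstract, truncated importance sampling with bias correction, can be checked numerically for a discrete policy. The sketch below uses the value-estimate form rather than the full policy gradient, and all distributions are random placeholders; the threshold c = 10 is an assumption.

```python
# Truncated importance sampling with bias correction: the clipped behaviour-
# policy term plus a correction summed over all actions recovers E_pi[Q].
import numpy as np

rng = np.random.default_rng(0)
c = 10.0                                          # truncation threshold
pi = rng.dirichlet(np.ones(5))                    # current policy pi(a|s)
mu = rng.dirichlet(np.ones(5))                    # behaviour policy mu(a|s)
q = rng.normal(size=5)                            # critic estimates Q(s, a)
rho = pi / mu                                     # importance weights

truncated = np.sum(mu * np.minimum(rho, c) * q)   # E_{a~mu}[min(rho, c) Q]
correction = np.sum(np.maximum(0.0, (rho - c) / rho) * pi * q)

assert np.isclose(truncated + correction, np.sum(pi * q))   # unbiasedness
print(truncated + correction, np.sum(pi * q))
```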
Temporal Ensembling for Semi-Supervised Learning | In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44% to 7.05% in SVHN with 500 labels and from 18.63% to 16.55% in CIFAR-10 with 4000 labels, and further to 5.12% and 12.16% by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels. | This work explores taking advantage of the stochasticity of neural network outputs under randomized augmentation and regularization techniques to provide targets for unlabeled data in a semi-supervised setting. This is accomplished by either applying stochastic augmentation and regularization on a single image multiple times per epoch and encouraging the outputs to be similar (Π-model) or by keeping a weighted average of past epoch outputs and penalizing deviations of current network outputs from this running mean (temporal ensembling). The core argument is that these approaches produce ensemble predictions which are likely more accurate than the current network and are thus good targets for unlabeled data. Both approaches seem to work quite well on semi-supervised tasks and some results show that they are almost unbelievably robust to label noise. The paper is clearly written and provides sufficient details to reproduce these results in addition to providing a public code base. The core idea of the paper is quite interesting and seems to result in higher semi-supervised accuracy than prior work. I also found the attention to and discussion of the effect of different choices of data augmentation to be useful. I am a little surprised that a standard supervised network can achieve 30% accuracy on SVHN given 90% random training labels. This would only give 19% correctly labeled data (9% by chance + 10% unaltered). I suppose the other 81% would not provide a consistent training signal such that it is possible, but it does seem quite unintuitive. I tried to look through the github for this experiment but it does not seem to be included. As for the resistance of Π-model and temporal ensembling to this label noise, I find that somewhat more believable given the large weights placed on the consistency constraint for this task. The authors should really include discussion of w(t) in the main paper. Especially because the tremendous difference in w_max in the incorrect label tolerance experiment (10x for Π-model and 100x for temporal ensembling from the standard setting). Could the authors comment on the scalability to larger problems? For ImageNet, you would need to store around 4.8 gigs for the temporal ensembling method or spend 2x as long training with Π-model.
Can the authors discuss sensitivity of this approach to the amount and location of dropout layers in the architecture? Preliminary rating: I think this is a very interesting paper with quality results and clear presentation. Minor note: 2nd paragraph of page one *without neither* -> *without either* | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTemporal Ensembling for Semi-Supervised Learning\nIn this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44% to 7.05% in SVHN with 500 labels and from 18.63% to 16.55% in CIFAR-10 with 4000 labels, and further to 5.12% and 12.16% by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis work explores taking advantage of the stochasticity of neural network outputs under randomized augmentation and regularization techniques to provide targets for unlabeled data in a semi-supervised setting. This is accomplished by either applying stochastic augmentation and regularization on a single image multiple times per epoch and encouraging the outputs to be similar (?-model) or by keeping a weighted average of past epoch outputs and penalizing deviations of current network outputs from this running mean (temporal ensembling). The core argument is that these approaches produce ensemble predictions which are likely more accurate than the current network and are thus good targets for unlabeled data. Both approaches seem to work quite well on semi-supervised tasks and some results show that they are almost unbelievably robust to label noise. The paper is clearly written and provides sufficient details to reproduce these results in addition to providing a public code base. The core idea of the paper is quite interesting and seems to result in higher semi-supervised accuracy than prior work. I also found the attention to and discussion of the effect of different choices of data augmentation to be useful. I am a little surprised that a standard supervised network can achieve 30% accuracy on SVHN given 90% random training labels. This would only give 19% correctly labeled data (9% by chance + 10% unaltered). I suppose the other 81% would not provide a consistent training signal such that it is possible, but it does seem quite unintuitive. I tried to look through the github for this experiment but it does not seem to be included. As for the resistance of ?-model and temporal ensembling to this label noise, I find that somewhat more believable given the large weights placed on the consistency constraint for this task. The authors should really include discussion of w(t) in the main paper. Especially because the tremendous difference in w_max in the incorrect label tolerance experiment (10x for ?-model and 100x for temporal ensembling from the standard setting). 
Could the authors comment towards the scalability for larger problems? For ImageNet, you would need to store around 4.8 gigs for the temporal ensembling method or spend 2x as long training with ?-model. Can the authors discuss sensitivity of this approach to the amount and location of dropout layers in the architecture? Preliminary rating: I think this is a very interesting paper with quality results and clear presentation. Minor note: 2nd paragraph of page one *without neither* -> *without either*",
"role": "assistant"
}
]
| [
"This work explores taking advantage of the stochasticity of neural network outputs under randomized augmentation and regularization techniques to provide targets for unlabeled data in a semi-supervised setting.",
"This is accomplished by either applying stochastic augmentation and regularization on a single image multiple times per epoch and encouraging the outputs to be similar (?-model) or by keeping a weighted average of past epoch outputs and penalizing deviations of current network outputs from this running mean (temporal ensembling).",
"The core argument is that these approaches produce ensemble predictions which are likely more accurate than the current network and are thus good targets for unlabeled data.",
"Both approaches seem to work quite well on semi-supervised tasks and some results show that they are almost unbelievably robust to label noise.",
"The paper is clearly written and provides sufficient details to reproduce these results in addition to providing a public code base.",
"The core idea of the paper is quite interesting and seems to result in higher semi-supervised accuracy than prior work.",
"I also found the attention to and discussion of the effect of different choices of data augmentation to be useful.",
"I am a little surprised that a standard supervised network can achieve 30% accuracy on SVHN given 90% random training labels.",
"This would only give 19% correctly labeled data (9% by chance + 10% unaltered).",
"I suppose the other 81% would not provide a consistent training signal such that it is possible, but it does seem quite unintuitive.",
"I tried to look through the github for this experiment but it does not seem to be included.",
"As for the resistance of ?-model and temporal ensembling to this label noise, I find that somewhat more believable given the large weights placed on the consistency constraint for this task.",
"The authors should really include discussion of w(t) in the main paper.",
"Especially because the tremendous difference in w_max in the incorrect label tolerance experiment (10x for ?-model and 100x for temporal ensembling from the standard setting).",
"Could the authors comment towards the scalability for larger problems?",
"For ImageNet, you would need to store around 4.8 gigs for the temporal ensembling method or spend 2x as long training with ?-model.",
"Can the authors discuss sensitivity of this approach to the amount and location of dropout layers in the architecture?",
"Preliminary rating: I think this is a very interesting paper with quality results and clear presentation.",
"Minor note: 2nd paragraph of page one *without neither* -> *without either*"
] | {
"criticism": 4,
"example": 1,
"importance_and_relevance": 2,
"materials_and_methods": 14,
"praise": 6,
"presentation_and_reporting": 6,
"results_and_discussion": 9,
"suggestion_and_solution": 3,
"total": 19
} | 0.210526 | 0.052632 | 0.105263 | 0.736842 | 0.315789 | 0.315789 | 0.473684 | 0.157895 | 19 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.368421 | 1.800254 | 0.568167 |
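The temporal-ensembling target this review asks to scale up is a bias-corrected exponential moving average of per-epoch predictions, which also makes the reviewer's memory estimate concrete (one class-probability vector per training example). The sketch below follows common choices for alpha and the ramp-up weight w(t); both are assumptions rather than the paper's verbatim settings.

```python
# Temporal ensembling targets: EMA of epoch-wise outputs with startup bias
# correction, plus a ramped weight w(t) on the consistency loss.
import numpy as np

N, K, alpha = 1000, 10, 0.6                       # samples, classes, EMA momentum
Z = np.zeros((N, K))                              # accumulated ensemble outputs

def update_targets(Z, z_epoch, epoch):
    """Fold this epoch's predictions z_epoch (N, K) into the ensemble."""
    Z = alpha * Z + (1 - alpha) * z_epoch
    return Z, Z / (1 - alpha ** (epoch + 1))      # correct the zero-init bias

def w(t, t_max=80, w_max=30.0):
    """Gaussian ramp-up of the unsupervised weight (an assumed schedule)."""
    t = min(t, t_max)
    return w_max * np.exp(-5.0 * (1.0 - t / t_max) ** 2)

rng = np.random.default_rng(0)
for epoch in range(3):
    z_epoch = rng.dirichlet(np.ones(K), size=N)   # stand-in for softmax outputs
    Z, targets = update_targets(Z, z_epoch, epoch)
    unsup_loss = w(epoch) * np.mean((z_epoch - targets) ** 2)
```

Storing Z at ImageNet scale is exactly the ~4.8 GB cost the reviewer points at: roughly 1.28M examples times 1000 classes in float32.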
On Detecting Adversarial Perturbations | Machine learning and deep learning in particular has advanced tremendously on perceptual tasks in recent years. However, it remains vulnerable against adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to a human. In this work, we propose to augment deep neural networks with a small ``detector'' subnetwork which is trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. Our method is orthogonal to prior work on addressing adversarial perturbations, which has mostly focused on making the classification network itself more robust. We show empirically that adversarial perturbations can be detected surprisingly well even though they are quasi-imperceptible to humans. Moreover, while the detectors have been trained to detect only a specific adversary, they generalize to similar and weaker adversaries. In addition, we propose an adversarial attack that fools both the classifier and the detector and a novel training procedure for the detector that counteracts this attack. | This paper explores an important angle to adversarial examples: the detection of adversarial images and their utilization for training more robust networks. This takes the competition between adversaries and models to a new level. The paper presents appealing evidence for the feasibility of robustifying networks by employing a detector subnetwork that is trained particularly for the purpose of detecting the adversaries in a targeted manner rather than just making the networks themselves robust to adversarial examples. The jointly trained primary/detector system is evaluated in various scenarios including the cases when the adversary generator has access to the model and those where they are generated in a generic way. The results of the paper show good improvements with the approach and present well-motivated, thorough analyses to back the main message. The writing is clear and concise. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOn Detecting Adversarial Perturbations\nMachine learning and deep learning in particular has advanced tremendously on perceptual tasks in recent years. However, it remains vulnerable against adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to a human. In this work, we propose to augment deep neural networks with a small ``detector** subnetwork which is trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. Our method is orthogonal to prior work on addressing adversarial perturbations, which has mostly focused on making the classification network itself more robust. We show empirically that adversarial perturbations can be detected surprisingly well even though they are quasi-imperceptible to humans. Moreover, while the detectors have been trained to detect only a specific adversary, they generalize to similar and weaker adversaries. In addition, we propose an adversarial attack that fools both the classifier and the detector and a novel training procedure for the detector that counteracts this attack.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper explores an important angle to adversarial examples: the detection of adversarial images and their utilization for trainig more robust networks. This takes the competition between adversaries and models to a new level. The paper presents appealing evidence for the feasibility of robustifying networks by employing the a detector subnetwork that is trained particularly for the purpose of detecting the adversaries in a terget manner rather than just making the networks themselves robust to adversarial examples. The jointly trained primary/detector system is evaluated in various scenarios including the cases when the adversary generator has access to the model and those where they are generated in a generic way. The results of the paper show good improvements with the approach and present well motived thorough analyses to back the main message. The writing is clear and concise.",
"role": "assistant"
}
]
| [
"This paper explores an important angle to adversarial examples: the detection of adversarial images and their utilization for trainig more robust networks.",
"This takes the competition between adversaries and models to a new level.",
"The paper presents appealing evidence for the feasibility of robustifying networks by employing the a detector subnetwork that is trained particularly for the purpose of detecting the adversaries in a terget manner rather than just making the networks themselves robust to adversarial examples.",
"The jointly trained primary/detector system is evaluated in various scenarios including the cases when the adversary generator has access to the model and those where they are generated in a generic way.",
"The results of the paper show good improvements with the approach and present well motived thorough analyses to back the main message.",
"The writing is clear and concise."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 5,
"praise": 4,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 0,
"total": 6
} | 0 | 0 | 0.333333 | 0.833333 | 0.666667 | 0.166667 | 0.166667 | 0 | 6 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 2.166667 | 1.393328 | 0.773339 |
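The detector described in this abstract is, at its core, a small binary classifier attached to the primary network. The sketch below trains logistic regression on synthetic features standing in for intermediate activations; the feature shift, sizes, and learning rate are all illustrative assumptions, not the paper's convolutional detector.

```python
# Toy detector: logistic regression separating genuine vs. adversarial
# feature vectors with plain batch gradient descent.
import numpy as np

rng = np.random.default_rng(0)
feats_clean = rng.normal(0.0, 1.0, size=(500, 64))
feats_adv = rng.normal(0.4, 1.0, size=(500, 64))  # perturbations shift features
X = np.vstack([feats_clean, feats_adv])
y = np.concatenate([np.zeros(500), np.ones(500)]) # 1 = adversarial

w, b = np.zeros(64), 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # detector's P(adversarial | x)
    g = p - y                                     # logistic-loss residual
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()
print("training accuracy:", ((p > 0.5) == y).mean())
```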
Training deep neural-networks using a noise adaptation layer | The availability of large datsets has enabled neural networks to achieve impressive recognition results. However, the presence of inaccurate class labels is known to deteriorate the performance of even the best classifiers in a broad range of classification problems. Noisy labels also tend to be more harmful than noisy attributes. When the observed label is noisy, we can view the correct label as a latent random variable and model the noise processes by a communication channel with unknown parameters. Thus we can apply the EM algorithm to find the parameters of both the network and the noise and to estimate the correct label. In this study we present a neural-network approach that optimizes the same likelihood function as optimized by the EM algorithm. The noise is explicitly modeled by an additional softmax layer that connects the correct labels to the noisy ones. This scheme is then extended to the case where the noisy labels are dependent on the features in addition to the correct labels. Experimental results demonstrate that this approach outperforms previous methods. | The paper addressed the erroneous label problem for supervised training. The problem is well formulated and the presented solution is novel. The experimental justification is limited. The effectiveness of the proposed method is hard to gauge, especially how to scale the proposed method to large number of classification targets and whether it is still effective. For example, it would be interesting to see whether the proposed method is better than training with only less but high quality data. From Figure 2, it seems with more data, the proposed method tends to behave very well when the noise fraction is below a threshold and dramatically degrades once passing that threshold. Analysis and justification of this behavior whether it is just by chance or an expected one of the method would be very useful. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTraining deep neural-networks using a noise adaptation layer\nThe availability of large datsets has enabled neural networks to achieve impressive recognition results. However, the presence of inaccurate class labels is known to deteriorate the performance of even the best classifiers in a broad range of classification problems. Noisy labels also tend to be more harmful than noisy attributes. When the observed label is noisy, we can view the correct label as a latent random variable and model the noise processes by a communication channel with unknown parameters. Thus we can apply the EM algorithm to find the parameters of both the network and the noise and to estimate the correct label. In this study we present a neural-network approach that optimizes the same likelihood function as optimized by the EM algorithm. The noise is explicitly modeled by an additional softmax layer that connects the correct labels to the noisy ones. This scheme is then extended to the case where the noisy labels are dependent on the features in addition to the correct labels. Experimental results demonstrate that this approach outperforms previous methods.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper addressed the erroneous label problem for supervised training. The problem is well formulated and the presented solution is novel. The experimental justification is limited. The effectiveness of the proposed method is hard to gauge, especially how to scale the proposed method to large number of classification targets and whether it is still effective. For example, it would be interesting to see whether the proposed method is better than training with only less but high quality data. From Figure 2, it seems with more data, the proposed method tends to behave very well when the noise fraction is below a threshold and dramatically degrades once passing that threshold. Analysis and justification of this behavior whether it is just by chance or an expected one of the method would be very useful.",
"role": "assistant"
}
]
| [
"The paper addressed the erroneous label problem for supervised training.",
"The problem is well formulated and the presented solution is novel.",
"The experimental justification is limited.",
"The effectiveness of the proposed method is hard to gauge, especially how to scale the proposed method to large number of classification targets and whether it is still effective.",
"For example, it would be interesting to see whether the proposed method is better than training with only less but high quality data.",
"From Figure 2, it seems with more data, the proposed method tends to behave very well when the noise fraction is below a threshold and dramatically degrades once passing that threshold.",
"Analysis and justification of this behavior whether it is just by chance or an expected one of the method would be very useful."
] | {
"criticism": 2,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 5,
"praise": 1,
"presentation_and_reporting": 1,
"results_and_discussion": 2,
"suggestion_and_solution": 2,
"total": 7
} | 0.285714 | 0.142857 | 0.142857 | 0.714286 | 0.142857 | 0.142857 | 0.285714 | 0.285714 | 7 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.142857 | 1.57469 | 0.568167 |
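The noise adaptation layer in this abstract composes the network's softmax over true labels with a learned confusion matrix theta[i, j] ~ p(noisy = j | true = i), and maximizes the likelihood of the observed noisy labels. The sketch below fits only that layer by gradient descent on fixed stand-in network outputs; the sizes, learning rate, and iteration count are assumptions.

```python
# Fitting a noise-adaptation (confusion) layer: p(noisy | x) = p(true | x) @ theta,
# with theta parameterised row-wise by a softmax over logits.
import numpy as np

rng = np.random.default_rng(0)
K, N = 3, 6
probs = rng.dirichlet(np.ones(K), size=N)          # stand-in p(true label | x)
noisy = rng.integers(0, K, size=N)                 # observed noisy labels

logits = rng.normal(size=(K, K))                   # noise-layer parameters
for _ in range(100):
    theta = np.exp(logits)
    theta /= theta.sum(axis=1, keepdims=True)      # rows: p(noisy | true)
    p_noisy = probs @ theta                        # marginal p(noisy label | x)
    nll = -np.log(p_noisy[np.arange(N), noisy]).mean()
    g_pn = np.zeros((N, K))                        # d nll / d p_noisy
    g_pn[np.arange(N), noisy] = -1.0 / (N * p_noisy[np.arange(N), noisy])
    g_theta = probs.T @ g_pn                       # chain rule through the matmul
    g_logits = theta * (g_theta - (g_theta * theta).sum(axis=1, keepdims=True))
    logits -= 0.5 * g_logits                       # gradient step on the layer
print("noisy-label NLL after fitting:", round(nll, 3))
```

In the paper this layer is trained jointly with the network; the network outputs are frozen here purely to keep the example self-contained.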