Dataset Viewer

| Column | Dtype | Range / Values |
|---|---|---|
| title | string | lengths 2–153 |
| abstract | string | lengths 0–2.15k |
| review | string | lengths 97–8.43k |
| source | string | 4 classes |
| review_src | string | 1 class |
| thinking_trace | string | 1 class |
| conversations | list | length 2 |
| review_sentences | sequence | lengths 1–61 |
| criteria_count | dict | per-criterion counts |
| criticism | float64 | 0–1 |
| example | float64 | 0–1 |
| importance_and_relevance | float64 | 0–1 |
| materials_and_methods | float64 | 0–1 |
| praise | float64 | 0–1 |
| presentation_and_reporting | float64 | 0–1 |
| results_and_discussion | float64 | 0–1 |
| suggestion_and_solution | float64 | 0–1 |
| total | int64 | 1–61 |
| has_criticism | int64 | 0–1 |
| has_example | int64 | 0–1 |
| has_importance_and_relevance | int64 | 0–1 |
| has_materials_and_methods | int64 | 0–1 |
| has_praise | int64 | 0–1 |
| has_presentation_and_reporting | int64 | 0–1 |
| has_results_and_discussion | int64 | 0–1 |
| has_suggestion_and_solution | int64 | 0–1 |
| reward_value | float64 | 2.13–6 |
| reward_value_length_adjusted | float64 | -34.13–3.73 |
| length_penalty | float64 | 0–36.4 |
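A minimal sketch of loading and inspecting a dataset with this schema using the `datasets` library; the repository id below is a placeholder, since the actual path is not shown on this page:

```python
from datasets import load_dataset

# Placeholder repository id: substitute the actual path of this dataset.
ds = load_dataset("your-org/peer-review-rewards", split="train")

print(ds.column_names)        # title, abstract, review, source, ...
row = ds[0]
print(row["title"])           # "Making Neural Programming Architectures ..."
print(row["criteria_count"])  # {"criticism": 0, ..., "total": 3}
```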
Making Neural Programming Architectures Generalize via Recursion | Empirically, neural networks that attempt to learn programs from data have exhibited poor generalizability. Moreover, it has traditionally been difficult to reason about the behavior of these models beyond a certain level of input complexity. In order to address these issues, we propose augmenting neural architectures with a key abstraction: recursion. As an application, we implement recursion in the Neural Programmer-Interpreter framework on four tasks: grade-school addition, bubble sort, topological sort, and quicksort. We demonstrate superior generalizability and interpretability with small amounts of training data. Recursion divides the problem into smaller pieces and drastically reduces the domain of each neural network component, making it tractable to prove guarantees about the overall system’s behavior. Our experience suggests that in order for neural architectures to robustly learn program semantics, it is necessary to incorporate a concept like recursion. | This paper improves significantly upon the original NPI work, showing that the model generalizes far better when trained on traces in recursive form. The authors show better sample complexity and generalization results for addition and bubblesort programs, and add two new and more interesting tasks - topological sort and quicksort (added based on reviewer discussion). Furthermore, they actually *prove* that the algorithms learned by the model generalize perfectly, which to my knowledge is the first time this has been done in neural program induction. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMaking Neural Programming Architectures Generalize via Recursion\nEmpirically, neural networks that attempt to learn programs from data have exhibited poor generalizability. Moreover, it has traditionally been difficult to reason about the behavior of these models beyond a certain level of input complexity. In order to address these issues, we propose augmenting neural architectures with a key abstraction: recursion. As an application, we implement recursion in the Neural Programmer-Interpreter framework on four tasks: grade-school addition, bubble sort, topological sort, and quicksort. We demonstrate superior generalizability and interpretability with small amounts of training data. Recursion divides the problem into smaller pieces and drastically reduces the domain of each neural network component, making it tractable to prove guarantees about the overall system’s behavior. Our experience suggests that in order for neural architectures to robustly learn program semantics, it is necessary to incorporate a concept like recursion.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper improves significantly upon the original NPI work, showing that the model generalizes far better when trained on traces in recursive form. The authors show better sample complexity and generalization results for addition and bubblesort programs, and add two new and more interesting tasks - topological sort and quicksort (added based on reviewer discussion). Furthermore, they actually *prove* that the algorithms learned by the model generalize perfectly, which to my knowledge is the first time this has been done in neural program induction.",
"role": "assistant"
}
]
| [
"This paper improves significantly upon the original NPI work, showing that the model generalizes far better when trained on traces in recursive form.",
"The authors show better sample complexity and generalization results for addition and bubblesort programs, and add two new and more interesting tasks - topological sort and quicksort (added based on reviewer discussion).",
"Furthermore, they actually *prove* that the algorithms learned by the model generalize perfectly, which to my knowledge is the first time this has been done in neural program induction."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 3,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 3
} | 0 | 0 | 0.333333 | 1 | 0.666667 | 0 | 0.333333 | 0.333333 | 3 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.666667 | 1.088424 | 1.578243 |
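The trailing numeric columns of each row appear to be derived from `criteria_count`: each per-criterion float is that criterion's count divided by `total`, each `has_*` flag is 1 when the count is nonzero, `reward_value` is the sum of the eight fractions, and `reward_value_length_adjusted` equals `reward_value` minus `length_penalty` (the formula behind `length_penalty` itself is not shown on this page). A sketch of that derivation, which reproduces the numbers in the row above:

```python
CRITERIA = [
    "criticism", "example", "importance_and_relevance", "materials_and_methods",
    "praise", "presentation_and_reporting", "results_and_discussion",
    "suggestion_and_solution",
]

def derive_columns(criteria_count: dict, length_penalty: float) -> dict:
    total = criteria_count["total"]
    # Per-criterion rates over the review's sentences.
    out = {c: criteria_count[c] / total for c in CRITERIA}
    # Binary presence flags.
    out.update({f"has_{c}": int(criteria_count[c] > 0) for c in CRITERIA})
    # Reward is the sum of the eight rates, then penalized for length.
    out["reward_value"] = sum(criteria_count[c] for c in CRITERIA) / total
    out["reward_value_length_adjusted"] = out["reward_value"] - length_penalty
    return out

# Row above: (0 + 0 + 1 + 3 + 2 + 0 + 1 + 1) / 3 = 2.666667,
# and 2.666667 - 1.578243 (length_penalty) = 1.088424 (length-adjusted).
```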
End-to-end Optimized Image Compression | We describe an image compression method, consisting of a nonlinear analysis transformation, a uniform quantizer, and a nonlinear synthesis transformation. The transforms are constructed in three successive stages of convolutional linear filters and nonlinear activation functions. Unlike most convolutional neural networks, the joint nonlinearity is chosen to implement a form of local gain control, inspired by those used to model biological neurons. Using a variant of stochastic gradient descent, we jointly optimize the entire model for rate-distortion performance over a database of training images, introducing a continuous proxy for the discontinuous loss function arising from the quantizer. Under certain conditions, the relaxed loss function may be interpreted as the log likelihood of a generative model, as implemented by a variational autoencoder. Unlike these models, however, the compression model must operate at any given point along the rate-distortion curve, as specified by a trade-off parameter. Across an independent set of test images, we find that the optimized method generally exhibits better rate-distortion performance than the standard JPEG and JPEG 2000 compression methods. More importantly, we observe a dramatic improvement in visual quality for all images at all bit rates, which is supported by objective quality estimates using MS-SSIM. | This is the most convincing paper on image compression with deep neural networks that I have read so far. The paper is very well written, the use of the rate-distortion theory in the objective fits smoothly in the framework. The paper is compared to a reasonable baseline (JPEG2000, as opposed to previous papers only considering JPEG). I would expect this paper to have a very good impact. Yes, please include results on Lena/Barbara/Baboon (sorry, not Gibbons), along with state-of-the-art references with more classical methods such as the one I mentioned in my questions. I think it is important to clearly state how NN compare to best previous methods. From the submitted version, I still don*t know how both categories of methods are positioned. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEnd-to-end Optimized Image Compression\nWe describe an image compression method, consisting of a nonlinear analysis transformation, a uniform quantizer, and a nonlinear synthesis transformation. The transforms are constructed in three successive stages of convolutional linear filters and nonlinear activation functions. Unlike most convolutional neural networks, the joint nonlinearity is chosen to implement a form of local gain control, inspired by those used to model biological neurons. Using a variant of stochastic gradient descent, we jointly optimize the entire model for rate-distortion performance over a database of training images, introducing a continuous proxy for the discontinuous loss function arising from the quantizer. Under certain conditions, the relaxed loss function may be interpreted as the log likelihood of a generative model, as implemented by a variational autoencoder. Unlike these models, however, the compression model must operate at any given point along the rate-distortion curve, as specified by a trade-off parameter. Across an independent set of test images, we find that the optimized method generally exhibits better rate-distortion performance than the standard JPEG and JPEG 2000 compression methods. More importantly, we observe a dramatic improvement in visual quality for all images at all bit rates, which is supported by objective quality estimates using MS-SSIM.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis is the most convincing paper on image compression with deep neural networks that I have read so far. The paper is very well written, the use of the rate-distortion theory in the objective fits smoothly in the framework. The paper is compared to a reasonable baseline (JPEG2000, as opposed to previous papers only considering JPEG). I would expect this paper to have a very good impact. Yes, please include results on Lena/Barbara/Baboon (sorry, not Gibbons), along with state-of-the-art references with more classical methods such as the one I mentioned in my questions. I think it is important to clearly state how NN compare to best previous methods. From the submitted version, I still don*t know how both categories of methods are positioned.",
"role": "assistant"
}
]
| [
"This is the most convincing paper on image compression with deep neural networks that I have read so far.",
"The paper is very well written, the use of the rate-distortion theory in the objective fits smoothly in the framework.",
"The paper is compared to a reasonable baseline (JPEG2000, as opposed to previous papers only considering JPEG).",
"I would expect this paper to have a very good impact.",
"Yes, please include results on Lena/Barbara/Baboon (sorry, not Gibbons), along with state-of-the-art references with more classical methods such as the one I mentioned in my questions.",
"I think it is important to clearly state how NN compare to best previous methods.",
"From the submitted version, I still don*t know how both categories of methods are positioned."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 5,
"praise": 2,
"presentation_and_reporting": 3,
"results_and_discussion": 1,
"suggestion_and_solution": 2,
"total": 7
} | 0.142857 | 0 | 0.142857 | 0.714286 | 0.285714 | 0.428571 | 0.142857 | 0.285714 | 7 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.142857 | 1.57469 | 0.568167 |
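`review_sentences` is the `review` text split into sentences, and `criteria_count["total"]` matches its length in every row shown (3, 7, 3, 5, ...); the per-criterion counts then tally sentence-level labels, so a single sentence can carry several labels (the row above has 7 sentences but 15 label hits in total). A naive splitter sketch; the dataset's actual segmenter is not specified:

```python
import re

def split_review(review: str) -> list[str]:
    # Split on sentence-final punctuation followed by whitespace; only an
    # approximation of whatever segmenter produced review_sentences.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", review) if s.strip()]
```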
End-to-end Optimized Image Compression | We describe an image compression method, consisting of a nonlinear analysis transformation, a uniform quantizer, and a nonlinear synthesis transformation. The transforms are constructed in three successive stages of convolutional linear filters and nonlinear activation functions. Unlike most convolutional neural networks, the joint nonlinearity is chosen to implement a form of local gain control, inspired by those used to model biological neurons. Using a variant of stochastic gradient descent, we jointly optimize the entire model for rate-distortion performance over a database of training images, introducing a continuous proxy for the discontinuous loss function arising from the quantizer. Under certain conditions, the relaxed loss function may be interpreted as the log likelihood of a generative model, as implemented by a variational autoencoder. Unlike these models, however, the compression model must operate at any given point along the rate-distortion curve, as specified by a trade-off parameter. Across an independent set of test images, we find that the optimized method generally exhibits better rate-distortion performance than the standard JPEG and JPEG 2000 compression methods. More importantly, we observe a dramatic improvement in visual quality for all images at all bit rates, which is supported by objective quality estimates using MS-SSIM. | This nicely written paper presents an end-to-end learning method for image compression. By optimizing for rate-distortion performance and a clever relaxation the method is able to learn an efficient image compression method by optimizing over a database of natural images. As the method is interesting, results are interesting and analysis is quite thorough it*s easy for me to recommend acceptance. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEnd-to-end Optimized Image Compression\nWe describe an image compression method, consisting of a nonlinear analysis transformation, a uniform quantizer, and a nonlinear synthesis transformation. The transforms are constructed in three successive stages of convolutional linear filters and nonlinear activation functions. Unlike most convolutional neural networks, the joint nonlinearity is chosen to implement a form of local gain control, inspired by those used to model biological neurons. Using a variant of stochastic gradient descent, we jointly optimize the entire model for rate-distortion performance over a database of training images, introducing a continuous proxy for the discontinuous loss function arising from the quantizer. Under certain conditions, the relaxed loss function may be interpreted as the log likelihood of a generative model, as implemented by a variational autoencoder. Unlike these models, however, the compression model must operate at any given point along the rate-distortion curve, as specified by a trade-off parameter. Across an independent set of test images, we find that the optimized method generally exhibits better rate-distortion performance than the standard JPEG and JPEG 2000 compression methods. More importantly, we observe a dramatic improvement in visual quality for all images at all bit rates, which is supported by objective quality estimates using MS-SSIM.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis nicely written paper presents an end-to-end learning method for image compression. By optimizing for rate-distortion performance and a clever relaxation the method is able to learn an efficient image compression method by optimizing over a database of natural images. As the method is interesting, results are interesting and analysis is quite thorough it*s easy for me to recommend acceptance.",
"role": "assistant"
}
]
| [
"This nicely written paper presents an end-to-end learning method for image compression.",
"By optimizing for rate-distortion performance and a clever relaxation the method is able to learn an efficient image compression method by optimizing over a database of natural images.",
"As the method is interesting, results are interesting and analysis is quite thorough it*s easy for me to recommend acceptance."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 3,
"praise": 3,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 0,
"total": 3
} | 0 | 0 | 0.333333 | 1 | 1 | 0.333333 | 0.333333 | 0 | 3 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 3 | 1.421757 | 1.578243 |
Optimization as a Model for Few-Shot Learning | Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning. | This paper describes a new approach to meta learning by interpreting the SGD update rule as gated recurrent model with trainable parameters. The idea is original and important for research related to transfer learning. The paper has a clear structure, but clarity could be improved at some points. Pros: - An interesting and feasible approach to meta-learning - Competitive results and proper comparison to state-of-the-art - Good recommendations for practical systems Cons: - The analogy would be closer to GRUs than LSTMs - The description of the data separation in meta sets is hard to follow and could be visualized - The experimental evaluation is only partly satisfying, especially the effect of the parameters of i_t and f_t would be of interest - Fig 2 doesn*t have much value Remarks: - Small typo in 3.2: *This means each coordinate has it* -> its > We plan on releasing the code used in our evaluation experiments. This would certainly be a major plus. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOptimization as a Model for Few-Shot Learning\nThough deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper describes a new approach to meta learning by interpreting the SGD update rule as gated recurrent model with trainable parameters. The idea is original and important for research related to transfer learning. The paper has a clear structure, but clarity could be improved at some points. Pros: - An interesting and feasible approach to meta-learning - Competitive results and proper comparison to state-of-the-art - Good recommendations for practical systems Cons: - The analogy would be closer to GRUs than LSTMs - The description of the data separation in meta sets is hard to follow and could be visualized - The experimental evaluation is only partly satisfying, especially the effect of the parameters of i_t and f_t would be of interest - Fig 2 doesn*t have much value Remarks: - Small typo in 3.2: *This means each coordinate has it* -> its > We plan on releasing the code used in our evaluation experiments. This would certainly be a major plus.",
"role": "assistant"
}
]
| [
"This paper describes a new approach to meta learning by interpreting the SGD update rule as gated recurrent model with trainable parameters.",
"The idea is original and important for research related to transfer learning.",
"The paper has a clear structure, but clarity could be improved at some points.",
"Pros: - An interesting and feasible approach to meta-learning - Competitive results and proper comparison to state-of-the-art - Good recommendations for practical systems Cons: - The analogy would be closer to GRUs than LSTMs - The description of the data separation in meta sets is hard to follow and could be visualized - The experimental evaluation is only partly satisfying, especially the effect of the parameters of i_t and f_t would be of interest - Fig 2 doesn*t have much value Remarks: - Small typo in 3.2: *This means each coordinate has it* -> its > We plan on releasing the code used in our evaluation experiments.",
"This would certainly be a major plus."
] | {
"criticism": 1,
"example": 1,
"importance_and_relevance": 3,
"materials_and_methods": 2,
"praise": 3,
"presentation_and_reporting": 2,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 5
} | 0.2 | 0.2 | 0.6 | 0.4 | 0.6 | 0.4 | 0.2 | 0.2 | 5 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.8 | 1.789924 | 1.010076 |
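Because the `has_*` flags are plain 0/1 integers, they make convenient filters; for example, selecting reviews that voice explicit criticism and ranking them by the length-adjusted reward (using `ds` from the loading sketch above):

```python
# Keep only rows whose review contains at least one criticism sentence.
critical = ds.filter(lambda r: r["has_criticism"] == 1)
# Rank by the length-adjusted reward, highest first.
ranked = critical.sort("reward_value_length_adjusted", reverse=True)
for r in ranked.select(range(3)):
    print(round(r["reward_value_length_adjusted"], 2), r["title"])
```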
Multi-Agent Cooperation and the Emergence of (Natural) Language | The current mainstream approach to train natural language systems is to expose them to large amounts of text. This passive learning is problematic if we are in- terested in developing interactive machines, such as conversational agents. We propose a framework for language learning that relies on multi-agent communi- cation. We study this learning in the context of referential games. In these games, a sender and a receiver see a pair of images. The sender is told one of them is the target and is allowed to send a message to the receiver, while the receiver must rely on it to identify the target. Thus, the agents develop their own language interactively out of the need to communicate. We show that two networks with simple configurations are able to learn to coordinate in the referential game. We further explore whether the “word meanings” induced in the game reflect intuitive semantic properties of the objects depicted in the image, and we present a simple strategy for grounding the agents’ code into natural language, a necessary step in developing machines that should eventually be able to communicate with humans. | To train natural language systems by putting multiple agents within an interactive referential communication game is very nice. As the authors mention, there has been some (although seemingly not much) previous work on using multi-agent games to teach communication, and it certainly seems like a direction worth pursuing. Moreover, the approach of switching between these games and some supervised learning, as in the experiment described in Section 5 and suggested in Section 6, seems particularly fruitful. Note: For “clarity”, I believe some of the network connections in Fig 1 have been omitted. However, given the rather highly-customized architecture and the slightly hard-to-follow description in Section 3, the shorthand diagram only adds to the confusion. The diagram probably needs to be fine-tuned, but at the very least (especially if I am misunderstanding it!), a caption must [still] be added to help the reader interpret the figure. Overall, the framework (Section 2) is great and seems quite effective/useful in various ways, the results are reasonable, and I expect there will be some interesting future variations on this work as well. Caveat: While I am quite confident I understood the paper (as per confidence score below), I do not feel I am sufficiently familiar with the most closely related literature to accurately assess the place of this work within that context. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMulti-Agent Cooperation and the Emergence of (Natural) Language\nThe current mainstream approach to train natural language systems is to expose them to large amounts of text. This passive learning is problematic if we are in- terested in developing interactive machines, such as conversational agents. We propose a framework for language learning that relies on multi-agent communi- cation. We study this learning in the context of referential games. In these games, a sender and a receiver see a pair of images. The sender is told one of them is the target and is allowed to send a message to the receiver, while the receiver must rely on it to identify the target. Thus, the agents develop their own language interactively out of the need to communicate. We show that two networks with simple configurations are able to learn to coordinate in the referential game. We further explore whether the “word meanings” induced in the game reflect intuitive semantic properties of the objects depicted in the image, and we present a simple strategy for grounding the agents’ code into natural language, a necessary step in developing machines that should eventually be able to communicate with humans.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nTo train natural language systems by putting multiple agents within an interactive referential communication game is very nice. As the authors mention, there has been some (although seemingly not much) previous work on using multi-agent games to teach communication, and it certainly seems like a direction worth pursuing. Moreover, the approach of switching between these games and some supervised learning, as in the experiment described in Section 5 and suggested in Section 6, seems particularly fruitful. Note: For “clarity”, I believe some of the network connections in Fig 1 have been omitted. However, given the rather highly-customized architecture and the slightly hard-to-follow description in Section 3, the shorthand diagram only adds to the confusion. The diagram probably needs to be fine-tuned, but at the very least (especially if I am misunderstanding it!), a caption must [still] be added to help the reader interpret the figure. Overall, the framework (Section 2) is great and seems quite effective/useful in various ways, the results are reasonable, and I expect there will be some interesting future variations on this work as well. Caveat: While I am quite confident I understood the paper (as per confidence score below), I do not feel I am sufficiently familiar with the most closely related literature to accurately assess the place of this work within that context.",
"role": "assistant"
}
]
| [
"To train natural language systems by putting multiple agents within an interactive referential communication game is very nice.",
"As the authors mention, there has been some (although seemingly not much) previous work on using multi-agent games to teach communication, and it certainly seems like a direction worth pursuing.",
"Moreover, the approach of switching between these games and some supervised learning, as in the experiment described in Section 5 and suggested in Section 6, seems particularly fruitful.",
"Note: For “clarity”, I believe some of the network connections in Fig 1 have been omitted.",
"However, given the rather highly-customized architecture and the slightly hard-to-follow description in Section 3, the shorthand diagram only adds to the confusion.",
"The diagram probably needs to be fine-tuned, but at the very least (especially if I am misunderstanding it!",
"), a caption must [still] be added to help the reader interpret the figure.",
"Overall, the framework (Section 2) is great and seems quite effective/useful in various ways, the results are reasonable, and I expect there will be some interesting future variations on this work as well.",
"Caveat: While I am quite confident I understood the paper (as per confidence score below), I do not feel I am sufficiently familiar with the most closely related literature to accurately assess the place of this work within that context."
] | {
"criticism": 3,
"example": 1,
"importance_and_relevance": 3,
"materials_and_methods": 2,
"praise": 3,
"presentation_and_reporting": 5,
"results_and_discussion": 1,
"suggestion_and_solution": 3,
"total": 9
} | 0.333333 | 0.111111 | 0.333333 | 0.222222 | 0.333333 | 0.555556 | 0.111111 | 0.333333 | 9 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.333333 | 2.080814 | 0.252519 |
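Each `conversations` value is a fixed two-turn chat: a user message carrying the review prompt plus the paper's title and abstract, and an assistant message whose content is the `thinking_trace` (which already includes the `<think>...</think>` tags) followed by the `review`. A sketch reconstructing that format from the scalar fields:

```python
def to_conversation(row: dict) -> list[dict]:
    prompt = (
        "You are a member of the scientific community tasked with peer review. "
        "Review the following paper title and abstract.\n### Title + Abstract\n"
        f"{row['title']}\n{row['abstract']}"
    )
    # thinking_trace already carries the <think>...</think> wrapper.
    reply = f"{row['thinking_trace']}\n\n\n\n{row['review']}"
    return [
        {"content": prompt, "role": "user"},
        {"content": reply, "role": "assistant"},
    ]
```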
Understanding deep learning requires rethinking generalization | Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models. | This paper presents a set of experiments where via clever use of randomization and noise addition authors demonstrate the enormous modeling (memorization) power of the deep neural networks with large enough capacity. Yet these same models have very good generalization behavior even when all obvious explicit or implicit regularizers are removed. These observations are used to argue that classical theory (VC dimension, Rademacher complexity, uniform stability) is not able to explain the generalization behavior of deep neural networks, necessitating novel theory. This is a very interesting and thought provoking body of work and I am in complete accord with the observations and conclusions of the paper. The classical generalization theory is indeed often at a loss with complex enough model families. As the authors point out, once model families reach a point where they have capacity to memorize train sets, the classical theory does not yield useful results that could give insight into generalization behavior of these models, leaving one to empirical studies and observations. A minor clarification comment: On page 2, “ … true labels were replaced by random labels.” Please state that random labels came from the same set as the true labels, to clarify the experiment. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nUnderstanding deep learning requires rethinking generalization\nDespite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents a set of experiments where via clever use of randomization and noise addition authors demonstrate the enormous modeling (memorization) power of the deep neural networks with large enough capacity. Yet these same models have very good generalization behavior even when all obvious explicit or implicit regularizers are removed. These observations are used to argue that classical theory (VC dimension, Rademacher complexity, uniform stability) is not able to explain the generalization behavior of deep neural networks, necessitating novel theory. This is a very interesting and thought provoking body of work and I am in complete accord with the observations and conclusions of the paper. The classical generalization theory is indeed often at a loss with complex enough model families. As the authors point out, once model families reach a point where they have capacity to memorize train sets, the classical theory does not yield useful results that could give insight into generalization behavior of these models, leaving one to empirical studies and observations. A minor clarification comment: On page 2, “ … true labels were replaced by random labels.” Please state that random labels came from the same set as the true labels, to clarify the experiment.",
"role": "assistant"
}
]
| [
"This paper presents a set of experiments where via clever use of randomization and noise addition authors demonstrate the enormous modeling (memorization) power of the deep neural networks with large enough capacity.",
"Yet these same models have very good generalization behavior even when all obvious explicit or implicit regularizers are removed.",
"These observations are used to argue that classical theory (VC dimension, Rademacher complexity, uniform stability) is not able to explain the generalization behavior of deep neural networks, necessitating novel theory.",
"This is a very interesting and thought provoking body of work and I am in complete accord with the observations and conclusions of the paper.",
"The classical generalization theory is indeed often at a loss with complex enough model families.",
"As the authors point out, once model families reach a point where they have capacity to memorize train sets, the classical theory does not yield useful results that could give insight into generalization behavior of these models, leaving one to empirical studies and observations.",
"A minor clarification comment: On page 2, “ … true labels were replaced by random labels.” Please state that random labels came from the same set as the true labels, to clarify the experiment."
] | {
"criticism": 1,
"example": 1,
"importance_and_relevance": 3,
"materials_and_methods": 5,
"praise": 3,
"presentation_and_reporting": 2,
"results_and_discussion": 4,
"suggestion_and_solution": 1,
"total": 7
} | 0.142857 | 0.142857 | 0.428571 | 0.714286 | 0.428571 | 0.285714 | 0.571429 | 0.142857 | 7 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.857143 | 2.288975 | 0.568167 |
Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a wide variety of simulated domains. However, a major obstacle facing deep RL in the real world is their high sample complexity. Batch policy gradient methods offer stable learning, but at the cost of high variance, which often requires large batches. TD-style methods, such as off-policy actor-critic and Q-learning, are more sample-efficient but biased, and often require costly hyperparameter sweeps to stabilize. In this work, we aim to develop methods that combine the stability of policy gradients with the efficiency of off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor expansion of the off-policy critic as a control variate. Q-Prop is both sample efficient and stable, and effectively combines the benefits of on-policy and off-policy methods. We analyze the connection between Q-Prop and existing model-free algorithms, and use control variate theory to derive two variants of Q-Prop with conservative and aggressive adaptation. We show that conservative Q-Prop provides substantial gains in sample efficiency over trust region policy optimization (TRPO) with generalized advantage estimation (GAE), and improves stability over deep deterministic policy gradient (DDPG), the state-of-the-art on-policy and off-policy methods, on OpenAI Gym*s MuJoCo continuous control environments. | The paper proposes using a first-order Taylor expansion as a control variate in policy gradient-style methods. Empirical results in dynamical control tasks suggest that this algorithm reduces the sample complexity, while the theoretical results presented suggest the algorithm is unbiased but of lower variance. The use of control variates is very important and the present paper is an interesting approach in this direction. I am not fully convinced of the approach, because it is one of many possible, and the theoretical analysis relies on an approximation of the variance rather than exact calculations, which makes it less compelling. However, this paper is a step in the right direction so it is worth accepting. In the experiments, a few things need to be discussed further: - What is the running time of the proposed approach? The computation of the extra terms required looks like it could be expensive. running time comparison in addition to sample comparison should be included - The sensitivity to parameter settings of the proposed algorithm needs to be illustrated in separate graphs, since this is one of the main claims in the paper - It would be nice to have a toy example included in which one can actually compute exact values and plot learning curves to compare more directly bias and variance. It would especially be nice to do this with a task that includes rare states, which is the case in which variance of other methods (eg importance sampling) really becomes significant. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? 
Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nQ-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic\nModel-free deep reinforcement learning (RL) methods have been successful in a wide variety of simulated domains. However, a major obstacle facing deep RL in the real world is their high sample complexity. Batch policy gradient methods offer stable learning, but at the cost of high variance, which often requires large batches. TD-style methods, such as off-policy actor-critic and Q-learning, are more sample-efficient but biased, and often require costly hyperparameter sweeps to stabilize. In this work, we aim to develop methods that combine the stability of policy gradients with the efficiency of off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor expansion of the off-policy critic as a control variate. Q-Prop is both sample efficient and stable, and effectively combines the benefits of on-policy and off-policy methods. We analyze the connection between Q-Prop and existing model-free algorithms, and use control variate theory to derive two variants of Q-Prop with conservative and aggressive adaptation. We show that conservative Q-Prop provides substantial gains in sample efficiency over trust region policy optimization (TRPO) with generalized advantage estimation (GAE), and improves stability over deep deterministic policy gradient (DDPG), the state-of-the-art on-policy and off-policy methods, on OpenAI Gym*s MuJoCo continuous control environments.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes using a first-order Taylor expansion as a control variate in policy gradient-style methods. Empirical results in dynamical control tasks suggest that this algorithm reduces the sample complexity, while the theoretical results presented suggest the algorithm is unbiased but of lower variance. The use of control variates is very important and the present paper is an interesting approach in this direction. I am not fully convinced of the approach, because it is one of many possible, and the theoretical analysis relies on an approximation of the variance rather than exact calculations, which makes it less compelling. However, this paper is a step in the right direction so it is worth accepting. In the experiments, a few things need to be discussed further: - What is the running time of the proposed approach? The computation of the extra terms required looks like it could be expensive. running time comparison in addition to sample comparison should be included - The sensitivity to parameter settings of the proposed algorithm needs to be illustrated in separate graphs, since this is one of the main claims in the paper - It would be nice to have a toy example included in which one can actually compute exact values and plot learning curves to compare more directly bias and variance. It would especially be nice to do this with a task that includes rare states, which is the case in which variance of other methods (eg importance sampling) really becomes significant.",
"role": "assistant"
}
]
| [
"The paper proposes using a first-order Taylor expansion as a control variate in policy gradient-style methods.",
"Empirical results in dynamical control tasks suggest that this algorithm reduces the sample complexity, while the theoretical results presented suggest the algorithm is unbiased but of lower variance.",
"The use of control variates is very important and the present paper is an interesting approach in this direction.",
"I am not fully convinced of the approach, because it is one of many possible, and the theoretical analysis relies on an approximation of the variance rather than exact calculations, which makes it less compelling.",
"However, this paper is a step in the right direction so it is worth accepting.",
"In the experiments, a few things need to be discussed further: - What is the running time of the proposed approach?",
"The computation of the extra terms required looks like it could be expensive.",
"running time comparison in addition to sample comparison should be included - The sensitivity to parameter settings of the proposed algorithm needs to be illustrated in separate graphs, since this is one of the main claims in the paper - It would be nice to have a toy example included in which one can actually compute exact values and plot learning curves to compare more directly bias and variance.",
"It would especially be nice to do this with a task that includes rare states, which is the case in which variance of other methods (eg importance sampling) really becomes significant."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 8,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 5,
"suggestion_and_solution": 4,
"total": 9
} | 0.111111 | 0 | 0.111111 | 0.888889 | 0.111111 | 0.222222 | 0.555556 | 0.444444 | 9 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.444444 | 2.191926 | 0.252519 |
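A note on the numeric tail of each row: the float columns appear to be derived from the criteria_count dict, with each float equal to that category's count divided by total, each has_* flag marking a nonzero count, reward_value equal to the sum of the eight fractions, and reward_value_length_adjusted equal to reward_value minus length_penalty. A minimal sketch of that derivation (the category names come from the criteria_count keys above; the length-penalty formula itself is not recoverable from this dump, so it is passed in as a given):

```python
# Sketch: recomputing the derived columns of a row from its criteria_count dict.
CATEGORIES = [
    "criticism", "example", "importance_and_relevance", "materials_and_methods",
    "praise", "presentation_and_reporting", "results_and_discussion",
    "suggestion_and_solution",
]

def derive_columns(criteria_count: dict, length_penalty: float) -> dict:
    total = criteria_count["total"]
    fractions = {c: criteria_count[c] / total for c in CATEGORIES}
    has_flags = {f"has_{c}": int(criteria_count[c] > 0) for c in CATEGORIES}
    reward_value = sum(fractions.values())
    return {
        **fractions,
        **has_flags,
        "reward_value": reward_value,
        "reward_value_length_adjusted": reward_value - length_penalty,
    }

# The Q-Prop row above checks out: criticism 1/9 = 0.111111,
# materials_and_methods 8/9 = 0.888889, the fractions sum to 2.444444,
# and 2.444444 - 0.252519 = 2.191926 matches reward_value_length_adjusted.
row = derive_columns(
    {"criticism": 1, "example": 0, "importance_and_relevance": 1,
     "materials_and_methods": 8, "praise": 1, "presentation_and_reporting": 2,
     "results_and_discussion": 5, "suggestion_and_solution": 4, "total": 9},
    length_penalty=0.252519,
)
print(round(row["reward_value"], 6))  # 2.444444
```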
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms of choice for many Deep Learning tasks. These methods operate in a small-batch regime wherein a fraction of the training data, say -- data points, is sampled to compute an approximation to the gradient. It has been observed in practice that when using a larger batch there is a degradation in the quality of the model, as measured by its ability to generalize. We investigate the cause for this generalization drop in the large-batch regime and present numerical evidence that supports the view that large-batch methods tend to converge to sharp minimizers of the training and testing functions---and as is well known, sharp minima lead to poorer generalization. In contrast, small-batch methods consistently converge to flat minimizers, and our experiments support a commonly held view that this is due to the inherent noise in the gradient estimation. We discuss several strategies to attempt to help large-batch methods eliminate this generalization gap. | Interesting paper, definitely provides value to the community by discussing why large batch gradient descent does not work too well | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOn Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima\nThe stochastic gradient descent (SGD) method and its variants are algorithms of choice for many Deep Learning tasks. These methods operate in a small-batch regime wherein a fraction of the training data, say -- data points, is sampled to compute an approximation to the gradient. It has been observed in practice that when using a larger batch there is a degradation in the quality of the model, as measured by its ability to generalize. We investigate the cause for this generalization drop in the large-batch regime and present numerical evidence that supports the view that large-batch methods tend to converge to sharp minimizers of the training and testing functions---and as is well known, sharp minima lead to poorer generalization. In contrast, small-batch methods consistently converge to flat minimizers, and our experiments support a commonly held view that this is due to the inherent noise in the gradient estimation. We discuss several strategies to attempt to help large-batch methods eliminate this generalization gap.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nInteresting paper, definitely provides value to the community by discussing why large batch gradient descent does not work too well",
"role": "assistant"
}
]
| [
"Interesting paper, definitely provides value to the community by discussing why large batch gradient descent does not work too well"
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 1,
"praise": 1,
"presentation_and_reporting": 0,
"results_and_discussion": 0,
"suggestion_and_solution": 0,
"total": 1
} | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 3 | 0.72733 | 2.27267 |
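The sharp-versus-flat distinction in the abstract above lends itself to a quick illustration: one crude proxy for sharpness is how much the loss rises under small random parameter perturbations. The sketch below is illustrative only and is not the paper's exact metric:

```python
# Sketch: a flat-vs-sharp probe -- measure the worst-case loss rise under
# small random perturbations of the parameters. Radius and sample count
# are illustrative choices.
import numpy as np

def sharpness_proxy(loss_fn, params, radius=1e-2, n_samples=20, rng=None):
    rng = rng or np.random.default_rng(0)
    base = loss_fn(params)
    rises = []
    for _ in range(n_samples):
        direction = rng.normal(size=params.shape)
        direction *= radius / np.linalg.norm(direction)
        rises.append(loss_fn(params + direction) - base)
    return max(rises)  # a sharp minimum shows a large worst-case rise

# Toy usage: a narrow parabola reads as "sharper" than a wide one.
sharp = sharpness_proxy(lambda p: 100 * (p ** 2).sum(), np.zeros(10))
flat = sharpness_proxy(lambda p: 0.1 * (p ** 2).sum(), np.zeros(10))
assert sharp > flat
```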
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data | Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, we demonstrate a generally applicable approach to providing strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE). The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. Because they rely directly on sensitive data, these models are not published, but instead used as 'teachers' for a 'student' model. The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters. The student's privacy properties can be understood both intuitively (since no single teacher and thus no single dataset dictates the student's training) and formally, in terms of differential privacy. These properties hold even if an adversary can not only query the student but also inspect its internal workings. Compared with previous work, the approach imposes only weak assumptions on how teachers are trained: it applies to any model, including non-convex models like DNNs. We achieve state-of-the-art privacy/utility trade-offs on MNIST and SVHN thanks to an improved privacy analysis and semi-supervised learning. | This paper addresses the problem of achieving differential privacy in a very general scenario where a set of teachers is trained on disjoint subsets of sensitive data and the student performs prediction based on public data labeled by teachers through noisy voting. I found the approach altogether plausible and very clearly explained by the authors. Adding more discussion of the bound (and its tightness) from Theorem 1 itself would be appreciated. A simple idea of adding perturbation error to the counts, known from differentially-private literature, is nicely re-used by the authors and elegantly applied in a much broader (non-convex setting) and practical context than in a number of differentially-private and other related papers. The generality of the approach, clear improvement over predecessors, and clarity of the writing makes the method worth publishing. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. 
Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSemi-supervised Knowledge Transfer for Deep Learning from Private Training Data\nSome machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, we demonstrate a generally applicable approach to providing strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE). The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. Because they rely directly on sensitive data, these models are not published, but instead used as **teachers** for a **student** model. The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters. The student*s privacy properties can be understood both intuitively (since no single teacher and thus no single dataset dictates the student*s training) and formally, in terms of differential privacy. These properties hold even if an adversary can not only query the student but also inspect its internal workings. Compared with previous work, the approach imposes only weak assumptions on how teachers are trained: it applies to any model, including non-convex models like DNNs. We achieve state-of-the-art privacy/utility trade-offs on MNIST and SVHN thanks to an improved privacy analysis and semi-supervised learning.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper addresses the problem of achieving differential privacy in a very general scenario where a set of teachers is trained on disjoint subsets of sensitive data and the student performs prediction based on public data labeled by teachers through noisy voting. I found the approach altogether plausible and very clearly explained by the authors. Adding more discussion of the bound (and its tightness) from Theorem 1 itself would be appreciated. A simple idea of adding perturbation error to the counts, known from differentially-private literature, is nicely re-used by the authors and elegantly applied in a much broader (non-convex setting) and practical context than in a number of differentially-private and other related papers. The generality of the approach, clear improvement over predecessors, and clarity of the writing makes the method worth publishing.",
"role": "assistant"
}
]
| [
"This paper addresses the problem of achieving differential privacy in a very general scenario where a set of teachers is trained on disjoint subsets of sensitive data and the student performs prediction based on public data labeled by teachers through noisy voting.",
"I found the approach altogether plausible and very clearly explained by the authors.",
"Adding more discussion of the bound (and its tightness) from Theorem 1 itself would be appreciated.",
"A simple idea of adding perturbation error to the counts, known from differentially-private literature, is nicely re-used by the authors and elegantly applied in a much broader (non-convex setting) and practical context than in a number of differentially-private and other related papers.",
"The generality of the approach, clear improvement over predecessors, and clarity of the writing makes the method worth publishing."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 3,
"praise": 3,
"presentation_and_reporting": 3,
"results_and_discussion": 1,
"suggestion_and_solution": 2,
"total": 5
} | 0 | 0 | 0.6 | 0.6 | 0.6 | 0.6 | 0.2 | 0.4 | 5 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 3 | 1.989924 | 1.010076 |
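The noisy-voting mechanism in the PATE abstract above reduces, roughly, to a short routine: count per-class teacher votes, add Laplace noise, take the argmax. A hedged sketch, with the noise scale gamma as an illustrative parameter rather than the paper's calibrated value:

```python
# Sketch of noisy teacher-vote aggregation: the student's label is the argmax
# of per-class vote counts after Laplace noise is added.
import numpy as np

def noisy_aggregate(teacher_predictions, n_classes, gamma=0.05, rng=None):
    rng = rng or np.random.default_rng(0)
    counts = np.bincount(teacher_predictions, minlength=n_classes)
    noisy_counts = counts + rng.laplace(scale=1.0 / gamma, size=n_classes)
    return int(np.argmax(noisy_counts))

# 250 teachers voting over 10 classes; no single teacher's data decides the label.
votes = np.array([3] * 180 + [7] * 70)
print(noisy_aggregate(votes, n_classes=10))  # usually 3
```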
Amortised MAP Inference for Image Super-resolution | Image super-resolution (SR) is an underdetermined inverse problem, where a large number of plausible high resolution images can explain the same downsampled image. Most current single image SR methods use empirical risk minimisation, often with a pixel-wise mean squared error (MSE) loss. However, the outputs from such methods tend to be blurry, over-smoothed and generally appear implausible. A more desirable approach would employ Maximum a Posteriori (MAP) inference, preferring solutions that always have a high probability under the image prior, and thus appear more plausible. Direct MAP estimation for SR is non-trivial, as it requires us to build a model for the image prior from samples. Here we introduce new methods for amortised MAP inference whereby we calculate the MAP estimate directly using a convolutional neural network. We first introduce a novel neural network architecture that performs a projection to the affine subspace of valid SR solutions ensuring that the high resolution output of the network is always consistent with the low resolution input. We show that, using this architecture, the amortised MAP inference problem reduces to minimising the cross-entropy between two distributions, similar to training generative models. We propose three methods to solve this optimisation problem: (1) Generative Adversarial Networks (GAN) (2) denoiser-guided SR which backpropagates gradient-estimates from denoising to train the network, and (3) a baseline method using a maximum-likelihood-trained image prior. Our experiments show that the GAN based approach performs best on real image data. Lastly, we establish a connection between GANs and amortised variational inference as in e.g. variational autoencoders. | The paper presents an amortised MAP estimation method for SR problems. By learning a neural network which learns to project to an affine subspace of SR solutions which are consistent with the LR input, the method enables finding proper solutions by using a variety of methods: GANs, noise assisted and density assisted optimisation. Results are nicely demonstrated on several datasets. I like the paper all in all, though I feel the writing can be polished by quite a bit and presentation should be made clearer. It was hard to follow at times and considering the subject matter is quite complicated making it clearer would help. Also, I would love to see some more analysis of the resulting networks - what kind of features do they learn? | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAmortised MAP Inference for Image Super-resolution\nImage super-resolution (SR) is an underdetermined inverse problem, where a large number of plausible high resolution images can explain the same downsampled image. Most current single image SR methods use empirical risk minimisation, often with a pixel-wise mean squared error (MSE) loss. However, the outputs from such methods tend to be blurry, over-smoothed and generally appear implausible. A more desirable approach would employ Maximum a Posteriori (MAP) inference, preferring solutions that always have a high probability under the image prior, and thus appear more plausible. Direct MAP estimation for SR is non-trivial, as it requires us to build a model for the image prior from samples. Here we introduce new methods for emph{amortised MAP inference} whereby we calculate the MAP estimate directly using a convolutional neural network. We first introduce a novel neural network architecture that performs a projection to the affine subspace of valid SR solutions ensuring that the high resolution output of the network is always consistent with the low resolution input. We show that, using this architecture, the amortised MAP inference problem reduces to minimising the cross-entropy between two distributions, similar to training generative models. We propose three methods to solve this optimisation problem: (1) Generative Adversarial Networks (GAN) (2) denoiser-guided SR which backpropagates gradient-estimates from denoising to train the network, and (3) a baseline method using a maximum-likelihood-trained image prior. Our experiments show that the GAN based approach performs best on real image data. Lastly, we establish a connection between GANs and amortised variational inference as in e.g. variational autoencoders.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper presents an amortised MAP estimation method for SR problems. By learning a neural network which learns to project to an affine subspace of SR solutions which are consistent with the LR method the method enables finding propoer solutions with by using a variety of methods: GANs, noise assisted and density assisted optimisation. Results are nicely demonstrated on several datasets. I like the paper all in all, though I feel the writing can be polished by quite a bit and presentation should be made clearer. It was hard to follow at times and considering the subject matter is quite complicated making it clearer would help. Also, I would love to see some more analysis of the resulting the networks - what kind of features to they learn?",
"role": "assistant"
}
]
| [
"The paper presents an amortised MAP estimation method for SR problems.",
"By learning a neural network which learns to project to an affine subspace of SR solutions which are consistent with the LR method the method enables finding propoer solutions with by using a variety of methods: GANs, noise assisted and density assisted optimisation.",
"Results are nicely demonstrated on several datasets.",
"I like the paper all in all, though I feel the writing can be polished by quite a bit and presentation should be made clearer.",
"It was hard to follow at times and considering the subject matter is quite complicated making it clearer would help.",
"Also, I would love to see some more analysis of the resulting the networks - what kind of features to they learn?"
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 0,
"materials_and_methods": 4,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 2,
"suggestion_and_solution": 3,
"total": 6
} | 0.166667 | 0 | 0 | 0.666667 | 0.166667 | 0.333333 | 0.333333 | 0.5 | 6 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 2.166667 | 1.393328 | 0.773339 |
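The affine-subspace projection mentioned in the abstract above can be made concrete for a simple downsampler: correct any candidate high-resolution image by the upsampled residual so that downsampling it reproduces the low-resolution input exactly. The average-pooling operator below is an illustrative stand-in (its pseudo-inverse happens to be block replication), not the paper's exact architecture:

```python
# Sketch: project a candidate HR image onto the affine subspace of images
# that downsample exactly to the observed LR input.
import numpy as np

def downsample(x, f=2):                       # average pooling by factor f
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample(y, f=2):                         # pseudo-inverse of average pooling
    return np.kron(y, np.ones((f, f)))

def project_to_valid(x_hr, y_lr, f=2):
    # Replace the component of x_hr that disagrees with y_lr.
    return x_hr + upsample(y_lr - downsample(x_hr, f), f)

x = np.random.rand(8, 8)                      # any network output
y = np.random.rand(4, 4)                      # the low-resolution observation
x_valid = project_to_valid(x, y)
assert np.allclose(downsample(x_valid), y)    # output is consistent with input
```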
Stochastic Neural Networks for Hierarchical Reinforcement Learning | Deep reinforcement learning has achieved many impressive results in recent years. However, tasks with sparse rewards or long horizons continue to pose significant challenges. To tackle these important problems, we propose a general framework that first learns useful skills in a pre-training environment, and then leverages the acquired skills for learning faster in downstream tasks. Our approach brings together some of the strengths of intrinsic motivation and hierarchical methods: the learning of useful skill is guided by a single proxy reward, the design of which requires very minimal domain knowledge about the downstream tasks. Then a high-level policy is trained on top of these skills, providing a significant improvement of the exploration and allowing to tackle sparse rewards in the downstream tasks. To efficiently pre-train a large span of skills, we use Stochastic Neural Networks combined with an information-theoretic regularizer. Our experiments show that this combination is effective in learning a wide span of interpretable skills in a sample-efficient way, and can significantly boost the learning performance uniformly across a wide range of downstream tasks. | Interesting work on hierarchical control, similar to the work of Heess et al. Experiments are strong and manage to complete benchmarks that previous work could not. Analysis of the experiments is a bit on the weaker side. (1) Like other reviewers, I find the use of the term ‘intrinsic’ motivation somewhat inappropriate (mostly because of its current meaning in RL). Pre-training robots with locomotion by rewarding speed (or rewarding grasping for a manipulating arm) is very geared towards the tasks they will later accomplish. The pre-training tasks from Heess et al., while not identical, are similar. (2) The Mutual Information regularization is elegant and works generally well, but does not seem to help in the more complex mazes 1,2 and 3. The authors note this - is there any interpretation or analysis for this result? (3) The factorization between S_agent and S_rest should be clearly detailed in the paper. Duan et al specify S_agent, but for replicability, S_rest should be clearly specified as well - did I miss it? (4) It would be interesting to provide some analysis of the switching behavior of the agent. More generally, some further analysis of the policies (failure modes, effects of switching time on performance) would have been welcome. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nStochastic Neural Networks for Hierarchical Reinforcement Learning\nDeep reinforcement learning has achieved many impressive results in recent years. However, tasks with sparse rewards or long horizons continue to pose significant challenges. To tackle these important problems, we propose a general framework that first learns useful skills in a pre-training environment, and then leverages the acquired skills for learning faster in downstream tasks. Our approach brings together some of the strengths of intrinsic motivation and hierarchical methods: the learning of useful skill is guided by a single proxy reward, the design of which requires very minimal domain knowledge about the downstream tasks. Then a high-level policy is trained on top of these skills, providing a significant improvement of the exploration and allowing to tackle sparse rewards in the downstream tasks. To efficiently pre-train a large span of skills, we use Stochastic Neural Networks combined with an information-theoretic regularizer. Our experiments show that this combination is effective in learning a wide span of interpretable skills in a sample-efficient way, and can significantly boost the learning performance uniformly across a wide range of downstream tasks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nInteresting work on hierarchical control, similar to the work of Heess et al. Experiments are strong and manage to complete benchmarks that previous work could not. Analysis of the experiments is a bit on the weaker side. (1) Like other reviewers, I find the use of the term ‘intrinsic’ motivation somewhat inappropriate (mostly because of its current meaning in RL). Pre-training robots with locomotion by rewarding speed (or rewarding grasping for a manipulating arm) is very geared towards the tasks they will later accomplish. The pre-training tasks from Heess et al., while not identical, are similar. (2) The Mutual Information regularization is elegant and works generally well, but does not seem to help in the more complex mazes 1,2 and 3. The authors note this - is there any interpretation or analysis for this result? (3) The factorization between S_agent and S_rest should be clearly detailed in the paper. Duan et al specify S_agent, but for replicability, S_rest should be clearly specified as well - did I miss it? (4) It would be interesting to provide some analysis of the switching behavior of the agent. More generally, some further analysis of the policies (failure modes, effects of switching time on performance) would have been welcome.",
"role": "assistant"
}
]
| [
"Interesting work on hierarchical control, similar to the work of Heess et al.",
"Experiments are strong and manage to complete benchmarks that previous work could not.",
"Analysis of the experiments is a bit on the weaker side.",
"(1) Like other reviewers, I find the use of the term ‘intrinsic’ motivation somewhat inappropriate (mostly because of its current meaning in RL).",
"Pre-training robots with locomotion by rewarding speed (or rewarding grasping for a manipulating arm) is very geared towards the tasks they will later accomplish.",
"The pre-training tasks from Heess et al., while not identical, are similar.",
"(2) The Mutual Information regularization is elegant and works generally well, but does not seem to help in the more complex mazes 1,2 and 3.",
"The authors note this - is there any interpretation or analysis for this result?",
"(3) The factorization between S_agent and S_rest should be clearly detailed in the paper.",
"Duan et al specify S_agent, but for replicability, S_rest should be clearly specified as well - did I miss it?",
"(4) It would be interesting to provide some analysis of the switching behavior of the agent.",
"More generally, some further analysis of the policies (failure modes, effects of switching time on performance) would have been welcome."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 11,
"praise": 3,
"presentation_and_reporting": 3,
"results_and_discussion": 2,
"suggestion_and_solution": 4,
"total": 12
} | 0.166667 | 0 | 0.166667 | 0.916667 | 0.25 | 0.25 | 0.166667 | 0.333333 | 12 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.25 | 2.234218 | 0.015782 |
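The two-level scheme in the abstract above, a high-level policy committing to one pre-trained skill at a time, can be sketched as a control loop. The switching interval and the env/skill interfaces here are assumptions for illustration, not the paper's exact setup:

```python
# Sketch: a high-level policy picks one of K pre-trained skills and commits
# to it for a fixed number of steps; the chosen skill produces low-level
# actions. `env` is an assumed interface returning (obs, reward, done).
def run_hierarchical_episode(env, high_level_policy, skills, switch_every=50,
                             max_steps=1000):
    obs, total_reward = env.reset(), 0.0
    for t in range(max_steps):
        if t % switch_every == 0:                 # pick a new skill periodically
            skill_id = high_level_policy(obs)     # index into the skill set
        action = skills[skill_id](obs)            # low-level skill acts
        obs, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward
```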
Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the original accuracy by retraining the networks. | This paper prunes entire groups of filters in CNN so that they reduce computational cost and at the same time do not result in sparse connectivity. This result is important to speed up and compress neural networks while being able to use standard fully-connected linear algebra routines. The results are a 10% improvement on ResNet-like networks and ImageNet, which may also be achieved with better design of networks. New networks should also have been compared, but we know this is time-consuming. A good paper with some useful results. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPruning Filters for Efficient ConvNets\nThe success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the original accuracy by retraining the networks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper prunes entire groups of filters in CNN so that they reduce computational cost and at the same time do not result in sparse connectivity. This result is important to speed up and compress neural networks while being able to use standard fully-connected linear algebra routines. The results are a 10% improvements in ResNet-like and ImageNet, which may be also achieved with better design of networks. New networks should have been also compared, but this we know it is time-consuming. A good paper with some useful results.",
"role": "assistant"
}
]
| [
"This paper prunes entire groups of filters in CNN so that they reduce computational cost and at the same time do not result in sparse connectivity.",
"This result is important to speed up and compress neural networks while being able to use standard fully-connected linear algebra routines.",
"The results are a 10% improvements in ResNet-like and ImageNet, which may be also achieved with better design of networks.",
"New networks should have been also compared, but this we know it is time-consuming.",
"A good paper with some useful results."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 2,
"praise": 1,
"presentation_and_reporting": 0,
"results_and_discussion": 4,
"suggestion_and_solution": 3,
"total": 5
} | 0 | 0 | 0.2 | 0.4 | 0.2 | 0 | 0.8 | 0.6 | 5 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.2 | 1.189924 | 1.010076 |
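The filter-pruning idea in the abstract above admits a compact sketch: score each filter by the magnitude of its weights, keep the strongest ones, and drop the matching feature maps so the result stays dense. The L1 criterion below is one common choice and should be read as illustrative:

```python
# Sketch: magnitude-based whole-filter pruning for one conv layer.
import numpy as np

def prune_filters(conv_weights, keep_ratio=0.66):
    # conv_weights: (n_filters, in_channels, kh, kw)
    scores = np.abs(conv_weights).sum(axis=(1, 2, 3))   # L1 norm per filter
    n_keep = max(1, int(len(scores) * keep_ratio))
    keep = np.sort(np.argsort(scores)[-n_keep:])        # indices of kept filters
    # The next layer must drop the matching input channels as well.
    return conv_weights[keep], keep

w = np.random.randn(64, 3, 3, 3)
pruned, kept = prune_filters(w, keep_ratio=0.66)
print(pruned.shape)  # (42, 3, 3, 3) -- still dense, no sparse kernels needed
```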
Multilayer Recurrent Network Models of Primate Retinal Ganglion Cell Responses | Developing accurate predictive models of sensory neurons is vital to understanding sensory processing and brain computations. The current standard approach to modeling neurons is to start with simple models and to incrementally add interpretable features. An alternative approach is to start with a more complex model that captures responses accurately, and then probe the fitted model structure to understand the neural computations. Here, we show that a multitask recurrent neural network (RNN) framework provides the flexibility necessary to model complex computations of neurons that cannot be captured by previous methods. Specifically, multilayer recurrent neural networks that share features across neurons outperform generalized linear models (GLMs) in predicting the spiking responses of parasol ganglion cells in the primate retina to natural images. The networks achieve good predictive performance given surprisingly small amounts of experimental training data. Additionally, we present a novel GLM-RNN hybrid model with separate spatial and temporal processing components which provides insights into the aspects of retinal processing better captured by the recurrent neural networks. | This is a clearly written paper with a nice, if straightforward, result: RNNs can be good predictive models of neuron firing rates in the retina. On the one hand, the primary scientific contribution seems to just be to confirm that this approach works. On this particular stimulus locked task the gains from using the RNN seemed relatively modest, and it hasn't yet taught us anything new about the biology. On the other hand, this (along with the concurrent work of McIntosh et al.) is introducing neural network modeling to a field that isn't currently using it, and where it should prove very effective. I think it would be very interesting to see the results of applying a framework like this one with LFP and other neurons as input and on a shorter discretization time scale. I suspect followup work building on this proof of concept will be increasingly exciting. Minor comments: Sec 3.2: I didn't understand the role of the 0.833 ms bins. Use 'epoch' throughout, rather than alternating between 'epoch' and 'pass through data'. Fig. 4 would be better with the x-axis on a log scale. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMultilayer Recurrent Network Models of Primate Retinal Ganglion Cell Responses\nDeveloping accurate predictive models of sensory neurons is vital to understanding sensory processing and brain computations. The current standard approach to modeling neurons is to start with simple models and to incrementally add interpretable features. An alternative approach is to start with a more complex model that captures responses accurately, and then probe the fitted model structure to understand the neural computations. Here, we show that a multitask recurrent neural network (RNN) framework provides the flexibility necessary to model complex computations of neurons that cannot be captured by previous methods. Specifically, multilayer recurrent neural networks that share features across neurons outperform generalized linear models (GLMs) in predicting the spiking responses of parasol ganglion cells in the primate retina to natural images. The networks achieve good predictive performance given surprisingly small amounts of experimental training data. Additionally, we present a novel GLM-RNN hybrid model with separate spatial and temporal processing components which provides insights into the aspects of retinal processing better captured by the recurrent neural networks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis is a clearly written paper with a nice, if straightforward, result: RNNs can be good predictive models of neuron firing rates in the retina. On the one hand, the primary scientific contribution seems to just be to confirm that this approach works. On this particular stimulus locked task the gains from using the RNN seemed relatively modest, and it hasn*t yet taught us anything new about the biology. On the other hand, this (along with the concurrent work of McIntosh et al.) is introducing neural network modeling to a field that isn*t currently using it, and where it should prove very effective. I think it would be very interesting to see the results of applying a framework like this one with LFP and other neurons as input and on a shorter discretization time scale. I suspect followup work building on this proof of concept will be increasingly exciting. Minor comments: Sec 3.2: I didn*t understand the role of the 0.833 ms bins. Use *epoch* throughout, rather than alternating between *epoch* and *pass through data*. Fig. 4 would be better with the x-axis on a log scale.",
"role": "assistant"
}
]
| [
"This is a clearly written paper with a nice, if straightforward, result: RNNs can be good predictive models of neuron firing rates in the retina.",
"On the one hand, the primary scientific contribution seems to just be to confirm that this approach works.",
"On this particular stimulus locked task the gains from using the RNN seemed relatively modest, and it hasn*t yet taught us anything new about the biology.",
"On the other hand, this (along with the concurrent work of McIntosh et al.)",
"is introducing neural network modeling to a field that isn*t currently using it, and where it should prove very effective.",
"I think it would be very interesting to see the results of applying a framework like this one with LFP and other neurons as input and on a shorter discretization time scale.",
"I suspect followup work building on this proof of concept will be increasingly exciting.",
"Minor comments: Sec 3.2: I didn*t understand the role of the 0.833 ms bins.",
"Use *epoch* throughout, rather than alternating between *epoch* and *pass through data*.",
"Fig.",
"4 would be better with the x-axis on a log scale."
] | {
"criticism": 2,
"example": 2,
"importance_and_relevance": 4,
"materials_and_methods": 7,
"praise": 2,
"presentation_and_reporting": 2,
"results_and_discussion": 2,
"suggestion_and_solution": 4,
"total": 11
} | 0.181818 | 0.181818 | 0.363636 | 0.636364 | 0.181818 | 0.181818 | 0.181818 | 0.363636 | 11 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.272727 | 2.209598 | 0.06313 |
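For context on the GLM baseline the row above compares against: a linear-nonlinear Poisson model maps a stimulus to a firing rate, and spikes are drawn per time bin. The filter and nonlinearity below are illustrative; the 0.833 ms bin width is the one mentioned in the review:

```python
# Sketch: a linear-nonlinear Poisson (GLM-style) spike model with an
# exponential nonlinearity. All parameter values are illustrative.
import numpy as np

def glm_rate(stimulus_window, spatial_filter, bias=0.0):
    # exponential nonlinearity on a linear projection of the stimulus
    return np.exp(np.dot(spatial_filter.ravel(), stimulus_window.ravel()) + bias)

def sample_spikes(rate_hz, bin_size_ms=0.833, rng=None):
    rng = rng or np.random.default_rng(0)
    # expected count per bin = rate (Hz) * bin width (s)
    return rng.poisson(rate_hz * bin_size_ms / 1000.0)

stim = np.random.randn(16, 16)
k = np.random.randn(16, 16) * 0.01
print(sample_spikes(glm_rate(stim, k)))  # typically 0 in sub-millisecond bins
```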
Efficient Vector Representation for Documents through Corruption | We present an efficient document representation learning framework, Document Vector through Corruption (Doc2VecC). Doc2VecC represents each document as a simple average of word embeddings. It ensures a representation generated as such captures the semantic meanings of the document during learning. A corruption model is included, which introduces a data-dependent regularization that favors informative or rare words while forcing the embeddings of common and non-discriminative ones to be close to zero. Doc2VecC produces significantly better word embeddings than Word2Vec. We compare Doc2VecC with several state-of-the-art document representation learning algorithms. The simple model architecture introduced by Doc2VecC matches or out-performs the state-of-the-art in generating high-quality document representations for sentiment analysis, document classification as well as semantic relatedness tasks. The simplicity of the model enables training on billions of words per hour on a single machine. At the same time, the model is very efficient in generating representations of unseen documents at test time. | This paper discusses a method for computing vector representations for documents by using a skip-gram style learning mechanism with an added regularizer in the form of a global context vector with various bits of drop out. While none of the individual components proposed in this paper are new, I believe that the combination in this fashion is. Further, I appreciated the detailed analysis of model behaviour in section 3. The main downside to this submission is in its relative weakness on the empirical front. Arguably there are more interesting tasks than sentiment analysis and k-way classification! Likewise, why waste 2/3 of a page on t-sne projections rather than use that space for further analysis? While I am a bit disappointed by this reduced evaluation and agree with the other reviewers concerning soft baselines, I think this paper should be accepted: it's an interesting algorithm, nicely composed and very efficient, so it's reasonable to assume that other readers might have use for some of the ideas presented here. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEfficient Vector Representation for Documents through Corruption\nWe present an efficient document representation learning framework, Document Vector through Corruption (Doc2VecC). Doc2VecC represents each document as a simple average of word embeddings. It ensures a representation generated as such captures the semantic meanings of the document during learning. A corruption model is included, which introduces a data-dependent regularization that favors informative or rare words while forcing the embeddings of common and non-discriminative ones to be close to zero. Doc2VecC produces significantly better word embeddings than Word2Vec. We compare Doc2VecC with several state-of-the-art document representation learning algorithms. The simple model architecture introduced by Doc2VecC matches or out-performs the state-of-the-art in generating high-quality document representations for sentiment analysis, document classification as well as semantic relatedness tasks. The simplicity of the model enables training on billions of words per hour on a single machine. At the same time, the model is very efficient in generating representations of unseen documents at test time.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper discusses a method for computing vector representations for documents by using a skip-gram style learning mechanism with an added regularizer in the form of a global context vector with various bits of drop out. While none of the individual components proposed in this paper are new, I believe that the combination in this fashion is. Further, I appreciated the detailed analysis of model behaviour in section 3. The main downside to this submission is in its relative weakness on the empirical front. Arguably there are more interesting tasks than sentiment analysis and k-way classification! Likewise, why waste 2/3 of a page on t-sne projections rather than use that space for further analysis? While I am a bit disappointed by this reduced evaluation and agree with the other reviewers concerning soft baselines, I think this paper should be accepted: it*s an interesting algorithm, nicely composed and very efficient, so it*s reasonable to assume that other readers might have use for some of the ideas presented here.",
"role": "assistant"
}
]
| [
"This paper discusses a method for computing vector representations for documents by using a skip-gram style learning mechanism with an added regularizer in the form of a global context vector with various bits of drop out.",
"While none of the individual components proposed in this paper are new, I believe that the combination in this fashion is.",
"Further, I appreciated the detailed analysis of model behaviour in section 3.",
"The main downside to this submission is in its relative weakness on the empirical front.",
"Arguably there are more interesting tasks than sentiment analysis and k-way classification!",
"Likewise, why waste 2/3 of a page on t-sne projections rather than use that space for further analysis?",
"While I am a bit disappointed by this reduced evaluation and agree with the other reviewers concerning soft baselines, I think this paper should be accepted: it*s an interesting algorithm, nicely composed and very efficient, so it*s reasonable to assume that other readers might have use for some of the ideas presented here."
] | {
"criticism": 1,
"example": 1,
"importance_and_relevance": 2,
"materials_and_methods": 6,
"praise": 3,
"presentation_and_reporting": 1,
"results_and_discussion": 0,
"suggestion_and_solution": 2,
"total": 7
} | 0.142857 | 0.142857 | 0.285714 | 0.857143 | 0.428571 | 0.142857 | 0 | 0.285714 | 7 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 2.285714 | 1.717547 | 0.568167 |
Efficient Vector Representation for Documents through Corruption | We present an efficient document representation learning framework, Document Vector through Corruption (Doc2VecC). Doc2VecC represents each document as a simple average of word embeddings. It ensures a representation generated as such captures the semantic meanings of the document during learning. A corruption model is included, which introduces a data-dependent regularization that favors informative or rare words while forcing the embeddings of common and non-discriminative ones to be close to zero. Doc2VecC produces significantly better word embeddings than Word2Vec. We compare Doc2VecC with several state-of-the-art document representation learning algorithms. The simple model architecture introduced by Doc2VecC matches or out-performs the state-of-the-art in generating high-quality document representations for sentiment analysis, document classification as well as semantic relatedness tasks. The simplicity of the model enables training on billions of words per hour on a single machine. At the same time, the model is very efficient in generating representations of unseen documents at test time. | This paper proposes learning document embeddings as a sum of the constituent word embeddings, which are jointly learned and randomly dropped out (*corrupted*) during training. While none of the pieces of this model are particularly novel, the result is an efficient learning algorithm for document representation with good empirical performance. Joint training of word and document embeddings is not a new idea, nor is the idea of enforcing the document to be represented by the sum of its word embeddings (see, e.g. *“The Sum of Its Parts”: Joint Learning of Word and Phrase Representations with Autoencoders* by Lebret and Collobert). Furthermore, the corruption mechanism is nothing other than traditional dropout on the input layer. Coupled with the word2vec-style loss and training methods, this paper offers little on the novelty front. On the other hand, it is very efficient at generation time, requiring only an average of the word embeddings rather than a complicated inference step as in Doc2Vec. Moreover, by construction, the embedding captures salient global information about the document -- it captures specifically that information that aids in local-context prediction. For such a simple model, the performance on sentiment analysis and document classification is quite encouraging. Overall, despite the lack of novelty, the simplicity, efficiency, and performance of this model make it worthy of wider readership and study, and I recommend acceptance. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEfficient Vector Representation for Documents through Corruption\nWe present an efficient document representation learning framework, Document Vector through Corruption (Doc2VecC). Doc2VecC represents each document as a simple average of word embeddings. It ensures a representation generated as such captures the semantic meanings of the document during learning. A corruption model is included, which introduces a data-dependent regularization that favors informative or rare words while forcing the embeddings of common and non-discriminative ones to be close to zero. Doc2VecC produces significantly better word embeddings than Word2Vec. We compare Doc2VecC with several state-of-the-art document representation learning algorithms. The simple model architecture introduced by Doc2VecC matches or out-performs the state-of-the-art in generating high-quality document representations for sentiment analysis, document classification as well as semantic relatedness tasks. The simplicity of the model enables training on billions of words per hour on a single machine. At the same time, the model is very efficient in generating representations of unseen documents at test time.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes learning document embeddings as a sum of the constituent word embeddings, which are jointly learned and randomly dropped out (*corrupted*) during training. While none of the pieces of this model are particularly novel, the result is an efficient learning algorithm for document representation with good empirical performance. Joint training of word and document embeddings is not a new idea, nor is the idea of enforcing the document to be represented by the sum of its word embeddings (see, e.g. *“The Sum of Its Parts”: Joint Learning of Word and Phrase Representations with Autoencoders* by Lebret and Collobert). Furthermore, the corruption mechanism is nothing other than traditional dropout on the input layer. Coupled with the word2vec-style loss and training methods, this paper offers little on the novelty front. On the other hand, it is very efficient at generation time, requiring only an average of the word embeddings rather than a complicated inference step as in Doc2Vec. Moreover, by construction, the embedding captures salient global information about the document -- it captures specifically that information that aids in local-context prediction. For such a simple model, the performance on sentiment analysis and document classification is quite encouraging. Overall, despite the lack of novelty, the simplicity, efficiency, and performance of this model make it worthy of wider readership and study, and I recommend acceptance.",
"role": "assistant"
}
]
| [
"This paper proposes learning document embeddings as a sum of the constituent word embeddings, which are jointly learned and randomly dropped out (*corrupted*) during training.",
"While none of the pieces of this model are particularly novel, the result is an efficient learning algorithm for document representation with good empirical performance.",
"Joint training of word and document embeddings is not a new idea, nor is the idea of enforcing the document to be represented by the sum of its word embeddings (see, e.g.",
"*“The Sum of Its Parts”: Joint Learning of Word and Phrase Representations with Autoencoders* by Lebret and Collobert).",
"Furthermore, the corruption mechanism is nothing other than traditional dropout on the input layer.",
"Coupled with the word2vec-style loss and training methods, this paper offers little on the novelty front.",
"On the other hand, it is very efficient at generation time, requiring only an average of the word embeddings rather than a complicated inference step as in Doc2Vec.",
"Moreover, by construction, the embedding captures salient global information about the document -- it captures specifically that information that aids in local-context prediction.",
"For such a simple model, the performance on sentiment analysis and document classification is quite encouraging.",
"Overall, despite the lack of novelty, the simplicity, efficiency, and performance of this model make it worthy of wider readership and study, and I recommend acceptance."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 6,
"praise": 3,
"presentation_and_reporting": 5,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 10
} | 0.3 | 0 | 0.3 | 0.6 | 0.3 | 0.5 | 0.1 | 0.1 | 10 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.2 | 2.057958 | 0.142042 |
Transfer of View-manifold Learning to Similarity Perception of Novel Objects | We develop a model of perceptual similarity judgment based on re-training a deep convolution neural network (DCNN) that learns to associate different views of each 3D object to capture the notion of object persistence and continuity in our visual experience. The re-training process effectively performs distance metric learning under the object persistency constraints, to modify the view-manifold of object representations. It reduces the effective distance between the representations of different views of the same object without compromising the distance between those of the views of different objects, resulting in the untangling of the view-manifolds between individual objects within the same category and across categories. This untangling enables the model to discriminate and recognize objects within the same category, independent of viewpoints. We found that this ability is not limited to the trained objects, but transfers to novel objects in both trained and untrained categories, as well as to a variety of completely novel artificial synthetic objects. This transfer in learning suggests the modification of distance metrics in view- manifolds is more general and abstract, likely at the levels of parts, and independent of the specific objects or categories experienced during training. Interestingly, the resulting transformation of feature representation in the deep networks is found to significantly better match human perceptual similarity judgment than AlexNet, suggesting that object persistence could be an important constraint in the development of perceptual similarity judgment in biological neural networks. | On one hand this paper is fairly standard in that it uses deep metric learning with a Siamese architecture. On the other, the connections to human perception involving persistence is quite interesting. I*m not an expert in human vision, but the comparison in general and the induced hierarchical groupings in particular seem like something that should interest people in this community. The experimental suite is ok but I was disappointed that it is 100% synthetic. The authors could have used a minimally viable real dataset such as ALOI http://aloi.science.uva.nl . In summary, the mechanics of the proposed approach are not new, but the findings about the transfer of similarity judgement to novel object classes are interesting. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. 
I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTransfer of View-manifold Learning to Similarity Perception of Novel Objects\nWe develop a model of perceptual similarity judgment based on re-training a deep convolution neural network (DCNN) that learns to associate different views of each 3D object to capture the notion of object persistence and continuity in our visual experience. The re-training process effectively performs distance metric learning under the object persistency constraints, to modify the view-manifold of object representations. It reduces the effective distance between the representations of different views of the same object without compromising the distance between those of the views of different objects, resulting in the untangling of the view-manifolds between individual objects within the same category and across categories. This untangling enables the model to discriminate and recognize objects within the same category, independent of viewpoints. We found that this ability is not limited to the trained objects, but transfers to novel objects in both trained and untrained categories, as well as to a variety of completely novel artificial synthetic objects. This transfer in learning suggests the modification of distance metrics in view- manifolds is more general and abstract, likely at the levels of parts, and independent of the specific objects or categories experienced during training. Interestingly, the resulting transformation of feature representation in the deep networks is found to significantly better match human perceptual similarity judgment than AlexNet, suggesting that object persistence could be an important constraint in the development of perceptual similarity judgment in biological neural networks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nOn one hand this paper is fairly standard in that it uses deep metric learning with a Siamese architecture. On the other, the connections to human perception involving persistence is quite interesting. I*m not an expert in human vision, but the comparison in general and the induced hierarchical groupings in particular seem like something that should interest people in this community. The experimental suite is ok but I was disappointed that it is 100% synthetic. The authors could have used a minimally viable real dataset such as ALOI http://aloi.science.uva.nl . In summary, the mechanics of the proposed approach are not new, but the findings about the transfer of similarity judgement to novel object classes are interesting.",
"role": "assistant"
}
]
| [
"On one hand this paper is fairly standard in that it uses deep metric learning with a Siamese architecture.",
"On the other, the connections to human perception involving persistence is quite interesting.",
"I*m not an expert in human vision, but the comparison in general and the induced hierarchical groupings in particular seem like something that should interest people in this community.",
"The experimental suite is ok but I was disappointed that it is 100% synthetic.",
"The authors could have used a minimally viable real dataset such as ALOI http://aloi.science.uva.nl .",
"In summary, the mechanics of the proposed approach are not new, but the findings about the transfer of similarity judgement to novel object classes are interesting."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 5,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 6
} | 0.166667 | 0 | 0.5 | 0.833333 | 0.333333 | 0 | 0.166667 | 0.166667 | 6 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.166667 | 1.393328 | 0.773339 |
What does it take to generate natural textures? | Natural image generation is currently one of the most actively explored fields in Deep Learning. Many approaches, e.g. for state-of-the-art artistic style transfer or natural texture synthesis, rely on the statistics of hierarchical representations in supervisedly trained deep neural networks. It is, however, unclear what aspects of this feature representation are crucial for natural image generation: is it the depth, the pooling or the training of the features on natural images? We here address this question for the task of natural texture synthesis and show that none of the above aspects are indispensable. Instead, we demonstrate that natural textures of high perceptual quality can be generated from networks with only a single layer, no pooling and random filters. | This paper provides an interesting analysis of the conditions which enable generation of natural looking textures. The results is quite surprising, and analysis is quite thorough. I do think the evaluation methods require more work, but as other reviewers mentioned this could be an interesting line of work moving forwards and does not take too much from this current paper which, I think, should be accepted. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nWhat does it take to generate natural textures?\nNatural image generation is currently one of the most actively explored fields in Deep Learning. Many approaches, e.g. for state-of-the-art artistic style transfer or natural texture synthesis, rely on the statistics of hierarchical representations in supervisedly trained deep neural networks. It is, however, unclear what aspects of this feature representation are crucial for natural image generation: is it the depth, the pooling or the training of the features on natural images? We here address this question for the task of natural texture synthesis and show that none of the above aspects are indispensable. Instead, we demonstrate that natural textures of high perceptual quality can be generated from networks with only a single layer, no pooling and random filters.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper provides an interesting analysis of the conditions which enable generation of natural looking textures. The results is quite surprising, and analysis is quite thorough. I do think the evaluation methods require more work, but as other reviewers mentioned this could be an interesting line of work moving forwards and does not take too much from this current paper which, I think, should be accepted.",
"role": "assistant"
}
]
| [
"This paper provides an interesting analysis of the conditions which enable generation of natural looking textures.",
"The results is quite surprising, and analysis is quite thorough.",
"I do think the evaluation methods require more work, but as other reviewers mentioned this could be an interesting line of work moving forwards and does not take too much from this current paper which, I think, should be accepted."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 3,
"praise": 3,
"presentation_and_reporting": 0,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 3
} | 0.333333 | 0 | 0.666667 | 1 | 1 | 0 | 0.333333 | 0.333333 | 3 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 3.666667 | 2.088424 | 1.578243 |
What does it take to generate natural textures? | Natural image generation is currently one of the most actively explored fields in Deep Learning. Many approaches, e.g. for state-of-the-art artistic style transfer or natural texture synthesis, rely on the statistics of hierarchical representations in supervisedly trained deep neural networks. It is, however, unclear what aspects of this feature representation are crucial for natural image generation: is it the depth, the pooling or the training of the features on natural images? We here address this question for the task of natural texture synthesis and show that none of the above aspects are indispensable. Instead, we demonstrate that natural textures of high perceptual quality can be generated from networks with only a single layer, no pooling and random filters. | The framework of Gatys et al. demonstrated that correlation statistics (empirical Gram matrices) of deep feature responses provide an excellent characterisation of visual textures. This paper investigates in detail which kind of deep or shallow networks may work well in this framework. One of the main findings is that that very shallow nets, consisting of a single filter bank with random weights, work surprisingly well, and for simple and regular textures may produce results which are visually superior to complex data-adapted filters such as the ones in networks like VGG-19. More broadly, the paper contains an interesting and informative discussion on the strength and limitations on such methods for texture synthesis. Figure 4 shows that the optimisation of images with respect to shallow filter banks may result in texture images that have a lower VGG-19 loss than optimising the VGG-19 objective directly. This is imputed to the difficulty of optimising the highly non-linear VGG-19 cost function, which is a reasonable explanation. In the new supplementary material, the authors show that better optimisation results can be obtained by initialising the VGG-19-based optimisation with the shallow network optimisation results, which is a useful complement to the original experiments. The main limitation of the paper is that it does not systematically compare different methods against a quantifiable objective. It is trivial to define image statistics that would allow to simply generate an exact copy of any reference texture, hence with very good visual quality. Such trivial statistics would also be very shallow. The aim is instead to capture a texture distribution, and measuring how well a method meets this challenge remains an open problem. Hence, while the empirical results seem to confirm the intuition that simple statistics are good enough for texture synthesis both in terms of quality and diversity (when compared to more complex statistics), it is difficult to conclusively confirm that this is the case. The authors indicate that diversity could be measured in terms of entropy. This is reasonable, but, as they acknowledge in their answers to questions, difficult to do in practice. Furthermore, this would still not account for the other aspect of the problem, namely visual quality. They also suggest to perform a psychophysical assessment, which may be the only practical way of addressing this problem, but deem that to be material for future work. Overall, since evaluation of image generation is such an hard problem, I think the paper still has sufficient strengths to warrant publication in ICLR. 
Still, some form of psychophysical assessment would be useful to confirm the intuitions that, at present, can only be obtained by inspecting the figure sin the paper and in the supplementary material. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nWhat does it take to generate natural textures?\nNatural image generation is currently one of the most actively explored fields in Deep Learning. Many approaches, e.g. for state-of-the-art artistic style transfer or natural texture synthesis, rely on the statistics of hierarchical representations in supervisedly trained deep neural networks. It is, however, unclear what aspects of this feature representation are crucial for natural image generation: is it the depth, the pooling or the training of the features on natural images? We here address this question for the task of natural texture synthesis and show that none of the above aspects are indispensable. Instead, we demonstrate that natural textures of high perceptual quality can be generated from networks with only a single layer, no pooling and random filters.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe framework of Gatys et al. demonstrated that correlation statistics (empirical Gram matrices) of deep feature responses provide an excellent characterisation of visual textures. This paper investigates in detail which kind of deep or shallow networks may work well in this framework. One of the main findings is that that very shallow nets, consisting of a single filter bank with random weights, work surprisingly well, and for simple and regular textures may produce results which are visually superior to complex data-adapted filters such as the ones in networks like VGG-19. More broadly, the paper contains an interesting and informative discussion on the strength and limitations on such methods for texture synthesis. Figure 4 shows that the optimisation of images with respect to shallow filter banks may result in texture images that have a lower VGG-19 loss than optimising the VGG-19 objective directly. This is imputed to the difficulty of optimising the highly non-linear VGG-19 cost function, which is a reasonable explanation. In the new supplementary material, the authors show that better optimisation results can be obtained by initialising the VGG-19-based optimisation with the shallow network optimisation results, which is a useful complement to the original experiments. The main limitation of the paper is that it does not systematically compare different methods against a quantifiable objective. It is trivial to define image statistics that would allow to simply generate an exact copy of any reference texture, hence with very good visual quality. Such trivial statistics would also be very shallow. The aim is instead to capture a texture distribution, and measuring how well a method meets this challenge remains an open problem. Hence, while the empirical results seem to confirm the intuition that simple statistics are good enough for texture synthesis both in terms of quality and diversity (when compared to more complex statistics), it is difficult to conclusively confirm that this is the case. The authors indicate that diversity could be measured in terms of entropy. 
This is reasonable, but, as they acknowledge in their answers to questions, difficult to do in practice. Furthermore, this would still not account for the other aspect of the problem, namely visual quality. They also suggest to perform a psychophysical assessment, which may be the only practical way of addressing this problem, but deem that to be material for future work. Overall, since evaluation of image generation is such an hard problem, I think the paper still has sufficient strengths to warrant publication in ICLR. Still, some form of psychophysical assessment would be useful to confirm the intuitions that, at present, can only be obtained by inspecting the figure sin the paper and in the supplementary material.",
"role": "assistant"
}
]
| [
"The framework of Gatys et al.",
"demonstrated that correlation statistics (empirical Gram matrices) of deep feature responses provide an excellent characterisation of visual textures.",
"This paper investigates in detail which kind of deep or shallow networks may work well in this framework.",
"One of the main findings is that that very shallow nets, consisting of a single filter bank with random weights, work surprisingly well, and for simple and regular textures may produce results which are visually superior to complex data-adapted filters such as the ones in networks like VGG-19.",
"More broadly, the paper contains an interesting and informative discussion on the strength and limitations on such methods for texture synthesis.",
"Figure 4 shows that the optimisation of images with respect to shallow filter banks may result in texture images that have a lower VGG-19 loss than optimising the VGG-19 objective directly.",
"This is imputed to the difficulty of optimising the highly non-linear VGG-19 cost function, which is a reasonable explanation.",
"In the new supplementary material, the authors show that better optimisation results can be obtained by initialising the VGG-19-based optimisation with the shallow network optimisation results, which is a useful complement to the original experiments.",
"The main limitation of the paper is that it does not systematically compare different methods against a quantifiable objective.",
"It is trivial to define image statistics that would allow to simply generate an exact copy of any reference texture, hence with very good visual quality.",
"Such trivial statistics would also be very shallow.",
"The aim is instead to capture a texture distribution, and measuring how well a method meets this challenge remains an open problem.",
"Hence, while the empirical results seem to confirm the intuition that simple statistics are good enough for texture synthesis both in terms of quality and diversity (when compared to more complex statistics), it is difficult to conclusively confirm that this is the case.",
"The authors indicate that diversity could be measured in terms of entropy.",
"This is reasonable, but, as they acknowledge in their answers to questions, difficult to do in practice.",
"Furthermore, this would still not account for the other aspect of the problem, namely visual quality.",
"They also suggest to perform a psychophysical assessment, which may be the only practical way of addressing this problem, but deem that to be material for future work.",
"Overall, since evaluation of image generation is such an hard problem, I think the paper still has sufficient strengths to warrant publication in ICLR.",
"Still, some form of psychophysical assessment would be useful to confirm the intuitions that, at present, can only be obtained by inspecting the figure sin the paper and in the supplementary material."
] | {
"criticism": 4,
"example": 1,
"importance_and_relevance": 6,
"materials_and_methods": 16,
"praise": 4,
"presentation_and_reporting": 1,
"results_and_discussion": 7,
"suggestion_and_solution": 4,
"total": 19
} | 0.210526 | 0.052632 | 0.315789 | 0.842105 | 0.210526 | 0.052632 | 0.368421 | 0.210526 | 19 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.263158 | 1.69499 | 0.568167 |
Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards that of higher concentration than that of the data generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data generating distribution during the early phases of training, thus providing a unified solution to the missing modes problem. | This paper does a good job of clearly articulating a problem in contemporary training of GANs, coming up with an intuitive solution via regularizers in addition to optimizing only the discriminator score, and conducting clever experiments to show that the regularizers have the intended effect. There are recent related and improved GAN variants (ALI, VAEGAN, potentially others), which are included in qualitative comparisons, but not quantitative. It would be interesting to see whether these other types of modified GANs already make some progress in addressing the missing modes problem. If code is available for those methods, the paper could be strengthened a lot by running the mode-missing benchmarks on them (even if it turns out that a *competing* method can get a better result in some cases). The experiments on digits and faces are good for validating the proposed regularizers. However, if the authors can show better results on CIFAR-10, ImageNet, MS-COCO or some other more diverse and challenging dataset, I would be more convinced of the value of the proposed method. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMode Regularized Generative Adversarial Networks\nAlthough Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards that of higher concentration than that of the data generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data generating distribution during the early phases of training, thus providing a unified solution to the missing modes problem.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper does a good job of clearly articulating a problem in contemporary training of GANs, coming up with an intuitive solution via regularizers in addition to optimizing only the discriminator score, and conducting clever experiments to show that the regularizers have the intended effect. There are recent related and improved GAN variants (ALI, VAEGAN, potentially others), which are included in qualitative comparisons, but not quantitative. It would be interesting to see whether these other types of modified GANs already make some progress in addressing the missing modes problem. If code is available for those methods, the paper could be strengthened a lot by running the mode-missing benchmarks on them (even if it turns out that a *competing* method can get a better result in some cases). The experiments on digits and faces are good for validating the proposed regularizers. However, if the authors can show better results on CIFAR-10, ImageNet, MS-COCO or some other more diverse and challenging dataset, I would be more convinced of the value of the proposed method.",
"role": "assistant"
}
]
| [
"This paper does a good job of clearly articulating a problem in contemporary training of GANs, coming up with an intuitive solution via regularizers in addition to optimizing only the discriminator score, and conducting clever experiments to show that the regularizers have the intended effect.",
"There are recent related and improved GAN variants (ALI, VAEGAN, potentially others), which are included in qualitative comparisons, but not quantitative.",
"It would be interesting to see whether these other types of modified GANs already make some progress in addressing the missing modes problem.",
"If code is available for those methods, the paper could be strengthened a lot by running the mode-missing benchmarks on them (even if it turns out that a *competing* method can get a better result in some cases).",
"The experiments on digits and faces are good for validating the proposed regularizers.",
"However, if the authors can show better results on CIFAR-10, ImageNet, MS-COCO or some other more diverse and challenging dataset, I would be more convinced of the value of the proposed method."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 0,
"materials_and_methods": 6,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 3,
"suggestion_and_solution": 3,
"total": 6
} | 0 | 0 | 0 | 1 | 0.333333 | 0.166667 | 0.5 | 0.5 | 6 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 2.5 | 1.726661 | 0.773339 |
Highway and Residual Networks learn Unrolled Iterative Estimation | The past year saw the introduction of new architectures such as Highway networks and Residual networks which, for the first time, enabled the training of feedforward networks with dozens to hundreds of layers using simple gradient descent. While depth of representation has been posited as a primary reason for their success, there are indications that these architectures defy a popular view of deep learning as a hierarchical computation of increasingly abstract features at each layer. In this report, we argue that this view is incomplete and does not adequately explain several recent findings. We propose an alternative viewpoint based on unrolled iterative estimation---a group of successive layers iteratively refine their estimates of the same features instead of computing an entirely new representation. We demonstrate that this viewpoint directly leads to the construction of highway and residual networks. Finally we provide preliminary experiments to discuss the similarities and differences between the two architectures. | The paper describes an alternative view on hierarchical feature representations in deep neural networks. The viewpoint of refining representations is well motivated and is in agreement with the success of recent model structures like ResNets. Pros: - Good motivation for the effectiveness of ResNets and Highway networks - Convincing analysis and evaluation Cons: - The effect of this finding of the interpretation of batch-normalization is only captured briefly but seems to be significant - Explanation of findings in (Zeiler & Fergus (2014)) using UIE viewpoint missing Remarks: - Missing word in line 223: *that it *is* valid* | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nHighway and Residual Networks learn Unrolled Iterative Estimation\nThe past year saw the introduction of new architectures such as Highway networks and Residual networks which, for the first time, enabled the training of feedforward networks with dozens to hundreds of layers using simple gradient descent. While depth of representation has been posited as a primary reason for their success, there are indications that these architectures defy a popular view of deep learning as a hierarchical computation of increasingly abstract features at each layer. In this report, we argue that this view is incomplete and does not adequately explain several recent findings. We propose an alternative viewpoint based on unrolled iterative estimation---a group of successive layers iteratively refine their estimates of the same features instead of computing an entirely new representation. We demonstrate that this viewpoint directly leads to the construction of highway and residual networks. Finally we provide preliminary experiments to discuss the similarities and differences between the two architectures.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper describes an alternative view on hierarchical feature representations in deep neural networks. The viewpoint of refining representations is well motivated and is in agreement with the success of recent model structures like ResNets. Pros: - Good motivation for the effectiveness of ResNets and Highway networks - Convincing analysis and evaluation Cons: - The effect of this finding of the interpretation of batch-normalization is only captured briefly but seems to be significant - Explanation of findings in (Zeiler & Fergus (2014)) using UIE viewpoint missing Remarks: - Missing word in line 223: *that it *is* valid*",
"role": "assistant"
}
]
| [
"The paper describes an alternative view on hierarchical feature representations in deep neural networks.",
"The viewpoint of refining representations is well motivated and is in agreement with the success of recent model structures like ResNets.",
"Pros: - Good motivation for the effectiveness of ResNets and Highway networks - Convincing analysis and evaluation Cons: - The effect of this finding of the interpretation of batch-normalization is only captured briefly but seems to be significant - Explanation of findings in (Zeiler & Fergus (2014)) using UIE viewpoint missing Remarks: - Missing word in line 223: *that it *is* valid*"
] | {
"criticism": 0,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 2,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 0,
"total": 3
} | 0 | 0.333333 | 0.333333 | 0.666667 | 0.666667 | 0.333333 | 0.333333 | 0 | 3 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 2.666667 | 1.088424 | 1.578243 |
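The preview columns above can be reproduced programmatically with the Hugging Face `datasets` library. The sketch below is illustrative only: `user/peer-review-criteria` is a placeholder repository id (the actual id is not shown in this preview) and the split name is assumed, but the column names match those in the viewer, and the per-category float columns in the rows above are consistent with `criteria_count[category] / criteria_count["total"]`.

```python
# Minimal sketch of loading and inspecting this dataset.
# NOTE: the repository id and split name are placeholders/assumptions;
# substitute the real values for this dataset.
from datasets import load_dataset

ds = load_dataset("user/peer-review-criteria", split="train")

row = ds[0]
print(row["title"])                  # paper title
print(row["review"][:200])           # start of the review text
print(len(row["review_sentences"]))  # number of segmented review sentences

# The per-category rate columns shown in the preview appear to equal
# count / total from the criteria_count dict (e.g. 16/19 = 0.842105):
counts = row["criteria_count"]
rates = {k: v / counts["total"] for k, v in counts.items() if k != "total"}
print(rates)

# The has_* flag columns are consistent with count > 0:
flags = {f"has_{k}": int(v > 0) for k, v in counts.items() if k != "total"}
print(flags)
```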