| column | dtype | observed range |
|---|---|---|
| title | string | lengths 2 to 153 |
| abstract | string | lengths 0 to 2.15k |
| review | string | lengths 97 to 8.43k |
| source | string | 4 classes |
| review_src | string | 1 class |
| thinking_trace | string | 1 class |
| conversations | list | length 2 |
| review_sentences | sequence | lengths 1 to 61 |
| criteria_count | dict | per-category sentence counts |
| criticism | float64 | 0 to 1 |
| example | float64 | 0 to 1 |
| importance_and_relevance | float64 | 0 to 1 |
| materials_and_methods | float64 | 0 to 1 |
| praise | float64 | 0 to 1 |
| presentation_and_reporting | float64 | 0 to 1 |
| results_and_discussion | float64 | 0 to 1 |
| suggestion_and_solution | float64 | 0 to 1 |
| total | int64 | 1 to 61 |
| has_criticism | int64 | 0 or 1 |
| has_example | int64 | 0 or 1 |
| has_importance_and_relevance | int64 | 0 or 1 |
| has_materials_and_methods | int64 | 0 or 1 |
| has_praise | int64 | 0 or 1 |
| has_presentation_and_reporting | int64 | 0 or 1 |
| has_results_and_discussion | int64 | 0 or 1 |
| has_suggestion_and_solution | int64 | 0 or 1 |
| reward_value | float64 | 2.13 to 6 |
| reward_value_length_adjusted | float64 | -34.13 to 3.73 |
| length_penalty | float64 | 0 to 36.4 |

Each record below lists these fields in order, pipe-separated.
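Note: a minimal sketch for loading these records, assuming the data is hosted as a standard Hugging Face dataset. The repository id below is a placeholder, not taken from this card; substitute the actual path.

```python
# Minimal loading sketch. "your-org/peer-review-rewards" is a hypothetical
# repository id (this card does not state one); replace it before running.
from datasets import load_dataset

ds = load_dataset("your-org/peer-review-rewards", split="train")

row = ds[0]
print(row["title"])           # paper title
print(row["criteria_count"])  # per-category sentence counts for the review
print(row["reward_value"])    # scalar reward derived from those counts
```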
Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing generators learn to *linearize semantics* in the latent space of such models. Intuitively, such latent spaces may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their existing form, GANs have no means of learning the inverse mapping -- projecting data back into the latent space. We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning. | The authors extend GANs by an inference path from the data space to the latent space and a discriminator that operates on the joint latent/data space. They show that the theoretical properties of GANs still hold for BiGAN and evaluate the features learned unsupervised in the inference path with respect to performance on supervised tasks after retraining deeper layers. I see one structural issue with this paper: Given that, as stated in the abstract, the main purpose of the paper is to learn unsupervised features (and not to improve GANs), the paper might spend too much space on detailing the relationship to GANs and all the theoretical properties. It is not clear whether they actually would help with the goal of learning good features. While reading the paper, I actually totally forgot about the unsupervised features until they reappeared on page 6. I think it would be helpful if the text of the paper were more aligned with this main story. Still, the BiGAN framework is an elegant and compelling extension to GANs. However, it is not obvious how much the theoretical properties help us as the model is clearly not fully converged. To me, especially Figure 4 seems to suggest that G(E(x)) might not be doing much more than some kind of nearest neighbour retrieval (and indeed one criticism for GANs has always been that they might just memorize some samples). By the way, it would be very interesting to know how well the discriminator actually performs after training. Coming back to the goal of learning powerful features: The method does not reach state-of-the-art performance on most evaluated tasks (Tables 2 and 3) but performs competitively, and it would be interesting to see how much this improves if the BiGAN training (and the convolutional architecture used) were improved. The paper is very well written and provides most necessary details, although some more details on the training (learning rates, initialization) would be helpful for reproducing the results. Overall I think the paper provides a very interesting framework for further research, even though the results presented here are not too impressive both with respect to the feature evaluation (and the GAN learning). Minor: It might be helpful to highlight the best performance numbers in Tables 2 and 3. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. 
Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences could be ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAdversarial Feature Learning\nThe ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing generators learn to *linearize semantics* in the latent space of such models. Intuitively, such latent spaces may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their existing form, GANs have no means of learning the inverse mapping -- projecting data back into the latent space. We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors extend GANs by an inference path from the data space to the latent space and a discriminator that operates on the joint latend/data space. They show that the theoretical properties of GANs still hold for BiGAN and evaluate the features learned unsupervised in the inference path with respect to performance on supervised tasks after retraining deeper layers. I see one structural issue with this paper: Given that, as stated in the abstract, the main purpose of the paper is to learn unsupervised features (and not to improve GANs), the paper might spent too much space on detailing the relationship to GANs and all the theoretical properties. It is not clear whether they actually would help with the goal of learning good features. While reading the paper, I actually totally forgot about the unsupervised features until they reappeared on page 6. I think it would be helpful if the text of the paper would be more aligned with this main story. Still, the BiGAN framework is an elegant and compelling extension to GANs. However, it is not obvious how much the theoretical properties help us as the model is clearly not fully converged. To me, especially Figure 4 seems to suggest that G(E(x)) might be doing not much more than some kind of nearest neighbour retrival (and indeed one criticism for GANs has always been that they might just memorize some samples). By the way, it would be very interesting to know how well the discriminator actually performs after training. Coming back to the goal of learning powerful features: The method does not reach state-of-the-art performance on most evaluated tasks (Table 2 and 3) but performs competitive and it would be interesting to see how much this improves if the BiGAN training (and the convolutional architecture used) would be improved. The paper is very well written and provides most necessary details, although some more details on the training (learning rates, initialization) would be helpful for reproducing the results. 
Overall I think the paper provides a very interesting framework for further research, even though the results presented here are not too impressive both with respect to the feature evaluation (and the GAN learning). Minor: It might be helpful to highlight the best performance numbers in Tables 2 and 3.",
"role": "assistant"
}
]
| [
"The authors extend GANs by an inference path from the data space to the latent space and a discriminator that operates on the joint latend/data space.",
"They show that the theoretical properties of GANs still hold for BiGAN and evaluate the features learned unsupervised in the inference path with respect to performance on supervised tasks after retraining deeper layers.",
"I see one structural issue with this paper: Given that, as stated in the abstract, the main purpose of the paper is to learn unsupervised features (and not to improve GANs), the paper might spent too much space on detailing the relationship to GANs and all the theoretical properties.",
"It is not clear whether they actually would help with the goal of learning good features.",
"While reading the paper, I actually totally forgot about the unsupervised features until they reappeared on page 6.",
"I think it would be helpful if the text of the paper would be more aligned with this main story.",
"Still, the BiGAN framework is an elegant and compelling extension to GANs.",
"However, it is not obvious how much the theoretical properties help us as the model is clearly not fully converged.",
"To me, especially Figure 4 seems to suggest that G(E(x)) might be doing not much more than some kind of nearest neighbour retrival (and indeed one criticism for GANs has always been that they might just memorize some samples).",
"By the way, it would be very interesting to know how well the discriminator actually performs after training.",
"Coming back to the goal of learning powerful features: The method does not reach state-of-the-art performance on most evaluated tasks (Table 2 and 3) but performs competitive and it would be interesting to see how much this improves if the BiGAN training (and the convolutional architecture used) would be improved.",
"The paper is very well written and provides most necessary details, although some more details on the training (learning rates, initialization) would be helpful for reproducing the results.",
"Overall I think the paper provides a very interesting framework for further research, even though the results presented here are not too impressive both with respect to the feature evaluation (and the GAN learning).",
"Minor: It might be helpful to highlight the best performance numbers in Tables 2 and 3."
] | {
"criticism": 5,
"example": 1,
"importance_and_relevance": 2,
"materials_and_methods": 7,
"praise": 4,
"presentation_and_reporting": 3,
"results_and_discussion": 4,
"suggestion_and_solution": 5,
"total": 14
} | 0.357143 | 0.071429 | 0.142857 | 0.5 | 0.285714 | 0.214286 | 0.285714 | 0.357143 | 14 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.214286 | 2.198503 | 0.015782 |
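Note: the numeric fields trailing each `criteria_count` dict appear to be derived from it. In every record shown on this card, each per-category float equals that category's count divided by `total` (5/14 = 0.357143 for criticism above), each `has_*` flag marks a nonzero count, and `reward_value` equals the sum of the eight per-category fractions. The sketch below reconstructs these fields under the assumption that this observed relationship is the intended definition rather than a coincidence of these samples.

```python
# Recompute the derived reward fields from a criteria_count dict. This mirrors
# the relationship observed in the sample records; it is inferred from the
# data shown on this card, not quoted from an official definition.
CATEGORIES = [
    "criticism", "example", "importance_and_relevance", "materials_and_methods",
    "praise", "presentation_and_reporting", "results_and_discussion",
    "suggestion_and_solution",
]

def derive_reward_fields(criteria_count: dict) -> dict:
    total = criteria_count["total"]
    fractions = {c: criteria_count[c] / total for c in CATEGORIES}
    has_flags = {f"has_{c}": int(criteria_count[c] > 0) for c in CATEGORIES}
    return {**fractions, **has_flags, "reward_value": sum(fractions.values())}

# Check against the record above: counts over total=14 sum to 2.214286.
counts = {"criticism": 5, "example": 1, "importance_and_relevance": 2,
          "materials_and_methods": 7, "praise": 4, "presentation_and_reporting": 3,
          "results_and_discussion": 4, "suggestion_and_solution": 5, "total": 14}
assert round(derive_reward_fields(counts)["reward_value"], 6) == 2.214286
```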
Batch Policy Gradient Methods for Improving Neural Conversation Models | We study reinforcement learning of chat-bots with recurrent neural network architectures when the rewards are noisy and expensive to obtain. For instance, a chat-bot used in automated customer service support can be scored by quality assurance agents, but this process can be expensive, time consuming and noisy. Previous reinforcement learning work for natural language uses on-policy updates and/or is designed for on-line learning settings. We demonstrate empirically that such strategies are not appropriate for this setting and develop an off-policy batch policy gradient method (BPG). We demonstrate the efficacy of our method via a series of synthetic experiments and an Amazon Mechanical Turk experiment on a restaurant recommendations dataset. | The authors propose to use an off-policy actor-critic algorithm in a batch setting to improve chat-bots. The approach is well motivated and the paper is well written, except for some intuitions for why the batch version outperforms the on-line version (see comments on *clarification regarding batch vs. online setting*). The artificial experiments are instructive, and the real-world experiments were performed very thoroughly although the results show only modest improvement. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences could be ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nBatch Policy Gradient Methods for Improving Neural Conversation Models\nWe study reinforcement learning of chat-bots with recurrent neural network architectures when the rewards are noisy and expensive to obtain. For instance, a chat-bot used in automated customer service support can be scored by quality assurance agents, but this process can be expensive, time consuming and noisy. Previous reinforcement learning work for natural language uses on-policy updates and/or is designed for on-line learning settings. We demonstrate empirically that such strategies are not appropriate for this setting and develop an off-policy batch policy gradient method (_x0008_pg). We demonstrate the efficacy of our method via a series of synthetic experiments and an Amazon Mechanical Turk experiment on a restaurant recommendations dataset.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe author propose to use a off-policy actor-critic algorithm in a batch-setting to improve chat-bots. The approach is well motivated and the paper is well written, except for some intuitions for why the batch version outperforms the on-line version (see comments on *clarification regarding batch vs. online setting*). The artificial experiments are instructive, and the real-world experiments were performed very thoroughly although the results show only modest improvement.",
"role": "assistant"
}
]
| [
"The author propose to use a off-policy actor-critic algorithm in a batch-setting to improve chat-bots.",
"The approach is well motivated and the paper is well written, except for some intuitions for why the batch version outperforms the on-line version (see comments on *clarification regarding batch vs. online setting*).",
"The artificial experiments are instructive, and the real-world experiments were performed very thoroughly although the results show only modest improvement."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 0,
"materials_and_methods": 3,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 3
} | 0 | 0 | 0 | 1 | 0.666667 | 0.333333 | 0.333333 | 0.333333 | 3 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 2.666667 | 1.088424 | 1.578243 |
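Note: in each record shown here, `reward_value_length_adjusted` equals `reward_value - length_penalty` (2.666667 - 1.578243 = 1.088424 for the record above), and shorter reviews draw larger penalties. How `length_penalty` itself is computed is not stated on this card, so the check below treats it as given rather than recomputing it.

```python
# Verify reward_value_length_adjusted == reward_value - length_penalty.
# The penalty formula is undocumented here, so it is read from the data.
def check_length_adjustment(ds) -> None:
    for row in ds:
        expected = row["reward_value"] - row["length_penalty"]
        assert abs(row["reward_value_length_adjusted"] - expected) < 1e-4, row["title"]
```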
Support Regularized Sparse Coding and Its Fast Encoder | Sparse coding represents a signal by a linear combination of only a few atoms of a learned over-complete dictionary. While sparse coding exhibits compelling performance for various machine learning tasks, the process of obtaining sparse code with fixed dictionary is independent for each data point without considering the geometric information and manifold structure of the entire data. We propose Support Regularized Sparse Coding (SRSC) which produces sparse codes that account for the manifold structure of the data by encouraging nearby data in the manifold to choose similar dictionary atoms. In this way, the obtained support regularized sparse codes capture the locally linear structure of the data manifold and enjoy robustness to data noise. We present the optimization algorithm of SRSC with theoretical guarantee for the optimization over the sparse codes. We also propose a feed-forward neural network termed Deep Support Regularized Sparse Coding (Deep-SRSC) as a fast encoder to approximate the sparse codes generated by SRSC. Extensive experimental results demonstrate the effectiveness of SRSC and Deep-SRSC. | The work proposes to use the geometry of data (that is considered to be known a priori) in order to have more consistent sparse coding. Namely, two data samples that are similar or neighbours should have a sparse code that is similar (in terms of support). The general idea is not unique, but it is an interesting one (if one admits that the adjacency matrix A is known a priori), and the novelty mostly lies in the definition of the regularisation term that is an l1-norm (while other techniques would mostly use l2 regularisation). Based on this idea, the authors develop a new SRSC algorithm, which is analysed in detail and shown to perform better than its competitors based on l2 sparse coding regularisation and other schemes in terms of clustering performance. Inspired by LISTA, the authors then propose an approximate solution to the SRSC problem, called Deep-SRSC, that acts as a sort of fast encoder. Here too, the idea is interesting and seems to be quite efficient from experiments on USPS data, even if the framework seems to be strongly inspired by LISTA. That scheme should, however, be better motivated by the limitations of SRSC, which should be presented more clearly. Overall, the paper is well written, and pretty complete. It is not extremely original in its main ideas though, but the actual algorithm and implementation seem new and effective. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences could be ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSupport Regularized Sparse Coding and Its Fast Encoder\nSparse coding represents a signal by a linear combination of only a few atoms of a learned over-complete dictionary. While sparse coding exhibits compelling performance for various machine learning tasks, the process of obtaining sparse code with fixed dictionary is independent for each data point without considering the geometric information and manifold structure of the entire data. We propose Support Regularized Sparse Coding (SRSC) which produces sparse codes that account for the manifold structure of the data by encouraging nearby data in the manifold to choose similar dictionary atoms. In this way, the obtained support regularized sparse codes capture the locally linear structure of the data manifold and enjoy robustness to data noise. We present the optimization algorithm of SRSC with theoretical guarantee for the optimization over the sparse codes. We also propose a feed-forward neural network termed Deep Support Regularized Sparse Coding (Deep-SRSC) as a fast encoder to approximate the sparse codes generated by SRSC. Extensive experimental results demonstrate the effectiveness of SRSC and Deep-SRSC.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe work proposes to use the geometry of data (that is considered to be known a priori) in order to have more consistent sparse coding. Namely, two data samples that are similar or neighbours, should have a sparse code that is similar (in terms of support). The general idea is not unique, but it is an interesting one (if one admits that the adjacency matrix A is known a priori), and the novelty mostly lies on the definition of the regularisation term that is an l1-norm (while other techniques would mostly use l2 regularisation). Based on this idea, the authors develop a new SRSC algorithm, which is analysed in detail and shown to perform better than its competitors based on l2 sparse coding regularisation and other schemes in terms of clustering performance. Inspired by LISTA, the authors then propose an approximate solution to the SRSC problem, called Deep-SRSC, that acts as a sort of fast encoder. Here too, the idea is interesting and seems to be quite efficient from experiments on USPS data, even if the framework seems to be strongly inspired from LISTA. That scheme should however be better motivated, by the limitations of SRSC that should be presented more clearly. Overall, the paper is well written, and pretty complete. It is not extremely original in its main ideas though, but the actual algorithm and implementation seem new and effective.",
"role": "assistant"
}
]
| [
"The work proposes to use the geometry of data (that is considered to be known a priori) in order to have more consistent sparse coding.",
"Namely, two data samples that are similar or neighbours, should have a sparse code that is similar (in terms of support).",
"The general idea is not unique, but it is an interesting one (if one admits that the adjacency matrix A is known a priori), and the novelty mostly lies on the definition of the regularisation term that is an l1-norm (while other techniques would mostly use l2 regularisation).",
"Based on this idea, the authors develop a new SRSC algorithm, which is analysed in detail and shown to perform better than its competitors based on l2 sparse coding regularisation and other schemes in terms of clustering performance.",
"Inspired by LISTA, the authors then propose an approximate solution to the SRSC problem, called Deep-SRSC, that acts as a sort of fast encoder.",
"Here too, the idea is interesting and seems to be quite efficient from experiments on USPS data, even if the framework seems to be strongly inspired from LISTA.",
"That scheme should however be better motivated, by the limitations of SRSC that should be presented more clearly.",
"Overall, the paper is well written, and pretty complete.",
"It is not extremely original in its main ideas though, but the actual algorithm and implementation seem new and effective."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 7,
"praise": 4,
"presentation_and_reporting": 3,
"results_and_discussion": 0,
"suggestion_and_solution": 2,
"total": 9
} | 0.111111 | 0 | 0.333333 | 0.777778 | 0.444444 | 0.333333 | 0 | 0.222222 | 9 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 2.222222 | 1.969703 | 0.252519 |
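Note: each `conversations` value is a two-turn chat like the ones above: a user turn carrying the review prompt with the title and abstract, and an assistant turn whose content is the fixed `thinking_trace` wrapped in `<think>...</think>` followed by the review text. A sketch for splitting the assistant turn back apart; the regex is an assumption based on the formatting visible in these records.

```python
import re

# Split an assistant turn into (thinking_trace, review), assuming the
# "<think>...</think>" framing seen in the records on this card.
THINK_RE = re.compile(r"^<think>(.*?)</think>\s*(.*)$", re.DOTALL)

def split_assistant_turn(content: str) -> tuple[str, str]:
    match = THINK_RE.match(content)
    if match is None:          # no trace present; treat everything as review
        return "", content.strip()
    trace, review = match.groups()
    return trace.strip(), review.strip()

# Usage with ds from the loading sketch above:
# trace, review = split_assistant_turn(ds[0]["conversations"][1]["content"])
```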
Support Regularized Sparse Coding and Its Fast Encoder | Sparse coding represents a signal by a linear combination of only a few atoms of a learned over-complete dictionary. While sparse coding exhibits compelling performance for various machine learning tasks, the process of obtaining sparse code with fixed dictionary is independent for each data point without considering the geometric information and manifold structure of the entire data. We propose Support Regularized Sparse Coding (SRSC) which produces sparse codes that account for the manifold structure of the data by encouraging nearby data in the manifold to choose similar dictionary atoms. In this way, the obtained support regularized sparse codes capture the locally linear structure of the data manifold and enjoy robustness to data noise. We present the optimization algorithm of SRSC with theoretical guarantee for the optimization over the sparse codes. We also propose a feed-forward neural network termed Deep Support Regularized Sparse Coding (Deep-SRSC) as a fast encoder to approximate the sparse codes generated by SRSC. Extensive experimental results demonstrate the effectiveness of SRSC and Deep-SRSC. | In this paper the authors propose a method to explicitly regularize sparse coding to encode neighbouring datapoints with similar sets of atoms from the dictionary by clustering training examples with KNN in input space. The resulting algorithm is relatively complex and computationally relatively expensive, but the authors provide detailed derivations and use arguments from proximal gradient descent methods to prove convergence (I did not follow all the derivations, only some). In general the paper is well written and the authors explain the motivation behind the algorithm's design in detail. In the abstract the authors mention “extensive experimental results …”, but I find the experiments not very convincing: With experiments on the USPS handwritten digits dataset (why not MNIST?), COIL-20 and COIL-100 and UCI, the datasets are all relatively small and the algorithm is run with dictionary sizes between p=100 and p=500. This seems surprising because the authors state that they implemented SRSC in “CUDA C++ with extreme efficiency” (page 10). But more importantly, I find it hard to interpret and compare the results: The paper reports accuracy and normalized mutual information for an image retrieval / clustering task where the proposed SRSC is used as a feature extractor. The improvements relative to standard Sparse Coding seem very small (often < 1% in terms of NMI; it looks more promising in terms of accuracy) and if I understand the description on page 11 correctly, then the test set was used to select some hyperparameters (the best similarity measure for clustering step)? There are no comparisons to other baselines / state of the art image clustering methods. Besides providing features for a small scale image clustering system, are there maybe ways to more directly evaluate the properties and qualities of a sparse coding approach? E.g. reconstruction error / sparsity; maybe even denoising performance? In summary, I think in its current form the paper lacks the evaluation and experimental results for an ICLR publication. Intuitively, I agree with the authors that the proposed regularization is an interesting direction, but I don’t see experiments that directly show that the regularization has the desired effect; and the improvements in the clustering task where SRSC is used as a feature extractor are very modest. 
| iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences could be ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSupport Regularized Sparse Coding and Its Fast Encoder\nSparse coding represents a signal by a linear combination of only a few atoms of a learned over-complete dictionary. While sparse coding exhibits compelling performance for various machine learning tasks, the process of obtaining sparse code with fixed dictionary is independent for each data point without considering the geometric information and manifold structure of the entire data. We propose Support Regularized Sparse Coding (SRSC) which produces sparse codes that account for the manifold structure of the data by encouraging nearby data in the manifold to choose similar dictionary atoms. In this way, the obtained support regularized sparse codes capture the locally linear structure of the data manifold and enjoy robustness to data noise. We present the optimization algorithm of SRSC with theoretical guarantee for the optimization over the sparse codes. We also propose a feed-forward neural network termed Deep Support Regularized Sparse Coding (Deep-SRSC) as a fast encoder to approximate the sparse codes generated by SRSC. Extensive experimental results demonstrate the effectiveness of SRSC and Deep-SRSC.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nIn this paper the authors propose a method to explicitly regularize sparse coding to encode neighbouring datapoints with similar sets of atoms from the dictionary by clustering training examples with KNN in input space. The resulting algorithm is relatively complex and computationally relatively expensive, but the authors provide detailed derivations and use arguments from proximal gradient descent methods to prove convergence (I did not follow all the derivations, only some). In general the paper is well written and the authors explain the motivation behind the algorithms design in detail. In the abstract the authors mention “extensive experimental results …”, but I find the experiments not very convincing: With experiments on the USPS handwritten digits dataset (why not MNIST?), COIL-20 and COIL-100 and UCI, the datasets are all relatively small and the algorithm is run with dictionary sizes between p=100 to p=500. This seems surprising because the authors state that they implemented SRSC in “CUDA C++ with extreme efficiency” (page 10). But more importantly, I find it hard to interpret and compare the results: The paper reports accuracy and and normalized mutual information for a image retrieval / clustering task where the proposed SRSC is used as a feature extractor. The improvements relative to standard Sparse Coding seem very small (often < 1% in terms of NMI; it looks more promising in terms of accuracy) and if I understand the description on page 11 correctly, than the test set was used to select some hyperparameters (the best similarity measure for clustering step)? There is no comparisons to other baselines / state of the art image clustering methods. Besides of providing features for a small scale image clustering system, are there maybe ways to more directly evaluate the properties and qualities of a sparse coding approach? E.g. reconstruction error / sparsity; maybe even denoising performance? In summary, I think in it current form the paper lacks the evaluation and experimental results for an ICLR publication. 
Intuitively, I agree with the authors that the proposed regularization is an interesting direction, but I don’t see experiments that directly show that the regularization has the desired effect; and the improvements in the clustering task where SRSC is used as a feature extractor are very modest.",
"role": "assistant"
}
]
| [
"In this paper the authors propose a method to explicitly regularize sparse coding to encode neighbouring datapoints with similar sets of atoms from the dictionary by clustering training examples with KNN in input space.",
"The resulting algorithm is relatively complex and computationally relatively expensive, but the authors provide detailed derivations and use arguments from proximal gradient descent methods to prove convergence (I did not follow all the derivations, only some).",
"In general the paper is well written and the authors explain the motivation behind the algorithms design in detail.",
"In the abstract the authors mention “extensive experimental results …”, but I find the experiments not very convincing: With experiments on the USPS handwritten digits dataset (why not MNIST?",
"), COIL-20 and COIL-100 and UCI, the datasets are all relatively small and the algorithm is run with dictionary sizes between p=100 and p=500.",
"This seems surprising because the authors state that they implemented SRSC in “CUDA C++ with extreme efficiency” (page 10).",
"But more importantly, I find it hard to interpret and compare the results: The paper reports accuracy and and normalized mutual information for a image retrieval / clustering task where the proposed SRSC is used as a feature extractor.",
"The improvements relative to standard Sparse Coding seem very small (often < 1% in terms of NMI; it looks more promising in terms of accuracy) and if I understand the description on page 11 correctly, than the test set was used to select some hyperparameters (the best similarity measure for clustering step)?",
"There is no comparisons to other baselines / state of the art image clustering methods.",
"Besides of providing features for a small scale image clustering system, are there maybe ways to more directly evaluate the properties and qualities of a sparse coding approach?",
"E.g.",
"reconstruction error / sparsity; maybe even denoising performance?",
"In summary, I think in it current form the paper lacks the evaluation and experimental results for an ICLR publication.",
"Intuitively, I agree with the authors that the proposed regularization is an interesting direction, but I don’t see experiments that directly show that the regularization has the desired effect; and the improvements in the clustering task where SRSC is used as a feature extractor are very modest."
] | {
"criticism": 6,
"example": 3,
"importance_and_relevance": 0,
"materials_and_methods": 12,
"praise": 2,
"presentation_and_reporting": 2,
"results_and_discussion": 6,
"suggestion_and_solution": 1,
"total": 14
} | 0.428571 | 0.214286 | 0 | 0.857143 | 0.142857 | 0.142857 | 0.428571 | 0.071429 | 14 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 2.285714 | 2.269932 | 0.015782 |
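Note: `review_sentences` is the `review` field split into sentences, and `total` in `criteria_count` matches its length. The splitter is imperfect; the stray "E.g." fragment above shows it breaking on abbreviations. A rejoin is therefore only approximate:

```python
# Reassemble a review from its sentence list. This inverts the sentence
# splitting only approximately: original whitespace is lost, and fragments
# from over-eager splits (such as "E.g.") are joined back as-is.
def join_sentences(sentences: list[str]) -> str:
    return " ".join(s.strip() for s in sentences)
```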
Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling | Recurrent neural networks have been very successful at predicting sequences of words in tasks such as language modeling. However, all such models are based on the conventional classification framework, where the model is trained against one-hot targets, and each word is represented both as an input and as an output in isolation. This causes inefficiencies in learning both in terms of utilizing all of the information and in terms of the number of parameters needed to train. We introduce a novel theoretical framework that facilitates better learning in language modeling, and show that our framework leads to tying together the input embedding and the output projection matrices, greatly reducing the number of trainable variables. Our framework leads to state of the art performance on the Penn Treebank with a variety of network models. | This work offers a theoretical justification for reusing the input word embedding in the output projection layer. It does so by proposing an additional loss that is designed to minimize the distance between the predictive distribution and an estimate of the true data distribution. This is a nice setup since it can effectively smooth over the labels given as input. However, the construction of the estimate of the true data distribution seems engineered to provide the weight tying justification in Eqs. 3.6 and 3.7. It is not obvious why the projection matrix L in Eq 3.6 (let's rename it to L') should be the same as that in Eq. 2.1. For example, L' could be obtained through word2vec embeddings trained on a large dataset or it could be learned as an additional set of parameters. In the case that L' is a new learned matrix, it seems the result in Eq 4.5 is to use an independent matrix for the output projection layer, as is usually done. The experimental results are good and provide support for the approximate derivation done in section 4, particularly the distance plots in figure 1. Minor comments: Third line in abstract: where model -> where the model. Second line in section 7: into space -> into the space. Shouldn't the RHS in Eq 3.5 be \sum \tilde{y_{t,i}}(\frac{\hat{y}_t}{\tilde{y_{t,i}}} - e_i)? | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences could be ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. 
Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling\nRecurrent neural networks have been very successful at predicting sequences of words in tasks such as language modeling. However, all such models are based on the conventional classification framework, where the model is trained against one-hot targets, and each word is represented both as an input and as an output in isolation. This causes inefficiencies in learning both in terms of utilizing all of the information and in terms of the number of parameters needed to train. We introduce a novel theoretical framework that facilitates better learning in language modeling, and show that our framework leads to tying together the input embedding and the output projection matrices, greatly reducing the number of trainable variables. Our framework leads to state of the art performance on the Penn Treebank with a variety of network models.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis work offers a theoretical justification for reusing the input word embedding in the output projection layer. It does by proposing an additional loss that is designed to minimize the distance between the predictive distribution and an estimate of the true data distribution. This is a nice setup since it can effectively smooth over the labels given as input. However, the construction of the estimate of the true data distribution seems engineered to provide the weight tying justification in Eqs. 3.6 and 3.7. It is not obvious why the projection matrix L in Eq 3.6 (let*s rename it to L*) should be the same as that in Eq. 2.1. For example, L* could be obtained through word2vec embeddings trained on a large dataset or it could be learned as an additional set of parameters. In the case that L* is a new learned matrix, it seems the result in Eq 4.5 is to use an independent matrix for the output projection layer, as is usually done. The experimental results are good and provide support for the approximate derivation done in section 4, particularly the distance plots in figure 1. Minor comments: Third line in abstract: where model -> where the model Second line in section 7: into space -> into the space Shouldn*t the RHS in Eq 3.5 be sum \tilde{y_{t,i}}(frac{hat{y}_t}{\tilde{y_{t,i}}} - e_i) ?",
"role": "assistant"
}
]
| [
"This work offers a theoretical justification for reusing the input word embedding in the output projection layer.",
"It does by proposing an additional loss that is designed to minimize the distance between the predictive distribution and an estimate of the true data distribution.",
"This is a nice setup since it can effectively smooth over the labels given as input.",
"However, the construction of the estimate of the true data distribution seems engineered to provide the weight tying justification in Eqs.",
"3.6 and 3.7.",
"It is not obvious why the projection matrix L in Eq 3.6 (let*s rename it to L*) should be the same as that in Eq.",
"2.1.",
"For example, L* could be obtained through word2vec embeddings trained on a large dataset or it could be learned as an additional set of parameters.",
"In the case that L* is a new learned matrix, it seems the result in Eq 4.5 is to use an independent matrix for the output projection layer, as is usually done.",
"The experimental results are good and provide support for the approximate derivation done in section 4, particularly the distance plots in figure 1.",
"Minor comments: Third line in abstract: where model -> where the model Second line in section 7: into space -> into the space Shouldn*t the RHS in Eq 3.5 be sum \tilde{y_{t,i}}(frac{hat{y}_t}{\tilde{y_{t,i}}} - e_i) ?"
] | {
"criticism": 1,
"example": 2,
"importance_and_relevance": 0,
"materials_and_methods": 6,
"praise": 2,
"presentation_and_reporting": 5,
"results_and_discussion": 4,
"suggestion_and_solution": 4,
"total": 11
} | 0.090909 | 0.181818 | 0 | 0.545455 | 0.181818 | 0.454545 | 0.363636 | 0.363636 | 11 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 2.181818 | 2.118688 | 0.06313 |
Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling | Recurrent neural networks have been very successful at predicting sequences of words in tasks such as language modeling. However, all such models are based on the conventional classification framework, where the model is trained against one-hot targets, and each word is represented both as an input and as an output in isolation. This causes inefficiencies in learning both in terms of utilizing all of the information and in terms of the number of parameters needed to train. We introduce a novel theoretical framework that facilitates better learning in language modeling, and show that our framework leads to tying together the input embedding and the output projection matrices, greatly reducing the number of trainable variables. Our framework leads to state of the art performance on the Penn Treebank with a variety of network models. | This paper provides a theoretical framework for tying parameters between input word embeddings and output word representations in the softmax. Experiments on PTB show significant improvement. The idea of sharing or tying weights between input and output word embeddings is not new (as noted by others in this thread), which I see as the main negative side of the paper. The proposed justification appears new to me though, and certainly interesting. I was concerned that results are only given on one dataset, PTB, which is now kind of old in that literature. I'm glad the authors tried at least one more dataset, and I think it would be nice to find a way to include these results in the paper if accepted. Have you considered using character or sub-word units in that context? | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences could be ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling\nRecurrent neural networks have been very successful at predicting sequences of words in tasks such as language modeling. However, all such models are based on the conventional classification framework, where the model is trained against one-hot targets, and each word is represented both as an input and as an output in isolation. This causes inefficiencies in learning both in terms of utilizing all of the information and in terms of the number of parameters needed to train. We introduce a novel theoretical framework that facilitates better learning in language modeling, and show that our framework leads to tying together the input embedding and the output projection matrices, greatly reducing the number of trainable variables. Our framework leads to state of the art performance on the Penn Treebank with a variety of network models.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper provides a theoretical framework for tying parameters between input word embeddings and output word representations in the softmax. Experiments on PTB shows significant improvement. The idea of sharing or tying weights between input and output word embeddings is not new (as noted by others in this thread), which I see as the main negative side of the paper. The proposed justification appears new to me though, and certainly interesting. I was concerned that results are only given on one dataset, PTB, which is now kind of old in that literature. I*m glad the authors tried at least one more dataset, and I think it would be nice to find a way to include these results in the paper if accepted. Have you considered using character or sub-word units in that context?",
"role": "assistant"
}
]
| [
"This paper provides a theoretical framework for tying parameters between input word embeddings and output word representations in the softmax.",
"Experiments on PTB shows significant improvement.",
"The idea of sharing or tying weights between input and output word embeddings is not new (as noted by others in this thread), which I see as the main negative side of the paper.",
"The proposed justification appears new to me though, and certainly interesting.",
"I was concerned that results are only given on one dataset, PTB, which is now kind of old in that literature.",
"I*m glad the authors tried at least one more dataset, and I think it would be nice to find a way to include these results in the paper if accepted.",
"Have you considered using character or sub-word units in that context?"
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 5,
"praise": 2,
"presentation_and_reporting": 4,
"results_and_discussion": 2,
"suggestion_and_solution": 1,
"total": 7
} | 0.428571 | 0 | 0.142857 | 0.714286 | 0.285714 | 0.571429 | 0.285714 | 0.142857 | 7 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.571429 | 2.003261 | 0.568167 |
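Editor's note: the record above discusses tying a language model's input embedding to its output projection. A minimal PyTorch sketch of that tying trick follows; it is illustrative only — the LSTM, sizes, and class name are assumptions, not the paper's exact configuration.

```python
import torch.nn as nn

class TiedLM(nn.Module):
    """Language model whose input embedding and output softmax
    projection share one (vocab_size x dim) weight matrix."""
    def __init__(self, vocab_size, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        # Hidden size must equal embedding size for the shapes to match.
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.Linear(dim, vocab_size, bias=False)
        # Tying: the decoder reuses the embedding parameters, so the
        # vocabulary-sized matrix is stored (and trained) only once.
        self.decoder.weight = self.embed.weight

    def forward(self, tokens):             # tokens: (batch, seq_len)
        h, _ = self.rnn(self.embed(tokens))
        return self.decoder(h)             # logits: (batch, seq_len, vocab)
```

Training then proceeds with ordinary cross-entropy against the next token; the tying removes one of the two vocabulary-sized parameter matrices, which is the parameter reduction the abstract refers to.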
Discrete Variational Autoencoders | Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data; and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets. | This is an interesting paper on how to handle reparameterization in VAEs when you have discrete variables. The idea is to introduce a smoothing transformation that is shared between the generative model and the recognition model (leading to cancellations). A second contribution is to introduce an RBM as the prior model P(z) and to use autoregressive connections in generative and recognition models. The whole package becomes a bit entangled and complex and it is hard to figure out what causes the claimed good performance. Experiments that study these contributions separately would have been nice. The framework does become a little complex but this should not be a problem if nice software is delivered that can be used in a plug and play mode. Overall, the paper is very rich with ideas so I think it would be a great contribution to the conference. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDiscrete Variational Autoencoders\nProbabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data; and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis is an interesting paper on how to handle reparameterization in VAEs when you have discrete variables. The idea is to introduce a smoothing transformation that is shared between the generative model and the recognition model (leading to cancellations). A second contribution is to introduce an RBM as the prior model P(z) and to use autoregressive connections in generative and recognition models. The whole package becomes a bit entangled and complex and it is hard to figure out what causes the claimed good performance. Experiments that study these contributions separately would have been nice. The framework does become a little complex but this should not be a problem if nice software is delivered that can be used in a plug and play mode. Overall, the paper is very rich with ideas so I think it would be a great contribution to the conference.",
"role": "assistant"
}
]
| [
"This is an interesting paper on how to handle reparameterization in VAEs when you have discrete variables.",
"The idea is to introduce a smoothing transformation that is shared between the generative model and the recognition model (leading to cancellations).",
"A second contribution is to introduce an RBM as the prior model P(z) and to use autoregressive connections in generative and recognition models.",
"The whole package becomes a bit entangled and complex and it is hard to figure out what causes the claimed good performance.",
"Experiments that study these contributions separately would have been nice.",
"The framework does become a little complex but this should not be a problem if nice software is delivered that can be used in a plug and play mode.",
"Overall, the paper is very rich with ideas so I think it would be a great contribution to the conference."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 5,
"praise": 3,
"presentation_and_reporting": 1,
"results_and_discussion": 0,
"suggestion_and_solution": 2,
"total": 7
} | 0.142857 | 0 | 0.428571 | 0.714286 | 0.428571 | 0.142857 | 0 | 0.285714 | 7 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 2.142857 | 1.57469 | 0.568167 |
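Editor's note: this record concerns backpropagation through discrete latent variables. The paper's own shared smoothing transformation is not reproduced here; as a point of reference only, the sketch below shows the Gumbel-Softmax relaxation, a different but widely used trick for the same obstacle (the temperature and tensor shapes are arbitrary illustrative choices).

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=0.5):
    """Differentiable, approximately one-hot sample from a categorical
    distribution parameterized by `logits`."""
    u = torch.rand_like(logits)
    gumbel = -torch.log(-torch.log(u + 1e-20) + 1e-20)  # Gumbel(0,1) noise
    return F.softmax((logits + gumbel) / tau, dim=-1)

logits = torch.randn(4, 10, requires_grad=True)  # batch of 4, 10 categories
z = gumbel_softmax_sample(logits)
z.sum().backward()       # gradients reach `logits` despite the sampling step
```

Lower temperatures give samples closer to one-hot at the cost of noisier gradients; some form of continuous relaxation is what makes a reparameterized discrete sample differentiable at all.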
Learning Invariant Representations Of Planar Curves | We propose a metric learning framework for the construction of invariant geometric functions of planar curves for the Euclidean and Similarity group of transformations. We leverage on the representational power of convolutional neural networks to compute these geometric quantities. In comparison with axiomatic constructions, we show that the invariants approximated by the learning architectures have better numerical qualities such as robustness to noise, resiliency to sampling, as well as the ability to adapt to occlusion and partiality. Finally, we develop a novel multi-scale representation in a similarity metric learning paradigm. | Pros : - New representation with nice properties that are derived and compared with a mathematical baseline and background - A simple algorithm to obtain the representation Cons : - The paper sounds like an applied maths paper, but further analysis on the nature of the representation could be done, for instance, by understanding the nature of each layer, or at least, the first. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning Invariant Representations Of Planar Curves\nWe propose a metric learning framework for the construction of invariant geometric functions of planar curves for the Euclidean and Similarity group of transformations. We leverage on the representational power of convolutional neural networks to compute these geometric quantities. In comparison with axiomatic constructions, we show that the invariants approximated by the learning architectures have better numerical qualities such as robustness to noise, resiliency to sampling, as well as the ability to adapt to occlusion and partiality. Finally, we develop a novel multi-scale representation in a similarity metric learning paradigm.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nPros : - New representation with nice properties that are derived and compared with a mathematical baseline and background - A simple algorithm to obtain the representation Cons : - The paper sounds like an applied maths paper, but further analysis on the nature of the representation could be done, for instance, by understanding the nature of each layer, or at least, the first.",
"role": "assistant"
}
]
| [
"Pros : - New representation with nice properties that are derived and compared with a mathematical baseline and background - A simple algorithm to obtain the representation Cons : - The paper sounds like an applied maths paper, but further analysis on the nature of the representation could be done, for instance, by understanding the nature of each layer, or at least, the first."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 0,
"materials_and_methods": 1,
"praise": 1,
"presentation_and_reporting": 0,
"results_and_discussion": 0,
"suggestion_and_solution": 1,
"total": 1
} | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 3 | 0.72733 | 2.27267 |
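Editor's note: the abstract above frames invariant construction as similarity metric learning. To make that concrete, here is a generic contrastive loss of the kind typically used in siamese setups — an illustrative sketch under assumed shapes, not the paper's architecture or loss.

```python
import torch

def contrastive_loss(emb_a, emb_b, same, margin=1.0):
    """emb_a, emb_b: (batch, d) embeddings of two curves; same: (batch,)
    is 1.0 if the pair shows the same curve under a transformation, else 0.0.
    Pulls positive pairs together, pushes negatives at least `margin` apart."""
    d = torch.norm(emb_a - emb_b, dim=1)
    pos = same * d.pow(2)
    neg = (1.0 - same) * torch.clamp(margin - d, min=0.0).pow(2)
    return (pos + neg).mean()
```

Under such a loss, the network's output behaves like a learned invariant signature: Euclidean or similarity transforms of a curve map to nearby signatures, while distinct curves map apart.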
Reasoning with Memory Augmented Neural Networks for Language Comprehension | Hypothesis testing is an important cognitive process that supports human reasoning. In this paper, we introduce a computational hypothesis testing approach based on memory augmented neural networks. Our approach involves a hypothesis testing loop that reconsiders and progressively refines a previously formed hypothesis in order to generate new hypotheses to test. We apply the proposed approach to language comprehension task by using Neural Semantic Encoders (NSE). Our NSE models achieve the state-of-the-art results showing an absolute improvement of 1.2% to 2.6% accuracy over previous results obtained by single and ensemble systems on standard machine comprehension benchmarks such as the Children*s Book Test (CBT) and Who-Did-What (WDW) news article datasets. | First I would like to apologize for the delay in reviewing. Summary : this work introduces a novel memory based artificial neural network for reading comprehension. Experiments show improvement on state of the art. The originality of the approach seems to be on the implementation of an iterative procedure with a loop testing that the current answer is the correct one. In order to get a better sense of the reason for improvement it would be interesting to have a complexity and/or a time analysis of the algorithm. I might be mistaken but I don*t see you reporting anything on the actual number of loops necessary in the reported experiments. The dataset description in section 2.2, should be moved to section 4 where the other datasets are described. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nReasoning with Memory Augmented Neural Networks for Language Comprehension\nHypothesis testing is an important cognitive process that supports human reasoning. In this paper, we introduce a computational hypothesis testing approach based on memory augmented neural networks. Our approach involves a hypothesis testing loop that reconsiders and progressively refines a previously formed hypothesis in order to generate new hypotheses to test. We apply the proposed approach to language comprehension task by using Neural Semantic Encoders (NSE). Our NSE models achieve the state-of-the-art results showing an absolute improvement of 1.2% to 2.6% accuracy over previous results obtained by single and ensemble systems on standard machine comprehension benchmarks such as the Children*s Book Test (CBT) and Who-Did-What (WDW) news article datasets.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nFirst I would like to apologize for the delay in reviewing. Summary : this work introduces a novel memory based artificial neural network for reading comprehension. Experiments show improvement on state of the art. The originality of the approach seems to be on the implementation of an iterative procedure with a loop testing that the current answer is the correct one. In order to get a better sense of the reason for improvement it would be interesting to have a complexity and/or a time analysis of the algorithm. I might be mistaken but I don*t see you reporting anything on the actual number of loops necessary in the reported experiments. The dataset description in section 2.2, should be moved to section 4 where the other datasets are described.",
"role": "assistant"
}
]
| [
"First I would like to apologize for the delay in reviewing.",
"Summary : this work introduces a novel memory based artificial neural network for reading comprehension.",
"Experiments show improvement on state of the art.",
"The originality of the approach seems to be on the implementation of an iterative procedure with a loop testing that the current answer is the correct one.",
"In order to get a better sense of the reason for improvement it would be interesting to have a complexity and/or a time analysis of the algorithm.",
"I might be mistaken but I don*t see you reporting anything on the actual number of loops necessary in the reported experiments.",
"The dataset description in section 2.2, should be moved to section 4 where the other datasets are described."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 5,
"praise": 1,
"presentation_and_reporting": 1,
"results_and_discussion": 2,
"suggestion_and_solution": 3,
"total": 7
} | 0.142857 | 0 | 0.285714 | 0.714286 | 0.142857 | 0.142857 | 0.285714 | 0.428571 | 7 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.142857 | 1.57469 | 0.568167 |
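Editor's note: the review above asks about the cost and iteration count of the hypothesis-testing loop. The paper's NSE model is not reproduced here; the toy sketch below only illustrates the general shape of such a loop — the attention scheme, step budget, and stopping threshold are all invented for illustration.

```python
import torch
import torch.nn.functional as F

def refine_hypothesis(query, memory, candidates, steps=3, thresh=0.9):
    """query: (d,), memory: (T, d) encoded passage, candidates: (C, d).
    Iteratively reads memory to refine a hypothesis vector, stopping
    early once one candidate answer dominates."""
    h = query
    for _ in range(steps):
        attn = F.softmax(memory @ h, dim=0)        # (T,) weights over memory
        h = h + attn @ memory                      # read and refine hypothesis
        scores = F.softmax(candidates @ h, dim=0)  # (C,) candidate scores
        if scores.max() > thresh:                  # confident enough: stop
            break
    return scores.argmax().item(), scores
```

The time analysis the review requests would then amount to reporting the distribution of loop iterations actually executed per example, alongside wall-clock cost per iteration.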
Adversarial Machine Learning at Scale | Adversarial examples are malicious inputs designed to fool machine learning models. They often transfer from one model to another, allowing attackers to mount black box attacks without knowledge of the target model*s parameters. Adversarial training is the process of explicitly training a model on adversarial examples, in order to make it more robust to attack or to reduce its test error on clean inputs. So far, adversarial training has primarily been applied to small problems. In this research, we apply adversarial training to ImageNet. Our contributions include: (1) recommendations for how to succesfully scale adversarial training to large models and datasets, (2) the observation that adversarial training confers robustness to single-step attack methods, (3) the finding that multi-step attack methods are somewhat less transferable than single-step attack methods, so single-step attacks are the best for mounting black-box attacks, and (4) resolution of a ``label leaking** effect that causes adversarially trained models to perform better on adversarial examples than on clean examples, because the adversarial example construction process uses the true label and the model can learn to exploit regularities in the construction process. | This paper has two main contributions: (1) Applying adversarial training to imagenet, a larger dataset than previously considered (2) Comparing different adversarial training approaches, focusing importantly on the transferability of different methods. The authors also uncover and explain the label leaking effect which is an important contribution. This paper is clear, well written and does a good job of assessing and comparing adversarial training methods and understanding their relation to one another. A wide range of empirical results are shown which helps elucidate the adversarial training procedure. This paper makes an important contribution towards understand adversarial training and believe ICLR is an appropriate venue for this work. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAdversarial Machine Learning at Scale\nAdversarial examples are malicious inputs designed to fool machine learning models. They often transfer from one model to another, allowing attackers to mount black box attacks without knowledge of the target model*s parameters. Adversarial training is the process of explicitly training a model on adversarial examples, in order to make it more robust to attack or to reduce its test error on clean inputs. So far, adversarial training has primarily been applied to small problems. In this research, we apply adversarial training to ImageNet. Our contributions include: (1) recommendations for how to succesfully scale adversarial training to large models and datasets, (2) the observation that adversarial training confers robustness to single-step attack methods, (3) the finding that multi-step attack methods are somewhat less transferable than single-step attack methods, so single-step attacks are the best for mounting black-box attacks, and (4) resolution of a ``label leaking** effect that causes adversarially trained models to perform better on adversarial examples than on clean examples, because the adversarial example construction process uses the true label and the model can learn to exploit regularities in the construction process.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper has two main contributions: (1) Applying adversarial training to imagenet, a larger dataset than previously considered (2) Comparing different adversarial training approaches, focusing importantly on the transferability of different methods. The authors also uncover and explain the label leaking effect which is an important contribution. This paper is clear, well written and does a good job of assessing and comparing adversarial training methods and understanding their relation to one another. A wide range of empirical results are shown which helps elucidate the adversarial training procedure. This paper makes an important contribution towards understand adversarial training and believe ICLR is an appropriate venue for this work.",
"role": "assistant"
}
]
| [
"This paper has two main contributions: (1) Applying adversarial training to imagenet, a larger dataset than previously considered (2) Comparing different adversarial training approaches, focusing importantly on the transferability of different methods.",
"The authors also uncover and explain the label leaking effect which is an important contribution.",
"This paper is clear, well written and does a good job of assessing and comparing adversarial training methods and understanding their relation to one another.",
"A wide range of empirical results are shown which helps elucidate the adversarial training procedure.",
"This paper makes an important contribution towards understand adversarial training and believe ICLR is an appropriate venue for this work."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 4,
"materials_and_methods": 3,
"praise": 3,
"presentation_and_reporting": 1,
"results_and_discussion": 2,
"suggestion_and_solution": 0,
"total": 5
} | 0 | 0 | 0.8 | 0.6 | 0.6 | 0.2 | 0.4 | 0 | 5 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 2.6 | 1.589924 | 1.010076 |
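Editor's note: both reviews of this paper refer to single-step attacks; the canonical example is the fast gradient sign method (FGSM). A minimal PyTorch sketch follows — the epsilon value and the loss choice are illustrative assumptions, and inputs are assumed to live in [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """One-step adversarial example: move each input pixel by at most
    `eps` in the direction that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # stay in the valid pixel range
```

Adversarial training mixes such examples into each minibatch; per the abstract's "label leaking" finding, crafting training-time examples from the model's predicted label rather than the true label `y` avoids leaking the answer through the perturbation.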
Adversarial Machine Learning at Scale | Adversarial examples are malicious inputs designed to fool machine learning models. They often transfer from one model to another, allowing attackers to mount black box attacks without knowledge of the target model*s parameters. Adversarial training is the process of explicitly training a model on adversarial examples, in order to make it more robust to attack or to reduce its test error on clean inputs. So far, adversarial training has primarily been applied to small problems. In this research, we apply adversarial training to ImageNet. Our contributions include: (1) recommendations for how to succesfully scale adversarial training to large models and datasets, (2) the observation that adversarial training confers robustness to single-step attack methods, (3) the finding that multi-step attack methods are somewhat less transferable than single-step attack methods, so single-step attacks are the best for mounting black-box attacks, and (4) resolution of a ``label leaking** effect that causes adversarially trained models to perform better on adversarial examples than on clean examples, because the adversarial example construction process uses the true label and the model can learn to exploit regularities in the construction process. | This paper investigate the phenomenon of the adversarial examples and the adversarial training on the dataset of ImageNet. While the final conclusions are still vague, this paper raises several noteworthy finding from its experiments. The paper is well written and easy to follow. Although I still have some concerns about the paper (see the comments below), this paper has good contributions and worth to publish. Pros: For the first time in the literature, this paper proposed the concept of ‘label leaking’. Although its effect only becomes significant when the dataset is large, it should be carefully handled in the future research works along this line. Using the ratio of *clean accuracy* over ‘adversarial accuracy’ as the measure of robust is more reasonable compared to the existing works in the literature. Cons: Although the conclusions of the paper are based on the experiments on ImageNet, the title of the paper seems a little misleading. I consider Section 4 as the main contribution of the paper. Note that Section 4.3 and Section 4.4 are not specific to large-scale dataset, thus emphasizing the ‘large-scale’ in the title and in the introduction seems improper. Basically all the conclusions of the paper are made based on observing the experimental results. Further tests should have been performed to verify these hypotheses. Without that, the conclusions of the paper seems rushy. For example, one dataset of imageNet can not infer the conclusions for all large-scale datasets. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? 
Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAdversarial Machine Learning at Scale\nAdversarial examples are malicious inputs designed to fool machine learning models. They often transfer from one model to another, allowing attackers to mount black box attacks without knowledge of the target model*s parameters. Adversarial training is the process of explicitly training a model on adversarial examples, in order to make it more robust to attack or to reduce its test error on clean inputs. So far, adversarial training has primarily been applied to small problems. In this research, we apply adversarial training to ImageNet. Our contributions include: (1) recommendations for how to succesfully scale adversarial training to large models and datasets, (2) the observation that adversarial training confers robustness to single-step attack methods, (3) the finding that multi-step attack methods are somewhat less transferable than single-step attack methods, so single-step attacks are the best for mounting black-box attacks, and (4) resolution of a ``label leaking** effect that causes adversarially trained models to perform better on adversarial examples than on clean examples, because the adversarial example construction process uses the true label and the model can learn to exploit regularities in the construction process.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper investigate the phenomenon of the adversarial examples and the adversarial training on the dataset of ImageNet. While the final conclusions are still vague, this paper raises several noteworthy finding from its experiments. The paper is well written and easy to follow. Although I still have some concerns about the paper (see the comments below), this paper has good contributions and worth to publish. Pros: For the first time in the literature, this paper proposed the concept of ‘label leaking’. Although its effect only becomes significant when the dataset is large, it should be carefully handled in the future research works along this line. Using the ratio of *clean accuracy* over ‘adversarial accuracy’ as the measure of robust is more reasonable compared to the existing works in the literature. Cons: Although the conclusions of the paper are based on the experiments on ImageNet, the title of the paper seems a little misleading. I consider Section 4 as the main contribution of the paper. Note that Section 4.3 and Section 4.4 are not specific to large-scale dataset, thus emphasizing the ‘large-scale’ in the title and in the introduction seems improper. Basically all the conclusions of the paper are made based on observing the experimental results. Further tests should have been performed to verify these hypotheses. Without that, the conclusions of the paper seems rushy. For example, one dataset of imageNet can not infer the conclusions for all large-scale datasets.",
"role": "assistant"
}
]
| [
"This paper investigate the phenomenon of the adversarial examples and the adversarial training on the dataset of ImageNet.",
"While the final conclusions are still vague, this paper raises several noteworthy finding from its experiments.",
"The paper is well written and easy to follow.",
"Although I still have some concerns about the paper (see the comments below), this paper has good contributions and worth to publish.",
"Pros: For the first time in the literature, this paper proposed the concept of ‘label leaking’.",
"Although its effect only becomes significant when the dataset is large, it should be carefully handled in the future research works along this line.",
"Using the ratio of *clean accuracy* over ‘adversarial accuracy’ as the measure of robust is more reasonable compared to the existing works in the literature.",
"Cons: Although the conclusions of the paper are based on the experiments on ImageNet, the title of the paper seems a little misleading.",
"I consider Section 4 as the main contribution of the paper.",
"Note that Section 4.3 and Section 4.4 are not specific to large-scale dataset, thus emphasizing the ‘large-scale’ in the title and in the introduction seems improper.",
"Basically all the conclusions of the paper are made based on observing the experimental results.",
"Further tests should have been performed to verify these hypotheses.",
"Without that, the conclusions of the paper seems rushy.",
"For example, one dataset of imageNet can not infer the conclusions for all large-scale datasets."
] | {
"criticism": 4,
"example": 2,
"importance_and_relevance": 6,
"materials_and_methods": 8,
"praise": 4,
"presentation_and_reporting": 3,
"results_and_discussion": 6,
"suggestion_and_solution": 3,
"total": 14
} | 0.285714 | 0.142857 | 0.428571 | 0.571429 | 0.285714 | 0.214286 | 0.428571 | 0.214286 | 14 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.571429 | 2.555646 | 0.015782 |
Predicting Medications from Diagnostic Codes with Recurrent Neural Networks | It is a surprising fact that electronic medical records are failing at one of their primary purposes, that of tracking the set of medications that the patient is actively taking. Studies estimate that up to 50% of such lists omit active drugs, and that up to 25% of all active medications do not appear on the appropriate patient list. Manual efforts to maintain these lists involve a great deal of tedious human labor, which could be reduced by computational tools to suggest likely missing or incorrect medications on a patient’s list. We report here an application of recurrent neural networks to predict the likely therapeutic classes of medications that a patient is taking, given a sequence of the last 100 billing codes in their record. Our best model was a GRU that achieved high prediction accuracy (micro-averaged AUC 0.93, Label Ranking Loss 0.076), limited by hardware constraints on model size. Additionally, examining individual cases revealed that many of the predictions marked incorrect were likely to be examples of either omitted medications or omitted billing codes, supporting our assertion of a substantial number of errors and omissions in the data, and the likelihood of models such as these to help correct them. | This is a well written, organized, and presented paper that I enjoyed reading. I commend the authors on their attention to the narrative and the explanations. While it did not present any new methodology or architecture, it instead addressed an important application of predicting the medications a patient is using, given the record of billing codes. The dataset they use is impressive and useful and, frankly, more interesting than the typical toy datasets in machine learning. That said, the investigation of those results was not as deep as I thought it should have been in an empirical/applications paper. Despite their focus on the application, I was encouraged to see the authors use cutting edge choices (eg Keras, adadelta, etc) in their architecture. A few points of criticism: -The numerical results are in my view too brief. Fig 4 is anecdotal, Fig 5 is essentially a negative result (tSNE is only in some places interpretable), so that leaves Table 1. I recognize there is only one dataset, but this does not offer a vast amount of empirical evidence and analysis that one might expect out of a paper with no major algorithmic/theoretical advances. To be clear I don*t think this is disqualifying or deeply concerning; I simply found it a bit underwhelming. - To be constructive, re the results I would recommend removing Fig 5 and replacing that with some more meaningful analysis of performance. I found Fig 5 to be mostly uninformative, other than as a negative result, which I think can be stated in a sentence rather than in a large figure. - There is a bit of jargon used and expertise required that may not be familiar to the typical ICLR reader. I saw that another reviewer suggested perhaps ICLR is not the right venue for this work. While I certainly see the reviewer*s point that a medical or healthcare venue may be more suitable, I do want to cast my vote of keeping this paper here... our community benefits from more thoughtful and in depth applications. 
Instead I think this can be addressed by tightening up those points of jargon and making the results more easy to evaluate by an ICLR reader (that is, as it stands now researchers without medical experience have to take your results after Table 1 on faith, rather than getting to apply their well-trained quantitative eye). Overall, a nice paper. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPredicting Medications from Diagnostic Codes with Recurrent Neural Networks\nIt is a surprising fact that electronic medical records are failing at one of their primary purposes, that of tracking the set of medications that the patient is actively taking. Studies estimate that up to 50% of such lists omit active drugs, and that up to 25% of all active medications do not appear on the appropriate patient list. Manual efforts to maintain these lists involve a great deal of tedious human labor, which could be reduced by computational tools to suggest likely missing or incorrect medications on a patient’s list. We report here an application of recurrent neural networks to predict the likely therapeutic classes of medications that a patient is taking, given a sequence of the last 100 billing codes in their record. Our best model was a GRU that achieved high prediction accuracy (micro-averaged AUC 0.93, Label Ranking Loss 0.076), limited by hardware constraints on model size. Additionally, examining individual cases revealed that many of the predictions marked incorrect were likely to be examples of either omitted medications or omitted billing codes, supporting our assertion of a substantial number of errors and omissions in the data, and the likelihood of models such as these to help correct them.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis is a well written, organized, and presented paper that I enjoyed reading. I commend the authors on their attention to the narrative and the explanations. While it did not present any new methodology or architecture, it instead addressed an important application of predicting the medications a patient is using, given the record of billing codes. The dataset they use is impressive and useful and, frankly, more interesting than the typical toy datasets in machine learning. That said, the investigation of those results was not as deep as I thought it should have been in an empirical/applications paper. Despite their focus on the application, I was encouraged to see the authors use cutting edge choices (eg Keras, adadelta, etc) in their architecture. A few points of criticism: -The numerical results are in my view too brief. Fig 4 is anecdotal, Fig 5 is essentially a negative result (tSNE is only in some places interpretable), so that leaves Table 1. I recognize there is only one dataset, but this does not offer a vast amount of empirical evidence and analysis that one might expect out of a paper with no major algorithmic/theoretical advances. To be clear I don*t think this is disqualifying or deeply concerning; I simply found it a bit underwhelming. - To be constructive, re the results I would recommend removing Fig 5 and replacing that with some more meaningful analysis of performance. I found Fig 5 to be mostly uninformative, other than as a negative result, which I think can be stated in a sentence rather than in a large figure. - There is a bit of jargon used and expertise required that may not be familiar to the typical ICLR reader. I saw that another reviewer suggested perhaps ICLR is not the right venue for this work. While I certainly see the reviewer*s point that a medical or healthcare venue may be more suitable, I do want to cast my vote of keeping this paper here... our community benefits from more thoughtful and in depth applications. 
Instead I think this can be addressed by tightening up those points of jargon and making the results more easy to evaluate by an ICLR reader (that is, as it stands now researchers without medical experience have to take your results after Table 1 on faith, rather than getting to apply their well-trained quantitative eye). Overall, a nice paper.",
"role": "assistant"
}
]
| [
"This is a well written, organized, and presented paper that I enjoyed reading.",
"I commend the authors on their attention to the narrative and the explanations.",
"While it did not present any new methodology or architecture, it instead addressed an important application of predicting the medications a patient is using, given the record of billing codes.",
"The dataset they use is impressive and useful and, frankly, more interesting than the typical toy datasets in machine learning.",
"That said, the investigation of those results was not as deep as I thought it should have been in an empirical/applications paper.",
"Despite their focus on the application, I was encouraged to see the authors use cutting edge choices (eg Keras, adadelta, etc) in their architecture.",
"A few points of criticism: -The numerical results are in my view too brief.",
"Fig 4 is anecdotal, Fig 5 is essentially a negative result (tSNE is only in some places interpretable), so that leaves Table 1.",
"I recognize there is only one dataset, but this does not offer a vast amount of empirical evidence and analysis that one might expect out of a paper with no major algorithmic/theoretical advances.",
"To be clear I don*t think this is disqualifying or deeply concerning; I simply found it a bit underwhelming.",
"- To be constructive, re the results I would recommend removing Fig 5 and replacing that with some more meaningful analysis of performance.",
"I found Fig 5 to be mostly uninformative, other than as a negative result, which I think can be stated in a sentence rather than in a large figure.",
"- There is a bit of jargon used and expertise required that may not be familiar to the typical ICLR reader.",
"I saw that another reviewer suggested perhaps ICLR is not the right venue for this work.",
"While I certainly see the reviewer*s point that a medical or healthcare venue may be more suitable, I do want to cast my vote of keeping this paper here... our community benefits from more thoughtful and in depth applications.",
"Instead I think this can be addressed by tightening up those points of jargon and making the results more easy to evaluate by an ICLR reader (that is, as it stands now researchers without medical experience have to take your results after Table 1 on faith, rather than getting to apply their well-trained quantitative eye).",
"Overall, a nice paper."
] | {
"criticism": 7,
"example": 3,
"importance_and_relevance": 4,
"materials_and_methods": 7,
"praise": 5,
"presentation_and_reporting": 6,
"results_and_discussion": 6,
"suggestion_and_solution": 4,
"total": 17
} | 0.411765 | 0.176471 | 0.235294 | 0.411765 | 0.294118 | 0.352941 | 0.352941 | 0.235294 | 17 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.470588 | 2.218069 | 0.252519 |
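Editor's note: the record above describes a GRU over the last 100 billing codes with multi-label outputs over therapeutic classes. A compact sketch of that model family follows; all sizes and names are placeholders, not the paper's configuration.

```python
import torch.nn as nn

class MedPredictor(nn.Module):
    """GRU over a sequence of billing codes, predicting a multi-label
    vector of therapeutic classes with independent sigmoid outputs."""
    def __init__(self, n_codes, n_classes, dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(n_codes, dim)
        self.gru = nn.GRU(dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, codes):               # codes: (batch, seq_len) int ids
        _, h = self.gru(self.embed(codes))  # h: (1, batch, hidden)
        return self.out(h.squeeze(0))       # per-class logits

# Multi-label training pairs the logits with a sigmoid cross-entropy:
# loss = nn.BCEWithLogitsLoss()(model(codes), targets.float())
```

Micro-averaged AUC, as reported in the abstract, would then be computed over the pooled per-class sigmoid scores across all patients and therapeutic classes.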
Predicting Medications from Diagnostic Codes with Recurrent Neural Networks | It is a surprising fact that electronic medical records are failing at one of their primary purposes, that of tracking the set of medications that the patient is actively taking. Studies estimate that up to 50% of such lists omit active drugs, and that up to 25% of all active medications do not appear on the appropriate patient list. Manual efforts to maintain these lists involve a great deal of tedious human labor, which could be reduced by computational tools to suggest likely missing or incorrect medications on a patient’s list. We report here an application of recurrent neural networks to predict the likely therapeutic classes of medications that a patient is taking, given a sequence of the last 100 billing codes in their record. Our best model was a GRU that achieved high prediction accuracy (micro-averaged AUC 0.93, Label Ranking Loss 0.076), limited by hardware constraints on model size. Additionally, examining individual cases revealed that many of the predictions marked incorrect were likely to be examples of either omitted medications or omitted billing codes, supporting our assertion of a substantial number of errors and omissions in the data, and the likelihood of models such as these to help correct them. | This is a well-conducted and well-written study on the prediction of medication from diagnostic codes. The authors compared GRUs, LSTMs, feed-forward networks and random forests (making a case for why random forests should be used, instead of SVMs) and analysed the predictions and embeddings. The authors also did address the questions of the reviewers. My only negative point is that this work might be more relevant for a data science or medical venue rather than at ICLR. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPredicting Medications from Diagnostic Codes with Recurrent Neural Networks\nIt is a surprising fact that electronic medical records are failing at one of their primary purposes, that of tracking the set of medications that the patient is actively taking. Studies estimate that up to 50% of such lists omit active drugs, and that up to 25% of all active medications do not appear on the appropriate patient list. Manual efforts to maintain these lists involve a great deal of tedious human labor, which could be reduced by computational tools to suggest likely missing or incorrect medications on a patient’s list. We report here an application of recurrent neural networks to predict the likely therapeutic classes of medications that a patient is taking, given a sequence of the last 100 billing codes in their record. Our best model was a GRU that achieved high prediction accuracy (micro-averaged AUC 0.93, Label Ranking Loss 0.076), limited by hardware constraints on model size. Additionally, examining individual cases revealed that many of the predictions marked incorrect were likely to be examples of either omitted medications or omitted billing codes, supporting our assertion of a substantial number of errors and omissions in the data, and the likelihood of models such as these to help correct them.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis is a well-conducted and well-written study on the prediction of medication from diagnostic codes. The authors compared GRUs, LSTMs, feed-forward networks and random forests (making a case for why random forests should be used, instead of SVMs) and analysed the predictions and embeddings. The authors also did address the questions of the reviewers. My only negative point is that this work might be more relevant for a data science or medical venue rather than at ICLR.",
"role": "assistant"
}
]
| [
"This is a well-conducted and well-written study on the prediction of medication from diagnostic codes.",
"The authors compared GRUs, LSTMs, feed-forward networks and random forests (making a case for why random forests should be used, instead of SVMs) and analysed the predictions and embeddings.",
"The authors also did address the questions of the reviewers.",
"My only negative point is that this work might be more relevant for a data science or medical venue rather than at ICLR."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 3,
"praise": 1,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 4
} | 0 | 0 | 0.5 | 0.75 | 0.25 | 0.25 | 0.25 | 0.25 | 4 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.25 | 0.971623 | 1.278377 |
Inductive Bias of Deep Convolutional Networks through Pooling Geometry | Our formal understanding of the inductive bias that drives the success of convolutional networks on computer vision tasks is limited. In particular, it is unclear what makes hypotheses spaces born from convolution and pooling operations so suitable for natural images. In this paper we study the ability of convolutional networks to model correlations among regions of their input. We theoretically analyze convolutional arithmetic circuits, and empirically validate our findings on other types of convolutional networks as well. Correlations are formalized through the notion of separation rank, which for a given partition of the input, measures how far a function is from being separable. We show that a polynomially sized deep network supports exponentially high separation ranks for certain input partitions, while being limited to polynomial separation ranks for others. The network*s pooling geometry effectively determines which input partitions are favored, thus serves as a means for controlling the inductive bias. Contiguous pooling windows as commonly employed in practice favor interleaved partitions over coarse ones, orienting the inductive bias towards the statistics of natural images. Other pooling schemes lead to different preferences, and this allows tailoring the network to data that departs from the usual domain of natural imagery. In addition to analyzing deep networks, we show that shallow ones support only linear separation ranks, and by this gain insight into the benefit of functions brought forth by depth - they are able to efficiently model strong correlation under favored partitions of the input. | This paper investigates the fact why deep networks perform well in practice and how modifying the geometry of pooling can make the polynomially sized deep network to provide a function with exponentially high separation rank (for certain partitioning.) In the authors* previous works, they showed the superiority of deep networks over shallows when the activation function is ReLu and the pooling is max/mean pooling but in the current paper there is no activation function after conv and the pooling is just a multiplication of the node values. Although for the experimental results they*ve considered both scenarios. Actually, the general reasoning for this problem is hard, therefore, this drawback is not significant and the current contribution adds a reasonable amount of knowledge to the literature. This paper studies the convolutional arithmetic circuits and shows how this model can address the inductive biases and how pooling can adjust these biases. This interesting contribution gives an intuition about how deep network can capture the correlation between the input variables when its size is polynomial but and correlation is exponential. It worth to note that although the authors tried to express their notation and definitions carefully where they were very successful, it would be helpful if they elaborate a bit more on their definitions, expressions, and conclusions in the sense to make them more accessible. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. 
Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nInductive Bias of Deep Convolutional Networks through Pooling Geometry\nOur formal understanding of the inductive bias that drives the success of convolutional networks on computer vision tasks is limited. In particular, it is unclear what makes hypotheses spaces born from convolution and pooling operations so suitable for natural images. In this paper we study the ability of convolutional networks to model correlations among regions of their input. We theoretically analyze convolutional arithmetic circuits, and empirically validate our findings on other types of convolutional networks as well. Correlations are formalized through the notion of separation rank, which for a given partition of the input, measures how far a function is from being separable. We show that a polynomially sized deep network supports exponentially high separation ranks for certain input partitions, while being limited to polynomial separation ranks for others. The network*s pooling geometry effectively determines which input partitions are favored, thus serves as a means for controlling the inductive bias. Contiguous pooling windows as commonly employed in practice favor interleaved partitions over coarse ones, orienting the inductive bias towards the statistics of natural images. Other pooling schemes lead to different preferences, and this allows tailoring the network to data that departs from the usual domain of natural imagery. In addition to analyzing deep networks, we show that shallow ones support only linear separation ranks, and by this gain insight into the benefit of functions brought forth by depth - they are able to efficiently model strong correlation under favored partitions of the input.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper investigates the fact why deep networks perform well in practice and how modifying the geometry of pooling can make the polynomially sized deep network to provide a function with exponentially high separation rank (for certain partitioning.) In the authors* previous works, they showed the superiority of deep networks over shallows when the activation function is ReLu and the pooling is max/mean pooling but in the current paper there is no activation function after conv and the pooling is just a multiplication of the node values. Although for the experimental results they*ve considered both scenarios. Actually, the general reasoning for this problem is hard, therefore, this drawback is not significant and the current contribution adds a reasonable amount of knowledge to the literature. This paper studies the convolutional arithmetic circuits and shows how this model can address the inductive biases and how pooling can adjust these biases. This interesting contribution gives an intuition about how deep network can capture the correlation between the input variables when its size is polynomial but and correlation is exponential. It worth to note that although the authors tried to express their notation and definitions carefully where they were very successful, it would be helpful if they elaborate a bit more on their definitions, expressions, and conclusions in the sense to make them more accessible.",
"role": "assistant"
}
]
| [
"This paper investigates the fact why deep networks perform well in practice and how modifying the geometry of pooling can make the polynomially sized deep network to provide a function with exponentially high separation rank (for certain partitioning.)",
"In the authors* previous works, they showed the superiority of deep networks over shallows when the activation function is ReLu and the pooling is max/mean pooling but in the current paper there is no activation function after conv and the pooling is just a multiplication of the node values.",
"Although for the experimental results they*ve considered both scenarios.",
"Actually, the general reasoning for this problem is hard, therefore, this drawback is not significant and the current contribution adds a reasonable amount of knowledge to the literature.",
"This paper studies the convolutional arithmetic circuits and shows how this model can address the inductive biases and how pooling can adjust these biases.",
"This interesting contribution gives an intuition about how deep network can capture the correlation between the input variables when its size is polynomial but and correlation is exponential.",
"It worth to note that although the authors tried to express their notation and definitions carefully where they were very successful, it would be helpful if they elaborate a bit more on their definitions, expressions, and conclusions in the sense to make them more accessible."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 5,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 4,
"suggestion_and_solution": 1,
"total": 7
} | 0.142857 | 0 | 0.285714 | 0.714286 | 0.285714 | 0.142857 | 0.571429 | 0.142857 | 7 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.285714 | 1.717547 | 0.568167 |
Inductive Bias of Deep Convolutional Networks through Pooling Geometry | Our formal understanding of the inductive bias that drives the success of convolutional networks on computer vision tasks is limited. In particular, it is unclear what makes hypotheses spaces born from convolution and pooling operations so suitable for natural images. In this paper we study the ability of convolutional networks to model correlations among regions of their input. We theoretically analyze convolutional arithmetic circuits, and empirically validate our findings on other types of convolutional networks as well. Correlations are formalized through the notion of separation rank, which for a given partition of the input, measures how far a function is from being separable. We show that a polynomially sized deep network supports exponentially high separation ranks for certain input partitions, while being limited to polynomial separation ranks for others. The network*s pooling geometry effectively determines which input partitions are favored, thus serves as a means for controlling the inductive bias. Contiguous pooling windows as commonly employed in practice favor interleaved partitions over coarse ones, orienting the inductive bias towards the statistics of natural images. Other pooling schemes lead to different preferences, and this allows tailoring the network to data that departs from the usual domain of natural imagery. In addition to analyzing deep networks, we show that shallow ones support only linear separation ranks, and by this gain insight into the benefit of functions brought forth by depth - they are able to efficiently model strong correlation under favored partitions of the input. | This paper addresses the question of which functions are well suited to deep networks, as opposed to shallow networks. The basic intuition is convincing and fairly straightforward. Pooling operations bring together information. When information is correlated, it can be more efficiently used if the geometry of pooling regions matches the correlations so that it can be brought together more efficiently. Shallow networks without layers of localized pooling lack this mechanism to combine correlated information efficiently. The theoretical results are focused on convolutional arithmetic circuits, building on prior theoretical results of the authors. The results make use of the interesting technical notion of separability, which in some sense measures the degree to which a function can be represented as the composition of independent functions. Because separability is measured relative to a partition of the input, it is an appropriate mechanism for measuring the complexity of functions relative to a particular geometry of pooling operations. Many of the technical notions are pretty intuitive, although the tensor analysis is pretty terse and not easy to follow without knowledge of the authors’ prior work. In some sense the comparison between deep and shallow networks is somewhat misleading, since the shallow networks lack a hierarchical pooling structure. For example, a shallow convolutional network with RELU and max pooling does not really make sense, since the max occurs over the whole image. So it seems that the paper is really more of an analysis of the effect of pooling vs. not having pooling. For example, it is not clear that a deep CNN without pooling would be any more efficient than a shallow network, from this work. 
It is not clear how much the theoretical results depend on the use of a model with product pooling, and how they might be extended to the more common max pooling. Even if theoretical results are difficult to derive in this case, simple illustrative examples might be helpful. In fact, if the authors prepare a longer version of the paper for a journal I think the results could be made more intuitive if they could add a simple toy example of a function that can be efficiently represented with a convolutional arithmetic circuit when the pooling structure fits the correlations, and perhaps showing also how this could be represented with a convolutional network with RELU and max pooling. I would also appreciate a more explicit discussion of how the depth of a deep network affects the separability of functions that can be represented. A shallow network doesn’t have local pooling, so the difference between deep and shallow if perhaps mostly one of pooling vs. not pooling. However, practitioners find that very deep networks seem to be more effective than “deep” networks with only a few convolutional layers and pooling. The paper does not explicitly discuss whether their results provide insight into this behavior. Overall, I think that the paper attacks an important problem in an interesting way. It is not so convincing that this really gets to the heart of why depth is so important, because of the theoretical limitation to arithmetic circuits, and because the comparison is to shallow networks that are without localized pooling. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nInductive Bias of Deep Convolutional Networks through Pooling Geometry\nOur formal understanding of the inductive bias that drives the success of convolutional networks on computer vision tasks is limited. In particular, it is unclear what makes hypotheses spaces born from convolution and pooling operations so suitable for natural images. In this paper we study the ability of convolutional networks to model correlations among regions of their input. We theoretically analyze convolutional arithmetic circuits, and empirically validate our findings on other types of convolutional networks as well. Correlations are formalized through the notion of separation rank, which for a given partition of the input, measures how far a function is from being separable. We show that a polynomially sized deep network supports exponentially high separation ranks for certain input partitions, while being limited to polynomial separation ranks for others. The network*s pooling geometry effectively determines which input partitions are favored, thus serves as a means for controlling the inductive bias. Contiguous pooling windows as commonly employed in practice favor interleaved partitions over coarse ones, orienting the inductive bias towards the statistics of natural images. Other pooling schemes lead to different preferences, and this allows tailoring the network to data that departs from the usual domain of natural imagery. In addition to analyzing deep networks, we show that shallow ones support only linear separation ranks, and by this gain insight into the benefit of functions brought forth by depth - they are able to efficiently model strong correlation under favored partitions of the input.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper addresses the question of which functions are well suited to deep networks, as opposed to shallow networks. The basic intuition is convincing and fairly straightforward. Pooling operations bring together information. When information is correlated, it can be more efficiently used if the geometry of pooling regions matches the correlations so that it can be brought together more efficiently. Shallow networks without layers of localized pooling lack this mechanism to combine correlated information efficiently. The theoretical results are focused on convolutional arithmetic circuits, building on prior theoretical results of the authors. The results make use of the interesting technical notion of separability, which in some sense measures the degree to which a function can be represented as the composition of independent functions. Because separability is measured relative to a partition of the input, it is an appropriate mechanism for measuring the complexity of functions relative to a particular geometry of pooling operations. Many of the technical notions are pretty intuitive, although the tensor analysis is pretty terse and not easy to follow without knowledge of the authors’ prior work. In some sense the comparison between deep and shallow networks is somewhat misleading, since the shallow networks lack a hierarchical pooling structure. For example, a shallow convolutional network with RELU and max pooling does not really make sense, since the max occurs over the whole image. So it seems that the paper is really more of an analysis of the effect of pooling vs. not having pooling. For example, it is not clear that a deep CNN without pooling would be any more efficient than a shallow network, from this work. It is not clear how much the theoretical results depend on the use of a model with product pooling, and how they might be extended to the more common max pooling. Even if theoretical results are difficult to derive in this case, simple illustrative examples might be helpful. 
In fact, if the authors prepare a longer version of the paper for a journal I think the results could be made more intuitive if they could add a simple toy example of a function that can be efficiently represented with a convolutional arithmetic circuit when the pooling structure fits the correlations, and perhaps showing also how this could be represented with a convolutional network with RELU and max pooling. I would also appreciate a more explicit discussion of how the depth of a deep network affects the separability of functions that can be represented. A shallow network doesn’t have local pooling, so the difference between deep and shallow if perhaps mostly one of pooling vs. not pooling. However, practitioners find that very deep networks seem to be more effective than “deep” networks with only a few convolutional layers and pooling. The paper does not explicitly discuss whether their results provide insight into this behavior. Overall, I think that the paper attacks an important problem in an interesting way. It is not so convincing that this really gets to the heart of why depth is so important, because of the theoretical limitation to arithmetic circuits, and because the comparison is to shallow networks that are without localized pooling.",
"role": "assistant"
}
]
| [
"This paper addresses the question of which functions are well suited to deep networks, as opposed to shallow networks.",
"The basic intuition is convincing and fairly straightforward.",
"Pooling operations bring together information.",
"When information is correlated, it can be more efficiently used if the geometry of pooling regions matches the correlations so that it can be brought together more efficiently.",
"Shallow networks without layers of localized pooling lack this mechanism to combine correlated information efficiently.",
"The theoretical results are focused on convolutional arithmetic circuits, building on prior theoretical results of the authors.",
"The results make use of the interesting technical notion of separability, which in some sense measures the degree to which a function can be represented as the composition of independent functions.",
"Because separability is measured relative to a partition of the input, it is an appropriate mechanism for measuring the complexity of functions relative to a particular geometry of pooling operations.",
"Many of the technical notions are pretty intuitive, although the tensor analysis is pretty terse and not easy to follow without knowledge of the authors’ prior work.",
"In some sense the comparison between deep and shallow networks is somewhat misleading, since the shallow networks lack a hierarchical pooling structure.",
"For example, a shallow convolutional network with RELU and max pooling does not really make sense, since the max occurs over the whole image.",
"So it seems that the paper is really more of an analysis of the effect of pooling vs. not having pooling.",
"For example, it is not clear that a deep CNN without pooling would be any more efficient than a shallow network, from this work.",
"It is not clear how much the theoretical results depend on the use of a model with product pooling, and how they might be extended to the more common max pooling.",
"Even if theoretical results are difficult to derive in this case, simple illustrative examples might be helpful.",
"In fact, if the authors prepare a longer version of the paper for a journal I think the results could be made more intuitive if they could add a simple toy example of a function that can be efficiently represented with a convolutional arithmetic circuit when the pooling structure fits the correlations, and perhaps showing also how this could be represented with a convolutional network with RELU and max pooling.",
"I would also appreciate a more explicit discussion of how the depth of a deep network affects the separability of functions that can be represented.",
"A shallow network doesn’t have local pooling, so the difference between deep and shallow if perhaps mostly one of pooling vs. not pooling.",
"However, practitioners find that very deep networks seem to be more effective than “deep” networks with only a few convolutional layers and pooling.",
"The paper does not explicitly discuss whether their results provide insight into this behavior.",
"Overall, I think that the paper attacks an important problem in an interesting way.",
"It is not so convincing that this really gets to the heart of why depth is so important, because of the theoretical limitation to arithmetic circuits, and because the comparison is to shallow networks that are without localized pooling."
] | {
"criticism": 8,
"example": 1,
"importance_and_relevance": 4,
"materials_and_methods": 13,
"praise": 5,
"presentation_and_reporting": 3,
"results_and_discussion": 10,
"suggestion_and_solution": 4,
"total": 22
} | 0.363636 | 0.045455 | 0.181818 | 0.590909 | 0.227273 | 0.136364 | 0.454545 | 0.181818 | 22 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.181818 | 0.903441 | 1.278377 |
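A related consistency property, again inferred rather than documented: total appears to equal the length of review_sentences, while the per-category counts can sum to more than total because one sentence may carry several labels. A minimal sketch of the check, under that assumption:

```python
from typing import Dict, List

def check_record(review_sentences: List[str], criteria_count: Dict[str, int]) -> Dict[str, int]:
    """Sanity-check one record; raises if `total` disagrees with the sentence list."""
    total = criteria_count["total"]
    if total != len(review_sentences):
        raise ValueError(f"total={total}, but {len(review_sentences)} sentences found")
    # Summed label counts may exceed `total`: the annotation is multi-label.
    label_assignments = sum(v for k, v in criteria_count.items() if k != "total")
    return {"sentences": total, "label_assignments": label_assignments}
```

For the record just above it returns 22 sentences carrying 48 label assignments, about 2.2 labels per sentence, which matches the reward_value reading suggested earlier.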
Semi-Supervised Classification with Graph Convolutional Networks | We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin. | The paper develops a simple and reasonable algorithm for graph node prediction/classification. The formulations are very intuitive and lead to a simple CNN based training and can easily leverage existing GPU speedups. Experiments are thorough and compare with many reasonable baselines on large and real benchmark datasets. Although, I am not quite aware of the literature on other methods and there may be similar alternatives as link and node prediction is an old problem. I still think the approach is quite simple and reasonably supported by good evaluations. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSemi-Supervised Classification with Graph Convolutional Networks\nWe present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper develops a simple and reasonable algorithm for graph node prediction/classification. The formulations are very intuitive and lead to a simple CNN based training and can easily leverage existing GPU speedups. Experiments are thorough and compare with many reasonable baselines on large and real benchmark datasets. Although, I am not quite aware of the literature on other methods and there may be similar alternatives as link and node prediction is an old problem. I still think the approach is quite simple and reasonably supported by good evaluations.",
"role": "assistant"
}
]
| [
"The paper develops a simple and reasonable algorithm for graph node prediction/classification.",
"The formulations are very intuitive and lead to a simple CNN based training and can easily leverage existing GPU speedups.",
"Experiments are thorough and compare with many reasonable baselines on large and real benchmark datasets.",
"Although, I am not quite aware of the literature on other methods and there may be similar alternatives as link and node prediction is an old problem.",
"I still think the approach is quite simple and reasonably supported by good evaluations."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 5,
"praise": 4,
"presentation_and_reporting": 0,
"results_and_discussion": 0,
"suggestion_and_solution": 0,
"total": 5
} | 0.2 | 0 | 0.2 | 1 | 0.8 | 0 | 0 | 0 | 5 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 2.2 | 1.189924 | 1.010076 |
Sparsely-Connected Neural Networks: Towards Efficient VLSI Implementation of Deep Neural Networks | Recently deep neural networks have received considerable attention due to their ability to extract and represent high-level abstractions in data sets. Deep neural networks such as fully-connected and convolutional neural networks have shown excellent performance on a wide range of recognition and classification tasks. However, their hardware implementations currently suffer from large silicon area and high power consumption due to the their high degree of complexity. The power/energy consumption of neural networks is dominated by memory accesses, the majority of which occur in fully-connected networks. In fact, they contain most of the deep neural network parameters. In this paper, we propose sparsely-connected networks, by showing that the number of connections in fully-connected networks can be reduced by up to 90% while improving the accuracy performance on three popular datasets (MNIST, CIFAR10 and SVHN). We then propose an efficient hardware architecture based on linear-feedback shift registers to reduce the memory requirements of the proposed sparsely-connected networks. The proposed architecture can save up to 90% of memory compared to the conventional implementations of fully-connected neural networks. Moreover, implementation results show up to 84% reduction in the energy consumption of a single neuron of the proposed sparsely-connected networks compared to a single neuron of fully-connected neural networks. | Experimental results look reasonable, validated on 3 tasks. References could be improved, for example I would rather see Rumelhart*s paper cited for back-propagation than the Deep Learning book. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSparsely-Connected Neural Networks: Towards Efficient VLSI Implementation of Deep Neural Networks\nRecently deep neural networks have received considerable attention due to their ability to extract and represent high-level abstractions in data sets. Deep neural networks such as fully-connected and convolutional neural networks have shown excellent performance on a wide range of recognition and classification tasks. However, their hardware implementations currently suffer from large silicon area and high power consumption due to the their high degree of complexity. The power/energy consumption of neural networks is dominated by memory accesses, the majority of which occur in fully-connected networks. In fact, they contain most of the deep neural network parameters. In this paper, we propose sparsely-connected networks, by showing that the number of connections in fully-connected networks can be reduced by up to 90% while improving the accuracy performance on three popular datasets (MNIST, CIFAR10 and SVHN). We then propose an efficient hardware architecture based on linear-feedback shift registers to reduce the memory requirements of the proposed sparsely-connected networks. The proposed architecture can save up to 90% of memory compared to the conventional implementations of fully-connected neural networks. Moreover, implementation results show up to 84% reduction in the energy consumption of a single neuron of the proposed sparsely-connected networks compared to a single neuron of fully-connected neural networks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nExperimental results look reasonable, validated on 3 tasks. References could be improved, for example I would rather see Rumelhart*s paper cited for back-propagation than the Deep Learning book.",
"role": "assistant"
}
]
| [
"Experimental results look reasonable, validated on 3 tasks.",
"References could be improved, for example I would rather see Rumelhart*s paper cited for back-propagation than the Deep Learning book."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 0,
"materials_and_methods": 1,
"praise": 1,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 2
} | 0 | 0 | 0 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 2 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 2.5 | 0.590326 | 1.909674 |
Sparsely-Connected Neural Networks: Towards Efficient VLSI Implementation of Deep Neural Networks | Recently deep neural networks have received considerable attention due to their ability to extract and represent high-level abstractions in data sets. Deep neural networks such as fully-connected and convolutional neural networks have shown excellent performance on a wide range of recognition and classification tasks. However, their hardware implementations currently suffer from large silicon area and high power consumption due to the their high degree of complexity. The power/energy consumption of neural networks is dominated by memory accesses, the majority of which occur in fully-connected networks. In fact, they contain most of the deep neural network parameters. In this paper, we propose sparsely-connected networks, by showing that the number of connections in fully-connected networks can be reduced by up to 90% while improving the accuracy performance on three popular datasets (MNIST, CIFAR10 and SVHN). We then propose an efficient hardware architecture based on linear-feedback shift registers to reduce the memory requirements of the proposed sparsely-connected networks. The proposed architecture can save up to 90% of memory compared to the conventional implementations of fully-connected neural networks. Moreover, implementation results show up to 84% reduction in the energy consumption of a single neuron of the proposed sparsely-connected networks compared to a single neuron of fully-connected neural networks. | From my original comments: The results looks good but the baselines proposed are quite bad. For instance in the table 2 *Misclassification rate for a 784-1024-1024-1024-10 * the result for the FC with floating point is 1.33%. Well far from what we can obtain from this topology, near to 0.8%. I would like to see *significant* compression levels on state of the art results or good baselines. I can get 0,6% with two FC hidden layers... In CIFAR-10 experiments, i do not understand why *Sparsely-Connected 90% + Single-Precision Floating-Point* is worse than *Sparsely-Connected 90% + BinaryConnect*. So it is better to use binary than float. Again i think that in the experiments the authors are not using all the techniques that can be easily applied to float but not to binary (gaussian noise or other regularizations). Therefore under my point of view the comparison between float and binary is not fair. This is a critic also for the original papers about binary and ternary precision. In fact with this convolutional network, floating (standard) precision we can get lower that 9% of error rate. Again bad baselines. ---- The authors reply still does not convince me. I still think that the same technique should be applied on more challenging scenarios. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? 
Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSparsely-Connected Neural Networks: Towards Efficient VLSI Implementation of Deep Neural Networks\nRecently deep neural networks have received considerable attention due to their ability to extract and represent high-level abstractions in data sets. Deep neural networks such as fully-connected and convolutional neural networks have shown excellent performance on a wide range of recognition and classification tasks. However, their hardware implementations currently suffer from large silicon area and high power consumption due to the their high degree of complexity. The power/energy consumption of neural networks is dominated by memory accesses, the majority of which occur in fully-connected networks. In fact, they contain most of the deep neural network parameters. In this paper, we propose sparsely-connected networks, by showing that the number of connections in fully-connected networks can be reduced by up to 90% while improving the accuracy performance on three popular datasets (MNIST, CIFAR10 and SVHN). We then propose an efficient hardware architecture based on linear-feedback shift registers to reduce the memory requirements of the proposed sparsely-connected networks. The proposed architecture can save up to 90% of memory compared to the conventional implementations of fully-connected neural networks. Moreover, implementation results show up to 84% reduction in the energy consumption of a single neuron of the proposed sparsely-connected networks compared to a single neuron of fully-connected neural networks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nFrom my original comments: The results looks good but the baselines proposed are quite bad. For instance in the table 2 *Misclassification rate for a 784-1024-1024-1024-10 * the result for the FC with floating point is 1.33%. Well far from what we can obtain from this topology, near to 0.8%. I would like to see *significant* compression levels on state of the art results or good baselines. I can get 0,6% with two FC hidden layers... In CIFAR-10 experiments, i do not understand why *Sparsely-Connected 90% + Single-Precision Floating-Point* is worse than *Sparsely-Connected 90% + BinaryConnect*. So it is better to use binary than float. Again i think that in the experiments the authors are not using all the techniques that can be easily applied to float but not to binary (gaussian noise or other regularizations). Therefore under my point of view the comparison between float and binary is not fair. This is a critic also for the original papers about binary and ternary precision. In fact with this convolutional network, floating (standard) precision we can get lower that 9% of error rate. Again bad baselines. ---- The authors reply still does not convince me. I still think that the same technique should be applied on more challenging scenarios.",
"role": "assistant"
}
]
| [
"From my original comments: The results looks good but the baselines proposed are quite bad.",
"For instance in the table 2 *Misclassification rate for a 784-1024-1024-1024-10 * the result for the FC with floating point is 1.33%.",
"Well far from what we can obtain from this topology, near to 0.8%.",
"I would like to see *significant* compression levels on state of the art results or good baselines.",
"I can get 0,6% with two FC hidden layers...",
"In CIFAR-10 experiments, i do not understand why *Sparsely-Connected 90% + Single-Precision Floating-Point* is worse than *Sparsely-Connected 90% + BinaryConnect*.",
"So it is better to use binary than float.",
"Again i think that in the experiments the authors are not using all the techniques that can be easily applied to float but not to binary (gaussian noise or other regularizations).",
"Therefore under my point of view the comparison between float and binary is not fair.",
"This is a critic also for the original papers about binary and ternary precision.",
"In fact with this convolutional network, floating (standard) precision we can get lower that 9% of error rate.",
"Again bad baselines.",
"---- The authors reply still does not convince me.",
"I still think that the same technique should be applied on more challenging scenarios."
] | {
"criticism": 7,
"example": 1,
"importance_and_relevance": 0,
"materials_and_methods": 8,
"praise": 2,
"presentation_and_reporting": 3,
"results_and_discussion": 6,
"suggestion_and_solution": 3,
"total": 14
} | 0.5 | 0.071429 | 0 | 0.571429 | 0.142857 | 0.214286 | 0.428571 | 0.214286 | 14 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 2.142857 | 2.127075 | 0.015782 |
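Each record also stores the review twice: as a standalone field and inside the two-turn conversations list, where the assistant message prepends the <think> trace. Recovering the clean review from a conversation therefore amounts to stripping that trace. A minimal sketch, assuming every record follows the one-user/one-assistant pattern seen throughout this dump:

```python
import re
from typing import Dict, List

# Non-greedy match over the embedded thinking trace, including newlines.
THINK_BLOCK = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)

def review_from_conversation(conversations: List[Dict[str, str]]) -> str:
    """Return the review text from a record's two-turn conversation."""
    # The user turn holds the title + abstract prompt; the assistant turn
    # holds the thinking trace followed by the actual review.
    assistant = next(m for m in conversations if m["role"] == "assistant")
    return THINK_BLOCK.sub("", assistant["content"]).strip()
```

The result should match the record's standalone review field up to surrounding whitespace, which makes this a cheap integrity check when loading the data.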
Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning algorithms while virtual adversarial training is able to extend supervised learning algorithms to the semi-supervised setting. However, both methods require making small perturbations to numerous entries of the input vector, which is inappropriate for sparse high-dimensional inputs such as one-hot word representations. We extend adversarial and virtual adversarial training to the text domain by applying perturbations to the word embeddings in a recurrent neural network rather than to the original input itself. The proposed method achieves state of the art results on multiple benchmark semi-supervised and purely supervised tasks. We provide visualizations and analysis showing that the learned word embeddings have improved in quality and that while training, the model is less prone to overfitting. | This paper applies the idea of the adversarial training and virtual adversarial training to the LSTM-based model in the text context. The paper is in general well written and easy to follow. Extending the idea of the adversarial training to the text tasks is simple but non-trivial. Overall the paper is worth to publish. I only have a minor comment: it is also interesting to see how much adversarial training can help in the performance of RNN, which is a simpler model and may be easier to analyze. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAdversarial Training Methods for Semi-Supervised Text Classification\nAdversarial training provides a means of regularizing supervised learning algorithms while virtual adversarial training is able to extend supervised learning algorithms to the semi-supervised setting. However, both methods require making small perturbations to numerous entries of the input vector, which is inappropriate for sparse high-dimensional inputs such as one-hot word representations. We extend adversarial and virtual adversarial training to the text domain by applying perturbations to the word embeddings in a recurrent neural network rather than to the original input itself. The proposed method achieves state of the art results on multiple benchmark semi-supervised and purely supervised tasks. We provide visualizations and analysis showing that the learned word embeddings have improved in quality and that while training, the model is less prone to overfitting.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper applies the idea of the adversarial training and virtual adversarial training to the LSTM-based model in the text context. The paper is in general well written and easy to follow. Extending the idea of the adversarial training to the text tasks is simple but non-trivial. Overall the paper is worth to publish. I only have a minor comment: it is also interesting to see how much adversarial training can help in the performance of RNN, which is a simpler model and may be easier to analyze.",
"role": "assistant"
}
]
| [
"This paper applies the idea of the adversarial training and virtual adversarial training to the LSTM-based model in the text context.",
"The paper is in general well written and easy to follow.",
"Extending the idea of the adversarial training to the text tasks is simple but non-trivial.",
"Overall the paper is worth to publish.",
"I only have a minor comment: it is also interesting to see how much adversarial training can help in the performance of RNN, which is a simpler model and may be easier to analyze."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 2,
"praise": 2,
"presentation_and_reporting": 3,
"results_and_discussion": 0,
"suggestion_and_solution": 1,
"total": 5
} | 0.2 | 0 | 0.4 | 0.4 | 0.4 | 0.6 | 0 | 0.2 | 5 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 2.2 | 1.189924 | 1.010076 |
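The record above concerns adversarial training applied to word embeddings rather than to raw one-hot inputs. Below is a minimal sketch of the core perturbation step, assuming a PyTorch model and that `embedded` was built with gradients enabled; the function name and the default `epsilon` are illustrative, since the paper tunes the perturbation norm per task:

```python
import torch

def adversarial_perturbation(embedded, loss, epsilon=5.0):
    # embedded: (batch, seq_len, dim) tensor of embedded tokens that `loss`
    # was computed from; it must require grad so autograd can reach it.
    grad, = torch.autograd.grad(loss, embedded, retain_graph=True)
    # One perturbation per example, L2-normalized to norm epsilon.
    norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1)
    return epsilon * grad / norm
```

The adversarial loss is then recomputed on `embedded + r` with `r = adversarial_perturbation(...)`; virtual adversarial training replaces the supervised loss with a KL divergence term, so the same trick applies to unlabeled data.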
Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks | There is a lot of research interest in encoding variable length sentences into fixed length vectors, in a way that preserves the sentence meanings. Two common methods include representations based on averaging word vectors, and representations based on the hidden states of recurrent neural networks such as LSTMs. The sentence vectors are used as features for subsequent machine learning tasks or for pre-training in the context of deep learning. However, not much is known about the properties that are encoded in these sentence representations and about the language information they capture. We propose a framework that facilitates better understanding of the encoded representations. We define prediction tasks around isolated aspects of sentence structure (namely sentence length, word content, and word order), and score representations by the ability to train a classifier to solve each prediction task when using the representation as input. We demonstrate the potential contribution of the approach by analyzing different sentence representation mechanisms. The analysis sheds light on the relative strengths of different sentence embedding methods with respect to these low level prediction tasks, and on the effect of the encoded vector’s dimensionality on the resulting representations. | The authors present a methodology for analyzing sentence embedding techniques by checking how much the embeddings preserve information about sentence length, word content, and word order. They examine several popular embedding methods including autoencoding LSTMs, averaged word vectors, and skip-thought vectors. The experiments are thorough and provide interesting insights into the representational power of common sentence embedding strategies, such as the fact that word ordering is surprisingly low-entropy conditioned on word content. Exploring what sort of information is encoded in representation learning methods for NLP is an important and under-researched area. For example, the tide of word-embeddings research was mostly stemmed after a thread of careful experimental results showing most embeddings to be essentially equivalent, culminating in *Improving Distributional Similarity with Lessons Learned from Word Embeddings* by Levy, Goldberg, and Dagan. As representation learning becomes even more important in NLP, this sort of research will be even more important. While this paper makes a valuable contribution in setting out and exploring a methodology for evaluating sentence embeddings, the evaluations themselves are quite simple and do not necessarily correlate with real-world desiderata for sentence embeddings (as the authors note in other comments, performance on these tasks is not a normative measure of embedding quality). For example, as the authors note, the ability of the averaged vector to encode sentence length is trivially to be expected given the central limit theorem (or more accurately, concentration inequalities like Hoeffding's inequality). The word-order experiments were interesting. A relevant citation for this sort of conditional ordering procedure is *Generating Text with Recurrent Neural Networks* by Sutskever, Martens, and Hinton, who refer to the conversion of a bag of words into a sentence as *debagging*. Although this is just a first step toward a better understanding of sentence embeddings, it is an important one and I recommend this paper for publication. 
 | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. 
| iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nFine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks\nThere is a lot of research interest in encoding variable length sentences into fixed length vectors, in a way that preserves the sentence meanings. Two common methods include representations based on averaging word vectors, and representations based on the hidden states of recurrent neural networks such as LSTMs. The sentence vectors are used as features for subsequent machine learning tasks or for pre-training in the context of deep learning. However, not much is known about the properties that are encoded in these sentence representations and about the language information they capture. We propose a framework that facilitates better understanding of the encoded representations. We define prediction tasks around isolated aspects of sentence structure (namely sentence length, word content, and word order), and score representations by the ability to train a classifier to solve each prediction task when using the representation as input. We demonstrate the potential contribution of the approach by analyzing different sentence representation mechanisms. The analysis sheds light on the relative strengths of different sentence embedding methods with respect to these low level prediction tasks, and on the effect of the encoded vector’s dimensionality on the resulting representations.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors present a methodology for analyzing sentence embedding techniques by checking how much the embeddings preserve information about sentence length, word content, and word order. They examine several popular embedding methods including autoencoding LSTMs, averaged word vectors, and skip-thought vectors. The experiments are thorough and provide interesting insights into the representational power of common sentence embedding strategies, such as the fact that word ordering is surprisingly low-entropy conditioned on word content. Exploring what sort of information is encoded in representation learning methods for NLP is an important and under-researched area. For example, the tide of word-embeddings research was mostly stemmed after a thread of careful experimental results showing most embeddings to be essentially equivalent, culminating in *Improving Distributional Similarity with Lessons Learned from Word Embeddings* by Levy, Goldberg, and Dagan. As representation learning becomes even more important in NLP this sort of research will be even more important. While this paper makes a valuable contribution in setting out and exploring a methodology for evaluating sentence embeddings, the evaluations themselves are quite simple and do not necessarily correlate with real-world desiderata for sentence embeddings (as the authors note in other comments, performance on these tasks is not a normative measure of embedding quality). For example, as the authors note, the ability of the averaged vector to encode sentence length is trivially to be expected given the central limit theorem (or more accurately, concentration inequalities like Hoeffding*s inequality). The word-order experiments were interesting. A relevant citation for this sort of conditional ordering procedure is *Generating Text with Recurrent Neural Networks* by Sutskever, Martens, and Hinton, who refer to the conversion of a bag of words into a sentence as *debagging.* Although this is just a first step in better understanding of sentence embeddings, it is an important one and I recommend this paper for publication.",
"role": "assistant"
}
]
| [
"The authors present a methodology for analyzing sentence embedding techniques by checking how much the embeddings preserve information about sentence length, word content, and word order.",
"They examine several popular embedding methods including autoencoding LSTMs, averaged word vectors, and skip-thought vectors.",
"The experiments are thorough and provide interesting insights into the representational power of common sentence embedding strategies, such as the fact that word ordering is surprisingly low-entropy conditioned on word content.",
"Exploring what sort of information is encoded in representation learning methods for NLP is an important and under-researched area.",
"For example, the tide of word-embeddings research was mostly stemmed after a thread of careful experimental results showing most embeddings to be essentially equivalent, culminating in *Improving Distributional Similarity with Lessons Learned from Word Embeddings* by Levy, Goldberg, and Dagan.",
"As representation learning becomes even more important in NLP this sort of research will be even more important.",
"While this paper makes a valuable contribution in setting out and exploring a methodology for evaluating sentence embeddings, the evaluations themselves are quite simple and do not necessarily correlate with real-world desiderata for sentence embeddings (as the authors note in other comments, performance on these tasks is not a normative measure of embedding quality).",
"For example, as the authors note, the ability of the averaged vector to encode sentence length is trivially to be expected given the central limit theorem (or more accurately, concentration inequalities like Hoeffding*s inequality).",
"The word-order experiments were interesting.",
"A relevant citation for this sort of conditional ordering procedure is *Generating Text with Recurrent Neural Networks* by Sutskever, Martens, and Hinton, who refer to the conversion of a bag of words into a sentence as *debagging.",
"* Although this is just a first step in better understanding of sentence embeddings, it is an important one and I recommend this paper for publication."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 6,
"materials_and_methods": 8,
"praise": 5,
"presentation_and_reporting": 7,
"results_and_discussion": 2,
"suggestion_and_solution": 1,
"total": 11
} | 0.090909 | 0 | 0.545455 | 0.727273 | 0.454545 | 0.636364 | 0.181818 | 0.090909 | 11 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.727273 | 2.664143 | 0.06313 |
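The reviews of this record keep referring to auxiliary prediction tasks; the sketch below shows what one such probe could look like for the sentence-length task, assuming precomputed sentence embeddings. The binning scheme, split, and classifier are illustrative stand-ins, not the paper's exact protocol:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def length_probe_accuracy(emb, lengths, n_bins=8, train_frac=0.8):
    # emb: (N, d) sentence embeddings; lengths: (N,) token counts.
    edges = np.quantile(lengths, np.linspace(0, 1, n_bins + 1)[1:-1])
    y = np.digitize(lengths, edges)               # length bin per sentence
    n_train = int(train_frac * len(y))
    clf = LogisticRegression(max_iter=1000).fit(emb[:n_train], y[:n_train])
    return clf.score(emb[n_train:], y[n_train:])  # probe accuracy
```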
Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks | There is a lot of research interest in encoding variable length sentences into fixed length vectors, in a way that preserves the sentence meanings. Two common methods include representations based on averaging word vectors, and representations based on the hidden states of recurrent neural networks such as LSTMs. The sentence vectors are used as features for subsequent machine learning tasks or for pre-training in the context of deep learning. However, not much is known about the properties that are encoded in these sentence representations and about the language information they capture. We propose a framework that facilitates better understanding of the encoded representations. We define prediction tasks around isolated aspects of sentence structure (namely sentence length, word content, and word order), and score representations by the ability to train a classifier to solve each prediction task when using the representation as input. We demonstrate the potential contribution of the approach by analyzing different sentence representation mechanisms. The analysis sheds light on the relative strengths of different sentence embedding methods with respect to these low level prediction tasks, and on the effect of the encoded vector’s dimensionality on the resulting representations. | This paper analyzes various unsupervised sentence embedding approaches by means of a set of auxiliary prediction tasks. By examining how well classifiers can predict word order, word content, and sentence length, the authors aim to assess how much and what type of information is captured by the different embedding models. The main focus is on a comparison between an encoder-decoder model (ED) and a permutation-invariant model, CBOW. (There is also an analysis of skip-thought vectors, but since it was trained on a different corpus it is hard to compare). There are several interesting and perhaps counter-intuitive results that emerge from this analysis and the authors do a nice job of examining those results and, for the most part, explaining them. However, I found the discussion of the word-order experiment rather unsatisfying. It seems to me that the appropriate question should have been something like, *How well does model X do compared to the theoretical upper bound which can be deduced from natural language statistics?* This is investigated from one angle in Section 7, but I would have preferred to see the effect of natural language statistics discussed up front rather than presented as the explanation for a *surprising* observation. I had a similar reaction to the word-order experiments. Most of the interesting results, in my opinion, are about the ED model. It is fascinating that the LSTM encoder does not seem to rely on natural-language ordering statistics -- it seems like doing so should be a big win in terms of per-parameter expressivity. I also think that it's strange that word content accuracy begins to drop for high-dimensional embeddings. I suppose this could be investigated by handicapping the decoder. Overall, this is a very nice paper investigating some aspects of the information content stored in various types of sentence embeddings. I recommend acceptance. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. 
I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nFine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks\nThere is a lot of research interest in encoding variable length sentences into fixed length vectors, in a way that preserves the sentence meanings. Two common methods include representations based on averaging word vectors, and representations based on the hidden states of recurrent neural networks such as LSTMs. The sentence vectors are used as features for subsequent machine learning tasks or for pre-training in the context of deep learning. However, not much is known about the properties that are encoded in these sentence representations and about the language information they capture. We propose a framework that facilitates better understanding of the encoded representations. We define prediction tasks around isolated aspects of sentence structure (namely sentence length, word content, and word order), and score representations by the ability to train a classifier to solve each prediction task when using the representation as input. We demonstrate the potential contribution of the approach by analyzing different sentence representation mechanisms. The analysis sheds light on the relative strengths of different sentence embedding methods with respect to these low level prediction tasks, and on the effect of the encoded vector’s dimensionality on the resulting representations.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper analyzes various unsupervised sentence embedding approaches by means of a set of auxiliary prediction tasks. By examining how well classifiers can predict word order, word content, and sentence length, the authors aim to assess how much and what type of information is captured by the different embedding models. The main focus is on a comparison between and encoder-decoder model (ED) and a permutation-invariant model, CBOW. (There is also an analysis of skip-thought vectors, but since it was trained on a different corpus it is hard to compare). There are several interesting and perhaps counter-intuitive results that emerge from this analysis and the authors do a nice job of examining those results and, for the most part, explaining them. However, I found the discussion of the word-order experiment rather unsatisfying. It seems to me that the appropriate question should have been something like, *How well does model X do compared to the theoretical upper bound which can be deduced from natural language statistics?* This is investigated from one angle in Section 7, but I would have preferred to the effect of natural language statistics discussed up front rather than presented as the explanation to a *surprising* observation. I had a similar reaction to the word-order experiments. Most of the interesting results, in my opinion, are about the ED model. It is fascinating that the LSTM encoder does not seem to rely on natural-language ordering statistics -- it seems like doing so should be a big win in terms of per-parameter expressivity. I also think that it*s strange that word content accuracy begins to drop for high-dimensional embeddings. I suppose this could be investigated by handicapping the decoder. Overall, this is a very nice paper investigating some aspects of the information content stored in various types of sentence embeddings. I recommend acceptance.",
"role": "assistant"
}
]
| [
"This paper analyzes various unsupervised sentence embedding approaches by means of a set of auxiliary prediction tasks.",
"By examining how well classifiers can predict word order, word content, and sentence length, the authors aim to assess how much and what type of information is captured by the different embedding models.",
"The main focus is on a comparison between and encoder-decoder model (ED) and a permutation-invariant model, CBOW.",
"(There is also an analysis of skip-thought vectors, but since it was trained on a different corpus it is hard to compare).",
"There are several interesting and perhaps counter-intuitive results that emerge from this analysis and the authors do a nice job of examining those results and, for the most part, explaining them.",
"However, I found the discussion of the word-order experiment rather unsatisfying.",
"It seems to me that the appropriate question should have been something like, *How well does model X do compared to the theoretical upper bound which can be deduced from natural language statistics?",
"* This is investigated from one angle in Section 7, but I would have preferred to the effect of natural language statistics discussed up front rather than presented as the explanation to a *surprising* observation.",
"I had a similar reaction to the word-order experiments.",
"Most of the interesting results, in my opinion, are about the ED model.",
"It is fascinating that the LSTM encoder does not seem to rely on natural-language ordering statistics -- it seems like doing so should be a big win in terms of per-parameter expressivity.",
"I also think that it*s strange that word content accuracy begins to drop for high-dimensional embeddings.",
"I suppose this could be investigated by handicapping the decoder.",
"Overall, this is a very nice paper investigating some aspects of the information content stored in various types of sentence embeddings.",
"I recommend acceptance."
] | {
"criticism": 3,
"example": 1,
"importance_and_relevance": 3,
"materials_and_methods": 11,
"praise": 4,
"presentation_and_reporting": 8,
"results_and_discussion": 5,
"suggestion_and_solution": 4,
"total": 15
} | 0.2 | 0.066667 | 0.2 | 0.733333 | 0.266667 | 0.533333 | 0.333333 | 0.266667 | 15 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.6 | 2.53687 | 0.06313 |
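For reference, the permutation-invariant CBOW baseline that this review contrasts with the encoder-decoder is plain word-vector averaging; a minimal sketch follows (the lookup-table shape is an assumption):

```python
import numpy as np

def cbow_embedding(token_ids, word_vectors):
    # word_vectors: (V, d) lookup table. Any reordering of token_ids yields
    # exactly the same vector, which is why recovering word order from CBOW
    # embeddings at all is the surprising finding.
    return word_vectors[np.asarray(token_ids)].mean(axis=0)
```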
Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks | There is a lot of research interest in encoding variable length sentences into fixed length vectors, in a way that preserves the sentence meanings. Two common methods include representations based on averaging word vectors, and representations based on the hidden states of recurrent neural networks such as LSTMs. The sentence vectors are used as features for subsequent machine learning tasks or for pre-training in the context of deep learning. However, not much is known about the properties that are encoded in these sentence representations and about the language information they capture. We propose a framework that facilitates better understanding of the encoded representations. We define prediction tasks around isolated aspects of sentence structure (namely sentence length, word content, and word order), and score representations by the ability to train a classifier to solve each prediction task when using the representation as input. We demonstrate the potential contribution of the approach by analyzing different sentence representation mechanisms. The analysis sheds light on the relative strengths of different sentence embedding methods with respect to these low level prediction tasks, and on the effect of the encoded vector’s dimensionality on the resulting representations. | This paper presents a set of experiments investigating what kinds of information are captured in common unsupervised approaches to sentence representation learning. The results are non-trivial and somewhat surprising. For example, they show that it is possible to reconstruct word order from bag of words representations, and they show that LSTM sentence autoencoders encode interpretable features even for randomly permuted nonsense sentences. Effective unsupervised sentence representation learning is an important and largely unsolved problem in NLP, and this kind of work seems like it should be straightforwardly helpful towards that end. In addition, the experimental paradigm presented here is likely more broadly applicable to a range of representation learning systems. Some of the results seem somewhat strange, but I see no major technical concerns, and think that they are informative. I recommend acceptance. One minor red flag: - The massive drop in CBOW performance in Figures 1b and 4b is not explained, and seems implausible enough to warrant serious further investigation. Can you be absolutely certain that those results would appear with a different codebase and different random seed implementing the same model? Fortunately, this point is largely orthogonal to the major results of the paper. Two writing comments: - I agree that the results with word order and CBOW are surprising, but I think it's slightly misleading to say that CBOW is predictive of word order. It doesn't represent word order at all, but it's possible to probabilistically reconstruct word order from the information that it does encode. - Saying that *LSTM auto-encoders are more effective at encoding word order than word content* doesn't really make sense. These two quantities aren't comparable. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. 
The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nFine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks\nThere is a lot of research interest in encoding variable length sentences into fixed length vectors, in a way that preserves the sentence meanings. Two common methods include representations based on averaging word vectors, and representations based on the hidden states of recurrent neural networks such as LSTMs. The sentence vectors are used as features for subsequent machine learning tasks or for pre-training in the context of deep learning. However, not much is known about the properties that are encoded in these sentence representations and about the language information they capture. We propose a framework that facilitates better understanding of the encoded representations. We define prediction tasks around isolated aspects of sentence structure (namely sentence length, word content, and word order), and score representations by the ability to train a classifier to solve each prediction task when using the representation as input. We demonstrate the potential contribution of the approach by analyzing different sentence representation mechanisms. The analysis sheds light on the relative strengths of different sentence embedding methods with respect to these low level prediction tasks, and on the effect of the encoded vector’s dimensionality on the resulting representations.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents a set of experiments investigating what kinds of information are captured in common unsupervised approaches to sentence representation learning. The results are non-trivial and somewhat surprising. For example, they show that it is possible to reconstruct word order from bag of words representations, and they show that LSTM sentence autoencoders encode interpretable features even for randomly permuted nonsense sentences. Effective unsupervised sentence representation learning is an important and largely unsolved problem in NLP, and this kind of work seems like it should be straightforwardly helpful towards that end. In addition, the experimental paradigm presented here is likely more broadly applicable to a range of representation learning systems. Some of the results seem somewhat strange, but I see no major technical concerns, and think that that they are informative. I recommend acceptance. One minor red flag: - The massive drop in CBOW performance in Figures 1b and 4b are not explained, and seem implausible enough to warrant serious further investigation. Can you be absolutely certain that those results would appear with a different codebase and different random seed implementing the same model? Fortunately, this point is largely orthogonal to the major results of the paper. Two writing comments: - I agree that the results with word order and CBOW are surprising, but I think it*s slightly misleading to say that CBOW is predictive of word order. It doesn*t represent word order at all, but it*s possible to probabilistically reconstruct word order from the information that it does encode. - Saying that *LSTM auto-encoders are more effective at encoding word order than word content* doesn*t really make sense. These two quantities aren*t comparable.",
"role": "assistant"
}
]
| [
"This paper presents a set of experiments investigating what kinds of information are captured in common unsupervised approaches to sentence representation learning.",
"The results are non-trivial and somewhat surprising.",
"For example, they show that it is possible to reconstruct word order from bag of words representations, and they show that LSTM sentence autoencoders encode interpretable features even for randomly permuted nonsense sentences.",
"Effective unsupervised sentence representation learning is an important and largely unsolved problem in NLP, and this kind of work seems like it should be straightforwardly helpful towards that end.",
"In addition, the experimental paradigm presented here is likely more broadly applicable to a range of representation learning systems.",
"Some of the results seem somewhat strange, but I see no major technical concerns, and think that that they are informative.",
"I recommend acceptance.",
"One minor red flag: - The massive drop in CBOW performance in Figures 1b and 4b are not explained, and seem implausible enough to warrant serious further investigation.",
"Can you be absolutely certain that those results would appear with a different codebase and different random seed implementing the same model?",
"Fortunately, this point is largely orthogonal to the major results of the paper.",
"Two writing comments: - I agree that the results with word order and CBOW are surprising, but I think it*s slightly misleading to say that CBOW is predictive of word order.",
"It doesn*t represent word order at all, but it*s possible to probabilistically reconstruct word order from the information that it does encode.",
"- Saying that *LSTM auto-encoders are more effective at encoding word order than word content* doesn*t really make sense.",
"These two quantities aren*t comparable."
] | {
"criticism": 6,
"example": 2,
"importance_and_relevance": 2,
"materials_and_methods": 4,
"praise": 2,
"presentation_and_reporting": 6,
"results_and_discussion": 5,
"suggestion_and_solution": 3,
"total": 14
} | 0.428571 | 0.142857 | 0.142857 | 0.285714 | 0.142857 | 0.428571 | 0.357143 | 0.214286 | 14 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.142857 | 2.127075 | 0.015782 |
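The review's point that word order can be probabilistically reconstructed from a bag of words can be made concrete with a brute-force "debagging" sketch: score every permutation of a small bag under a bigram language model, which is assumed to be given. With O(n!) permutations this is only feasible for short sentences:

```python
import itertools

def debag(bag, bigram_logprob):
    # bigram_logprob(a, b) is an assumed log p(b | a) from some language model.
    def score(seq):
        return sum(bigram_logprob(a, b) for a, b in zip(seq, seq[1:]))
    return max(itertools.permutations(bag), key=score)
```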
Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved their best language modeling performance only with very large hidden states and large vocabularies. Even then they struggle to predict rare or unseen words even if the context makes the prediction unambiguous. We introduce the pointer sentinel mixture architecture for neural sequence models which has the ability to either reproduce a word from the recent context or produce a word from a standard softmax classifier. Our pointer sentinel-LSTM model achieves state of the art language modeling performance on the Penn Treebank (70.9 perplexity) while using far fewer parameters than a standard softmax LSTM. In order to evaluate how well language models can exploit longer contexts and deal with more realistic vocabularies and corpora we also introduce the freely available WikiText corpus. | This work is basically a combined pointer network applied to language modelling. The clever point is that this paper aims at language modelling with longer context, where a memory of seen words (especially the rare words) would be very useful for predicting the rest of the sentence. Hence, a combination of a pointer network and a standard language model would balance copying seen words against predicting unseen words. Generally, as in the combined pointer networks applied to sentence compression, a vector representation of the source sequence would be used to compute the gate. This paper, instead, introduces a sentinel vector to carry out the mixture model, which is suitable in the case of language modelling. I would be interested in variations of the sentinel mixture implementation, though the current version has achieved very good results. In addition, the new WikiText language modelling dataset is very interesting. It could probably become a more standard dataset than the PTB dataset for evaluating continuously-updated language model benchmarks. Overall, this is a well-written paper. I recommend it for acceptance. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPointer Sentinel Mixture Models\nRecent neural network sequence models with softmax classifiers have achieved their best language modeling performance only with very large hidden states and large vocabularies. Even then they struggle to predict rare or unseen words even if the context makes the prediction unambiguous. We introduce the pointer sentinel mixture architecture for neural sequence models which has the ability to either reproduce a word from the recent context or produce a word from a standard softmax classifier. Our pointer sentinel-LSTM model achieves state of the art language modeling performance on the Penn Treebank (70.9 perplexity) while using far fewer parameters than a standard softmax LSTM. In order to evaluate how well language models can exploit longer contexts and deal with more realistic vocabularies and corpora we also introduce the freely available WikiText corpus.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis work is basically a combined pointer network applied on language modelling. The smart point is that this paper aims at language modelling with longer context, where a memory of seen words (especially the rare words) would be very useful for predicting the rest of the sentences. Hence, a combination of a pointer network and a standard language model would balance the copying seen words and predicting unseen words. Generally, such as the combined pointer networks applied in sentence compression, a vector representation of the source sequence would be used to compute the gate. This paper, instead, introduces a sentinel vector to carry out the mixture model, which is suitable in the case of language modelling. I would be interested in the variations of sentinel mixture implementation, though the current version has achieved very good results. In addition, the new WikiText language modelling dataset is very interesting. It probably can be a more standard dataset for evaluating the continuously-updated language model benchmarks than ptb dataset. Overall, this is a well-written paper. I recommend it to be accepted.",
"role": "assistant"
}
]
| [
"This work is basically a combined pointer network applied on language modelling.",
"The smart point is that this paper aims at language modelling with longer context, where a memory of seen words (especially the rare words) would be very useful for predicting the rest of the sentences.",
"Hence, a combination of a pointer network and a standard language model would balance the copying seen words and predicting unseen words.",
"Generally, such as the combined pointer networks applied in sentence compression, a vector representation of the source sequence would be used to compute the gate.",
"This paper, instead, introduces a sentinel vector to carry out the mixture model, which is suitable in the case of language modelling.",
"I would be interested in the variations of sentinel mixture implementation, though the current version has achieved very good results.",
"In addition, the new WikiText language modelling dataset is very interesting.",
"It probably can be a more standard dataset for evaluating the continuously-updated language model benchmarks than ptb dataset.",
"Overall, this is a well-written paper.",
"I recommend it to be accepted."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 8,
"praise": 4,
"presentation_and_reporting": 7,
"results_and_discussion": 1,
"suggestion_and_solution": 4,
"total": 10
} | 0 | 0 | 0.2 | 0.8 | 0.4 | 0.7 | 0.1 | 0.4 | 10 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.6 | 2.457958 | 0.142042 |
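A rough sketch of how the pointer sentinel mixture discussed above could combine a pointer distribution over the recent context with a standard softmax over the vocabulary. All names, shapes, and scoring details are assumptions for illustration; they do not reproduce the paper's exact parameterization:

```python
import torch
import torch.nn.functional as F

def pointer_sentinel_distribution(q, keys, context_ids, sentinel, vocab_logits):
    # q: (d,) query for the current step; keys: (L, d) representations of the
    # last L context words; context_ids: (L,) long tensor of their vocab ids;
    # sentinel: (d,) learned vector; vocab_logits: (V,) softmax-path logits.
    logits = torch.cat([keys @ q, (sentinel @ q).view(1)])
    attn = F.softmax(logits, dim=0)
    g = attn[-1]                 # the sentinel's share of mass gates the softmax path
    p_ptr = torch.zeros_like(vocab_logits).scatter_add(0, context_ids, attn[:-1])
    return g * F.softmax(vocab_logits, dim=0) + p_ptr
```

The design choice the review singles out is that the sentinel competes inside the pointer softmax itself, so the gate g is computed jointly with the attention weights rather than from a separate encoding of the source sequence.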
Understanding Trainable Sparse Coding with Matrix Factorization | Sparse coding is a core building block in many data analysis and machine learning pipelines. Typically it is solved by relying on generic optimization techniques, such as the Iterative Soft Thresholding Algorithm and its accelerated version (ISTA, FISTA). These methods are optimal in the class of first-order methods for non-smooth, convex functions. However, they do not exploit the particular structure of the problem at hand nor the input data distribution. An acceleration using neural networks, coined LISTA, was proposed in cite{Gregor10}, which showed empirically that one could achieve high quality estimates with few iterations by modifying the parameters of the proximal splitting appropriately. In this paper we study the reasons for such acceleration. Our mathematical analysis reveals that it is related to a specific matrix factorization of the Gram kernel of the dictionary, which attempts to nearly diagonalise the kernel with a basis that produces a small perturbation of the ball. When this factorization succeeds, we prove that the resulting splitting algorithm enjoys an improved convergence bound with respect to the non-adaptive version. Moreover, our analysis also shows that conditions for acceleration occur mostly at the beginning of the iterative process, consistent with numerical experiments. We further validate our analysis by showing that on dictionaries where this factorization does not exist, adaptive acceleration fails. | This work presents an analysis of LISTA, which originally proposes to accelerate sparse coding algorithms with some prior on the structure of the problem. The authors here propose a solid analysis of the acceleration performance of LISTA, using a specific matrix factorisation of the dictionary. The analysis is well structured, and provides interesting insights. It would have been good to tie these insights more closely to specific properties of the data or input distributions. The learned dictionary results in Section 3.3 are not very clear: is the dictionary learned with a sort of alternating minimisation strategy that would include LISTA as the sparse coding step? Or is it only the sparse coding that is studied, with a dictionary that has been learned a priori? Overall, the paper does not propose a new algorithm or representation, but provides key insights on a well-known and interesting acceleration method for sparse coding. This is quite a nice piece of work. The title, however, seems a bit confusing, as *neural sparse coding* is actually rather *LISTA*, or *neural network acceleration of sparse coding* - basically, it is not immediately clear what *neural sparse coding* means... | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? 
Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nUnderstanding Trainable Sparse Coding with Matrix Factorization\nSparse coding is a core building block in many data analysis and machine learning pipelines. Typically it is solved by relying on generic optimization techniques, such as the Iterative Soft Thresholding Algorithm and its accelerated version (ISTA, FISTA). These methods are optimal in the class of first-order methods for non-smooth, convex functions. However, they do not exploit the particular structure of the problem at hand nor the input data distribution. An acceleration using neural networks, coined LISTA, was proposed in cite{Gregor10}, which showed empirically that one could achieve high quality estimates with few iterations by modifying the parameters of the proximal splitting appropriately. In this paper we study the reasons for such acceleration. Our mathematical analysis reveals that it is related to a specific matrix factorization of the Gram kernel of the dictionary, which attempts to nearly diagonalise the kernel with a basis that produces a small perturbation of the ball. When this factorization succeeds, we prove that the resulting splitting algorithm enjoys an improved convergence bound with respect to the non-adaptive version. Moreover, our analysis also shows that conditions for acceleration occur mostly at the beginning of the iterative process, consistent with numerical experiments. We further validate our analysis by showing that on dictionaries where this factorization does not exist, adaptive acceleration fails.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis work presents an analysis of LISTA, which originally proposes to accelerate sparse coding algorithms with some prior on the structure of the problem. The authors here propose a solid analysis of the acceleration performance of LISTA, using a specific matrix factorisation of the dictionary. The analysis is well structured, and provides interesting insights. It would have been good to tie more closely these insights to specific properties of data or input distributions. The learned dictionary results in Section 3.3 are not very clear: is the dictionary learned with a sort of alternating minimisation strategy that would include LISTA as sparse coding step? Or is it only the sparse coding that is studied, with a dictionary that has been learned a priori? Overall, the paper does not propose a new algorithm and representation, but provides key insights on a well-known and interesting acceleration method on sparse coding. This is quite a nice work. The title seems however a bit confusing as *neural sparse coding* is actually rather *LISTA*, or *neural network acceleration of sparse coding* - basically, it is not immediate to understand what *neural sparse coding* means...",
"role": "assistant"
}
]
| [
"This work presents an analysis of LISTA, which originally proposes to accelerate sparse coding algorithms with some prior on the structure of the problem.",
"The authors here propose a solid analysis of the acceleration performance of LISTA, using a specific matrix factorisation of the dictionary.",
"The analysis is well structured, and provides interesting insights.",
"It would have been good to tie more closely these insights to specific properties of data or input distributions.",
"The learned dictionary results in Section 3.3 are not very clear: is the dictionary learned with a sort of alternating minimisation strategy that would include LISTA as sparse coding step?",
"Or is it only the sparse coding that is studied, with a dictionary that has been learned a priori?",
"Overall, the paper does not propose a new algorithm and representation, but provides key insights on a well-known and interesting acceleration method on sparse coding.",
"This is quite a nice work.",
"The title seems however a bit confusing as *neural sparse coding* is actually rather *LISTA*, or *neural network acceleration of sparse coding* - basically, it is not immediate to understand what *neural sparse coding* means..."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 4,
"materials_and_methods": 7,
"praise": 3,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 2,
"total": 9
} | 0.222222 | 0 | 0.444444 | 0.777778 | 0.333333 | 0.111111 | 0.111111 | 0.222222 | 9 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.222222 | 1.969703 | 0.252519 |
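To make the record above easier to follow, here is a minimal sketch of the gap it discusses between plain ISTA and a LISTA-style unrolled network with learned parameters. The dictionary, iteration counts, and unrolling depth are illustrative assumptions, not the reviewed paper's setup.

```python
import numpy as np

def soft_threshold(v, lam):
    # Proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista(x, D, lam, n_iters=100):
    """Plain ISTA for min_z 0.5*||x - D z||^2 + lam*||z||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    z = np.zeros(D.shape[1])
    for _ in range(n_iters):
        z = soft_threshold(z + D.T @ (x - D @ z) / L, lam / L)
    return z

def lista_forward(x, We, S, theta, n_layers=3):
    """LISTA-style unrolled network: We, S, theta are *learned*, whereas
    ISTA fixes them as D.T/L, I - D.T @ D / L, and lam/L respectively."""
    z = soft_threshold(We @ x, theta)
    for _ in range(n_layers - 1):
        z = soft_threshold(We @ x + S @ z, theta)
    return z
```

The factorization studied in the record amounts to asking when a learned `S` can nearly diagonalise `D.T @ D` so that a few such layers match many ISTA iterations.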
Tighter bounds lead to improved classifiers | The standard approach to supervised classification involves the minimization of a log-loss as an upper bound to the classification error. While this is a tight bound early on in the optimization, it overemphasizes the influence of incorrectly classified examples far from the decision boundary. Updating the upper bound during the optimization leads to improved classification rates while transforming the learning into a sequence of minimization problems. In addition, in the context where the classifier is part of a larger system, this modification makes it possible to link the performance of the classifier to that of the whole system, allowing the seamless introduction of external constraints. | The paper proposes new bounds on the misclassification error. The bounds lead to training classifiers with an adaptive loss function, and the algorithm operates in successive steps: the parameters are trained by minimizing the log-loss weighted by the probability of the observed class as given by the parameters of the previous steps. The bound improves on standard log-likelihood when outliers/underfitting prevents the learning algorithm from properly optimizing the true classification error. Experiments are performed to confirm the theoretical intuition and motivation. They show different cases where the new algorithm leads to improved classification error because underfitting occurs when using standard log-loss, and other cases where the new bounds do not lead to any improvement because the log-loss is sufficient to fit the dataset. The paper also discusses the relationship between the proposed idea and reinforcement learning, as well as with classifiers that have an *uncertain* label. While the paper is easy to read and well-written overall, in a second read I found it difficult to fully understand because two problems are somewhat mixed together (here considering only binary classification for simplicity): (a) the optimization of the classification error of a *randomized* classifier, which predicts 1 with probability P(1|x, theta), and (b) the optimization of the deterministic classifier, which predicts sign(P(1|x, theta) - 0.5), in a way that is robust to outliers/underfitting. The reason why I am confused is that *The standard approach to supervised classification*, as is mentioned in the abstract, is to use deterministic classifiers at test time, and the log-loss (up to constants) is an upper bound on the classification error of the deterministic classifier. However, the bounds discussed in the paper only concern the randomized classifier. === question: In the experiments, what kind of classifier is used? The randomized one (as the sentence on the first page would suggest *Assuming the class is chosen according to p(y|X, theta)*), or the more standard deterministic classifier argmax_y P(y|x, theta)? As far as I can see, there are two cases: either (i) the paper deals with learning randomized classifiers, in which case it should compare the performances with the deterministic counterparts that people use in practice, or (ii) the paper makes sense as soon as we accept that the optimization of criterion (a) is a good surrogate for (b). In both cases, I think the write-up should be made clearer (because in case (ii) the algorithm does not minimize an upper bound on the classification error, and in case (i) what is done does not correspond to what is usually done in binary classification).
=== comments: - The section *allowing uncertainty in the decision* may be improved by adding some references, e.g. Bartlett & Wegkamp (2008) *Classification with a Reject Option using a Hinge Loss* or Sayedi et al. (2010) *Trading off Mistakes and Don't Know Predictions*. - there seems to be a *-* sign missing in the P(1|x, theta) in L(theta, lambda) in Section 3. - The idea presented in the paper is interesting and original. While I give a relatively low score for now, I am willing to increase this score if the clarifications are made. Final comments: I think the paper is clear enough in its current form, even though there should still be improvement in the justification of why and to what extent the error of the randomized classifier is a good surrogate for the error of the true classifier. While the *smoothed* version of the 0/1 loss is an acceptable explanation in the standard classification setup, it is less clear in the section dealing with an additional *uncertain* label. I increase my score from 5 to 6. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start by summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will point them out and suggest how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will mention only issues that are critical. This keeps the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTighter bounds lead to improved classifiers\nThe standard approach to supervised classification involves the minimization of a log-loss as an upper bound to the classification error. While this is a tight bound early on in the optimization, it overemphasizes the influence of incorrectly classified examples far from the decision boundary. Updating the upper bound during the optimization leads to improved classification rates while transforming the learning into a sequence of minimization problems. In addition, in the context where the classifier is part of a larger system, this modification makes it possible to link the performance of the classifier to that of the whole system, allowing the seamless introduction of external constraints.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes new bounds on the misclassification error. The bounds lead to training classifiers with an adaptive loss function, and the algorithm operates in successive steps: the parameters are trained by minimizing the log-loss weighted by the probability of the observed class as given by the parameters of the previous steps. The bound improves on standard log-likelihood when outliers/underfitting prevents the learning algorithm to properly optimize the true classification error. Experiments are performed to confirm the therotical intuition and motivation. They show different cases where the new algorithm leads to improved classification error because underfitting occurs when using standard log-loss, and other cases where the new bounds do not lead to any improvement because the log-loss is sufficient to fit the dataset. The paper also discusses the relationship between the proposed idea and reinforcement learning, as well as with classifiers that have an *uncertain* label. While the paper is easy to read and well-written overall, in a second read I found it difficult to fully understand because two problems are somewhat mixed together (here considering only binary classification for simplicity): (a) the optimization of the classification error of a *randomized* classifier, which predicts 1 with probability P(1|x, theta), and (b) the optimization of the deterministic classifier, which predicts sign(P(1|x, theta) - 0.5), in a way that is robust to outliers/underfitting. The reason why I am confused is that *The standard approach to supervised classification*, as is mentioned in the abstract, is to use deterministic classifiers at test time, and the log-loss (up to constants) is an upper bound on the classification error of the deterministic classifier. However, the bounds discussed in the paper only concern the randomized classifier. === question: In the experiments, what kind of classifier is used? The randomized one (as would the sentence in the first page suggest *Assuming the class is chosen according to p(y|X, ?)*), or the more standard deterministic classifier argmax_y P(y|x, theta) ? 
As far as I can see, there are two cases: either (i) the paper deals with learning randomized classifiers, in which case it should compare the performances with the deterministic counterparts that people use in practice, or (ii) the paper makes sense as soon as we accept that the optimization of criterion (a) is a good surrogate for (b). In both cases, I think the write-up should be made clearer (because in case (ii) the algorithm does not minimize an upper bound on the classification error, and in case (i) what is done does not correspond to what is usually done in binary classification). === comments: - The section *allowing uncertainty in the decision* may be improved by adding some references, e.g. Bartlett & Wegkamp (2008) *Classification with a Reject Option using a Hinge Loss* or Sayedi et al. (2010) *Trading off Mistakes and Don't Know Predictions*. - there seems to be a *-* sign missing in the P(1|x, theta) in L(theta, lambda) in Section 3. - The idea presented in the paper is interesting and original. While I give a relatively low score for now, I am willing to increase this score if the clarifications are made. Final comments: I think the paper is clear enough in its current form, even though there should still be improvement in the justification of why and to what extent the error of the randomized classifier is a good surrogate for the error of the true classifier. While the *smoothed* version of the 0/1 loss is an acceptable explanation in the standard classification setup, it is less clear in the section dealing with an additional *uncertain* label. I increase my score from 5 to 6.",
"role": "assistant"
}
]
| [
"The paper proposes new bounds on the misclassification error.",
"The bounds lead to training classifiers with an adaptive loss function, and the algorithm operates in successive steps: the parameters are trained by minimizing the log-loss weighted by the probability of the observed class as given by the parameters of the previous steps.",
"The bound improves on standard log-likelihood when outliers/underfitting prevents the learning algorithm to properly optimize the true classification error.",
"Experiments are performed to confirm the therotical intuition and motivation.",
"They show different cases where the new algorithm leads to improved classification error because underfitting occurs when using standard log-loss, and other cases where the new bounds do not lead to any improvement because the log-loss is sufficient to fit the dataset.",
"The paper also discusses the relationship between the proposed idea and reinforcement learning, as well as with classifiers that have an *uncertain* label.",
"While the paper is easy to read and well-written overall, in a second read I found it difficult to fully understand because two problems are somewhat mixed together (here considering only binary classification for simplicity): (a) the optimization of the classification error of a *randomized* classifier, which predicts 1 with probability P(1|x, theta), and (b) the optimization of the deterministic classifier, which predicts sign(P(1|x, theta) - 0.5), in a way that is robust to outliers/underfitting.",
"The reason why I am confused is that *The standard approach to supervised classification*, as is mentioned in the abstract, is to use deterministic classifiers at test time, and the log-loss (up to constants) is an upper bound on the classification error of the deterministic classifier.",
"However, the bounds discussed in the paper only concern the randomized classifier.",
"=== question: In the experiments, what kind of classifier is used?",
"The randomized one (as would the sentence in the first page suggest *Assuming the class is chosen according to p(y|X, ?",
")*), or the more standard deterministic classifier argmax_y P(y|x, theta) ?",
"As far as I can see, there are two cases: either (i) the paper deals with learning randomized classifiers, in which case it should compare the performances with the deterministic counterparts that people use in practice, or (ii) the paper makes sense as soon as we accept that the optimization of criterion (a) is a good surrogate for (b).",
"In both cases, I think the write-up should be made clearer (because in case (ii) the algorithm does not minimize an upper bound on the classification error, and in case (i) what is done does not correspond to what is usually done in binary classification).",
"=== comments: - The section *allowing uncertainty in the decision* may be improved by adding some references, e.g.",
"Bartlett & Wegkamp (2008) *Classification with a Reject Option using a Hinge Loss* or Sayedi et al.",
"(2010) *Trading off Mistakes and Don’t Know Predictions*.",
"- there seems to be a *-* sign missing in the P(1|x, theta) in L(theta, lambda) in Section 3.",
"- The idea presented in the paper is interesting and original.",
"While I give a relatively low score for now, I am willing to increase this score if the clarifications are made.",
"Final comments: I think the paper is clear enough in its current form, even though there should still be improvement in the justification of why and to what extent the error of the randomized classifier is a good surrogate for the error of the true classifier.",
"While the *smoothed* version of the 0/1 loss is an acceptable explanation in the standard classification setup, it is less clear in the section dealing with an additional *uncertain* label.",
"I increase my score from 5 to 6."
] | {
"criticism": 4,
"example": 2,
"importance_and_relevance": 1,
"materials_and_methods": 14,
"praise": 4,
"presentation_and_reporting": 10,
"results_and_discussion": 9,
"suggestion_and_solution": 5,
"total": 23
} | 0.173913 | 0.086957 | 0.043478 | 0.608696 | 0.173913 | 0.434783 | 0.391304 | 0.217391 | 23 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.130435 | 0.552192 | 1.578243 |
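The training scheme summarized in the review above (minimizing a log-loss reweighted by the previous model's probability of the observed class) can be sketched as a single update step. The logistic model, the learning rate, and the one-step structure below are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def reweighted_logloss_step(w, w_prev, X, y, lr=0.1):
    """One gradient step on a log-loss weighted per example by the
    *previous* model's probability of the observed class (y in {0, 1})."""
    p_prev = sigmoid(X @ w_prev)
    weights = np.where(y == 1, p_prev, 1.0 - p_prev)   # frozen during this step
    p = sigmoid(X @ w)
    grad = X.T @ (weights * (p - y)) / len(y)          # gradient of the weighted log-loss
    return w - lr * grad
```

Confidently misclassified points get small weights from the previous model, which is exactly the down-weighting of far-off errors that the abstract motivates.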
HolStep: A Machine Learning Dataset for Higher-order Logic Theorem Proving | Large computer-understandable proofs consist of millions of intermediate logical steps. The vast majority of such steps originate from manually selected and manually guided heuristics applied to intermediate goals. So far, machine learning has generally not been used to filter or generate these steps. In this paper, we introduce a new dataset based on Higher-Order Logic (HOL) proofs, for the purpose of developing new machine learning-based theorem-proving strategies. We make this dataset publicly available under the BSD license. We propose various machine learning tasks that can be performed on this dataset, and discuss their significance for theorem proving. We also benchmark a set of simple baseline machine learning models suited for the tasks (including logistic regression, convolutional neural networks, and recurrent neural networks). The results of our baseline models show the promise of applying machine learning to HOL theorem proving. | The authors present a dataset extraction method, a dataset, and first interesting results for machine-learning-supported higher-order logic theorem proving. The experimental results are impressively good for a first baseline: with an accuracy higher than 0.83 in relevance classification they are a lot better than chance, and they encourage future research in this direction. The paper is well-written in terms of presentation and argumentation and leaves little room for criticism. The related work seems to be well-covered, though I have to note that I am not an expert in automated theorem proving. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start by summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will point them out and suggest how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will mention only issues that are critical. This keeps the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nHolStep: A Machine Learning Dataset for Higher-order Logic Theorem Proving\nLarge computer-understandable proofs consist of millions of intermediate logical steps. The vast majority of such steps originate from manually selected and manually guided heuristics applied to intermediate goals. So far, machine learning has generally not been used to filter or generate these steps. In this paper, we introduce a new dataset based on Higher-Order Logic (HOL) proofs, for the purpose of developing new machine learning-based theorem-proving strategies. We make this dataset publicly available under the BSD license. We propose various machine learning tasks that can be performed on this dataset, and discuss their significance for theorem proving. We also benchmark a set of simple baseline machine learning models suited for the tasks (including logistic regression convolutional neural networks and recurrent neural networks). The results of our baseline models show the promise of applying machine learning to HOL theorem proving.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors present a dataset extraction method, dataset and first interesting results for machine-learning supported higher order logic theorem proving. The experimental results are impressively good for a first baseline and with an accuracy higher than 0.83 in relevance classification a lot better than chance, and encourage future research in this direction. The paper is well-written in terms of presentation and argumentation and leaves little room for criticism. The related work seems to be well-covered, though I have to note that I am not an expert for automated theorem proving.",
"role": "assistant"
}
]
| [
"The authors present a dataset extraction method, dataset and first interesting results for machine-learning supported higher order logic theorem proving.",
"The experimental results are impressively good for a first baseline and with an accuracy higher than 0.83 in relevance classification a lot better than chance, and encourage future research in this direction.",
"The paper is well-written in terms of presentation and argumentation and leaves little room for criticism.",
"The related work seems to be well-covered, though I have to note that I am not an expert for automated theorem proving."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 2,
"praise": 4,
"presentation_and_reporting": 1,
"results_and_discussion": 2,
"suggestion_and_solution": 0,
"total": 4
} | 0.5 | 0 | 0.75 | 0.5 | 1 | 0.25 | 0.5 | 0 | 4 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 3.5 | 2.221623 | 1.278377 |
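As a companion to the baselines named in the abstract above, a bag-of-tokens logistic regression for proof-step relevance might look like the sketch below. The tokenization, the toy data, and the scikit-learn pipeline are assumptions for illustration, not the authors' released code.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# steps: proof-step strings; labels: 1 if the step is deemed useful for the
# conjecture's proof, 0 otherwise (hypothetical toy data, not HolStep itself).
steps = ["|- (a + b) = (b + a)", "|- T", "|- x IN {x}"]
labels = [1, 0, 1]

model = make_pipeline(
    CountVectorizer(token_pattern=r"\S+"),  # whitespace tokens of the formula text
    LogisticRegression(max_iter=1000),
)
model.fit(steps, labels)
print(model.predict(["|- (b + a) = (a + b)"]))
```

The CNN and RNN baselines mentioned in the abstract replace the bag-of-tokens features with learned character- or token-level encoders, but the relevance-classification framing stays the same.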
Hierarchical Multiscale Recurrent Neural Networks | Learning both hierarchical and temporal representation has been among the long-standing challenges of recurrent neural networks. Multiscale recurrent neural networks have been considered as a promising approach to resolve this issue, yet there has been a lack of empirical evidence showing that this type of models can actually capture the temporal dependencies by discovering the latent hierarchical structure of the sequence. In this paper, we propose a novel multiscale approach, called the hierarchical multiscale recurrent neural network, that can capture the latent hierarchical structure in the sequence by encoding the temporal dependencies with different timescales using a novel update mechanism. We show some evidence that the proposed model can discover underlying hierarchical structure in the sequences without using explicit boundary information. We evaluate our proposed model on character-level language modelling and handwriting sequence generation. | The paper proposes a modified RNN architecture with multiple layers, where higher layers are only passed lower layer states if a FLUSH operation is predicted, consisting of passing up the state and resetting the lower layer's state. In order to select one of three operations at each time step, the authors propose using the straight-through estimator with a slope-annealing trick during training. Empirical results and visualizations illustrate that the modified architecture performs well at boundary detection. Pros: - Paper is well-motivated, exceptionally well-composed - Provides promising initial results on learning hierarchical representations through visualizations and thorough experiments on language modeling and handwriting generation - The annealing trick with the straight-through estimator also seems potentially useful for other tasks containing discrete variables, and the trade-off in the flush operation is innovative. Cons: - In a couple cases the paper does not fully deliver. Empirical results on computational savings are not given, and hierarchy beyond a single level (where the data contains separators such as spaces and pen up/down) does not seem to be demonstrated. - It's unclear whether better downstream performance is due to use of hierarchical information or due to the architecture changes acting as regularization, something which could hopefully be addressed. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start by summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will point them out and suggest how to rephrase them or make them clearer to the audience. Lastly, impact is important.
I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will mention only issues that are critical. This keeps the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nHierarchical Multiscale Recurrent Neural Networks\nLearning both hierarchical and temporal representation has been among the long- standing challenges of recurrent neural networks. Multiscale recurrent neural networks have been considered as a promising approach to resolve this issue, yet there has been a lack of empirical evidence showing that this type of models can actually capture the temporal dependencies by discovering the latent hierarchical structure of the sequence. In this paper, we propose a novel multiscale approach, called the hierarchical multiscale recurrent neural network, that can capture the latent hierarchical structure in the sequence by encoding the temporal dependencies with different timescales using a novel update mechanism. We show some evidence that the proposed model can discover underlying hierarchical structure in the sequences without using explicit boundary information. We evaluate our proposed model on character-level language modelling and handwriting sequence generation.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes a modified RNN architecture with multiple layers, where higher layers are only passed lower layer states if a FLUSH operation is predicted, consisting of passing up the state and reseting the lower layer*s state. In order to select one of three operations at each time step, the authors propose using the straight-through estimator with a slope-annealing trick during training. Empirical results and visualizations illustrate that the modified architecture performs well at boundary detection. Pros: - Paper is well-motivated, exceptionally well-composed - Provides promising initial results on learning hierarchical representations through visualizations and thorough experiments on language modeling and handwriting generation - The annealing trick with the straight-through estimator also seems potentially useful for other tasks containing discrete variables, and the trade-off in the flush operation is innovative. Cons: - In a couple cases the paper does not fully deliver. Empirical results on computational savings are not given, and hierarchy beyond a single level (where the data contains separators such as spaces and pen up/down) does not seem to be demonstrated. - It*s unclear whether better downstream performance is due to use of hierarchical information or due to the architecture changes acting as regularization, something which could hopefully be addressed.",
"role": "assistant"
}
]
| [
"The paper proposes a modified RNN architecture with multiple layers, where higher layers are only passed lower layer states if a FLUSH operation is predicted, consisting of passing up the state and reseting the lower layer*s state.",
"In order to select one of three operations at each time step, the authors propose using the straight-through estimator with a slope-annealing trick during training.",
"Empirical results and visualizations illustrate that the modified architecture performs well at boundary detection.",
"Pros: - Paper is well-motivated, exceptionally well-composed - Provides promising initial results on learning hierarchical representations through visualizations and thorough experiments on language modeling and handwriting generation - The annealing trick with the straight-through estimator also seems potentially useful for other tasks containing discrete variables, and the trade-off in the flush operation is innovative.",
"Cons: - In a couple cases the paper does not fully deliver.",
"Empirical results on computational savings are not given, and hierarchy beyond a single level (where the data contains separators such as spaces and pen up/down) does not seem to be demonstrated.",
"- It*s unclear whether better downstream performance is due to use of hierarchical information or due to the architecture changes acting as regularization, something which could hopefully be addressed."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 6,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 3,
"suggestion_and_solution": 2,
"total": 7
} | 0.285714 | 0 | 0.142857 | 0.857143 | 0.285714 | 0 | 0.428571 | 0.285714 | 7 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.285714 | 1.717547 | 0.568167 |
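The slope-annealed straight-through estimator highlighted in the review above is easy to sketch in isolation. The code below is a hypothetical reconstruction of the generic trick in PyTorch; the hard-sigmoid form and the annealing schedule are assumptions, not the authors' implementation.

```python
import torch

def st_hard_sigmoid(logit, slope):
    """Binary boundary decision with a straight-through gradient.
    Forward pass: a hard 0/1 step. Backward pass: the gradient of a
    hard sigmoid whose slope is annealed upward during training."""
    soft = torch.clamp(0.5 * (slope * logit + 1.0), 0.0, 1.0)  # hard sigmoid
    hard = (soft > 0.5).float()
    # Straight-through: the value is `hard`, the gradient is `soft`'s.
    return hard + (soft - soft.detach())

# Example: anneal the slope over epochs (the schedule is an assumption).
logit = torch.randn(4, requires_grad=True)
for epoch in range(5):
    slope = min(1.0 + 0.04 * epoch, 5.0)
    z = st_hard_sigmoid(logit, slope)
    z.sum().backward()   # gradients flow through the soft surrogate
    logit.grad = None
```

As the slope grows, the soft surrogate approaches the hard step, shrinking the bias of the estimator at the cost of sparser gradients.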
Dropout with Expectation-linear Regularization | Dropout, a simple and effective way to train deep neural networks, has led to a number of impressive empirical successes and spawned many recent theoretical investigations. However, the gap between dropout's training and inference phases, introduced due to tractability considerations, has largely remained under-appreciated. In this work, we first formulate dropout as a tractable approximation of some latent variable model, leading to a clean view of parameter sharing and enabling further theoretical analysis. Then, we introduce (approximate) expectation-linear dropout neural networks, whose inference gap we are able to formally characterize. Algorithmically, we show that our proposed measure of the inference gap can be used to regularize the standard dropout training objective, resulting in an explicit control of the gap. Our method is as simple and efficient as standard dropout. We further prove the upper bounds on the loss in accuracy due to expectation-linearization, and describe classes of input distributions that expectation-linearize easily. Experiments on three image classification benchmark datasets demonstrate that reducing the inference gap can indeed improve the performance consistently. | This paper puts forward a not entirely new, but also not sufficiently understood interpretation of dropout regularization. The authors derive useful theorems that estimate or put bounds on key quantities that are of interest when analyzing dropout regularized networks from their perspective. They furthermore introduce an explicit regularization term that should have a well understood impact on these key quantities. In the experimental section they convincingly show that the proposed regularization indeed has the expected effect and that their perspective on dropout is therefore useful and meaningful. Their proposed regularization also seems to have a positive impact on the model's performance, but they demonstrate this only on relatively small-scale benchmark problems. I therefore don't believe that this approach will have a large impact on how practitioners train models. But their general perspective is well aligned with the recently proposed idea of “Dropout as a Bayesian approximation” and the insights and theorems in this paper might enable future work in that direction. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start by summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will point them out and suggest how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community.
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will mention only issues that are critical. This keeps the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDropout with Expectation-linear Regularization\nDropout, a simple and effective way to train deep neural networks, has led to a number of impressive empirical successes and spawned many recent theoretical investigations. However, the gap between dropout’s training and inference phases, introduced due to tractability considerations, has largely remained under-appreciated. In this work, we first formulate dropout as a tractable approximation of some latent variable model, leading to a clean view of parameter sharing and enabling further theoretical analysis. Then, we introduce (approximate) expectation-linear dropout neural networks, whose inference gap we are able to formally characterize. Algorithmically, we show that our proposed measure of the inference gap can be used to regularize the standard dropout training objective, resulting in an explicit control of the gap. Our method is as simple and efficient as standard dropout. We further prove the upper bounds on the loss in accuracy due to expectation-linearization, describe classes of input distributions that expectation-linearize easily. Experiments on three image classification benchmark datasets demonstrate that reducing the inference gap can indeed improve the performance consistently.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper puts forward a not entirely new, but also not sufficiently understood interpretation of dropout regularization. The authors derive useful theorems that estimate or put bounds on key quantities that are of interest when analyzing dropout regularized networks from their perspective. They furthermore introduce an explicit regularization term that should have a well understood impact on these key quantities. In the experimental section they convincingly show that the proposed regularization indeed has the expected effect and that their perspective on dropout is therefore useful and meaningful. Their proposed regularization also seems to have a positive impact on the models performance but they demonstrate this only on rel. small scale benchmark problems. I therefore don’t belief that this approach will have a large impact on how practitioner train models. But their general perspective is well aligned with the recently proposed idea of “Dropout as a bayesian approximation” and the insights and theorems in this paper might enable future work in that direction.",
"role": "assistant"
}
]
| [
"This paper puts forward a not entirely new, but also not sufficiently understood interpretation of dropout regularization.",
"The authors derive useful theorems that estimate or put bounds on key quantities that are of interest when analyzing dropout regularized networks from their perspective.",
"They furthermore introduce an explicit regularization term that should have a well understood impact on these key quantities.",
"In the experimental section they convincingly show that the proposed regularization indeed has the expected effect and that their perspective on dropout is therefore useful and meaningful.",
"Their proposed regularization also seems to have a positive impact on the models performance but they demonstrate this only on rel.",
"small scale benchmark problems.",
"I therefore don’t belief that this approach will have a large impact on how practitioner train models.",
"But their general perspective is well aligned with the recently proposed idea of “Dropout as a bayesian approximation” and the insights and theorems in this paper might enable future work in that direction."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 5,
"materials_and_methods": 6,
"praise": 4,
"presentation_and_reporting": 1,
"results_and_discussion": 2,
"suggestion_and_solution": 1,
"total": 8
} | 0.25 | 0 | 0.625 | 0.75 | 0.5 | 0.125 | 0.25 | 0.125 | 8 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.625 | 2.230439 | 0.394561 |
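The inference gap this record revolves around can be pictured with a Monte Carlo proxy: penalize the distance between the deterministic dropout-inference output and the average over sampled dropout masks. The helper below is a sketch under assumed interfaces (a PyTorch model containing dropout layers); the sample count and the penalty weight are illustrative, not the paper's exact regularizer.

```python
import torch
import torch.nn.functional as F

def expectation_linear_penalty(model, x, n_samples=4):
    """Monte Carlo proxy for the dropout inference gap:
    || f(x; expected mask) - E_mask[ f(x; mask) ] ||^2, estimated with a few masks."""
    model.eval()                        # deterministic pass: dropout uses expected weights
    with torch.no_grad():
        y_det = model(x)
    model.train()                       # stochastic passes: dropout samples masks
    y_mc = torch.stack([model(x) for _ in range(n_samples)]).mean(0)
    return F.mse_loss(y_mc, y_det.detach())

# During training (lam is an assumed hyperparameter):
# loss = task_loss + lam * expectation_linear_penalty(model, x)
```

Driving this penalty toward zero makes the cheap deterministic pass behave like the intractable mask average, which is the explicit control of the gap that the abstract claims.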
A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the construction of an image. The degree to which one may learn and parsimoniously capture this visual vocabulary measures our understanding of the higher level features of paintings, if not images in general. In this work we investigate the construction of a single, scalable deep network that can parsimoniously capture the artistic style of a diversity of paintings. We demonstrate that such a network generalizes across a diversity of artistic styles by reducing a painting to a point in an embedding space. Importantly, this model permits a user to explore new painting styles by arbitrarily combining the styles learned from individual paintings. We hope that this work provides a useful step towards building rich models of paintings and offers a window on to the structure of the learned representation of artistic style. | CONTRIBUTIONS The authors propose a simple architectural modification (conditional instance normalization) for the task of feedforward neural style transfer that allows a single network to apply many different styles to input images. Experiments show that the proposed multi-style networks produce images qualitatively similar to those of single-style networks, train as fast as single-style networks, and achieve losses comparable to those of single-style networks. In addition, the authors show that new styles can be incrementally added to multi-style networks with minimal finetuning, and that convex combinations of per-style parameters can be used for feedforward style blending. The authors have released open-source code and pretrained models allowing others to replicate the experimental results. NOVELTY The problem setup is very similar to prior work on feedforward neural style transfer, but the paper is the first to my knowledge that uses a single network to apply different styles to input images; the proposed conditional instance normalization layer is also novel. This paper is also the first that demonstrates feedforward neural style blending; though not described in published literature, optimization-based neural style blending had previously been demonstrated in https://github.com/jcjohnson/neural-style. MISSING CITATION The following paper was concurrent with Ulyanov et al (2016a) and Johnson et al in demonstrating feedforward neural style transfer, though it did not use the Gram-based formulation of Gatys et al: Li and Wand, *Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks*, ECCV 2016 CLARITY The paper is very well written and easy to follow. SIGNIFICANCE Though simple, the proposed method is a significant addition to the growing field of neural style transfer. Its benefits are especially clear for mobile applications, which are often constrained in both disk space and bandwidth. Using the proposed method, only a single trained network needs to be transmitted and stored on the mobile device; in addition the ability of the proposed method to incrementally learn new styles means that new styles can be added by transmitting only a small number of new style-specific parameters to a mobile device. EVALUATION Like many other papers on neural style transfer, the results are mostly qualitative. Following existing literature, the authors use style and content loss as a quantitative measure of quality, but these metrics are unfortunately not always well-correlated with the perceptual quality of the results.
I find the results of this paper convincing, but I wish that this and other papers on this topic could find a way to evaluate their results more quantitatively. SUMMARY The problem and method are slightly incremental, but the several improvements over prior work make this paper a significant addition to the growing literature on neural style transfer. The paper is well-written and its experiments convincingly validate the benefits of the method. Overall I believe the paper would be a valuable addition to the conference. Pros - Simple modification to feedforward neural style transfer with several improvements over prior work - Strong qualitative results - Well-written - Open-source code has already been released Cons - Slightly incremental - Somewhat lacking in quantitative evaluation, but not any more so than prior work on this topic | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start by summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will point them out and suggest how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will mention only issues that are critical. This keeps the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nA Learned Representation For Artistic Style\nThe diversity of painting styles represents a rich visual vocabulary for the construction of an image. The degree to which one may learn and parsimoniously capture this visual vocabulary measures our understanding of the higher level features of paintings, if not images in general. In this work we investigate the construction of a single, scalable deep network that can parsimoniously capture the artistic style of a diversity of paintings. We demonstrate that such a network generalizes across a diversity of artistic styles by reducing a painting to a point in an embedding space. Importantly, this model permits a user to explore new painting styles by arbitrarily combining the styles learned from individual paintings. We hope that this work provides a useful step towards building rich models of paintings and offers a window on to the structure of the learned representation of artistic style.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nCONTRIBUTIONS The authors propose a simple architectural modification (conditional instance normalization) for the task of feedforward neural style transfer that allows a single network to apply many different styles to input images. Experiments show that the proposed multi-style networks produce qualitatively similar images as single-style networks, train as fast as single-style networks, and achieve comparable losses as single-style networks. In addition, the authors shows that new styles can be incrementally added to multi-style networks with minimal finetuning, and that convex combinations of per-style parameters can be used for feedforward style blending. The authors have released open-source code and pretrained models allowing others to replicate the experimental results. NOVELTY The problem setup is very similar to prior work on feedforward neural style transfer, but the paper is the first to my knowledge that uses a single network to apply different styles to input images; the proposed conditional instance normalization layer is also novel. This paper is also the first that demonstrates feedforward neural style blending; though not described in published literature, optimization-based neural style blending had previously been demonstrated in https://github.com/jcjohnson/neural-style. MISSING CITATION The following paper was concurrent with Ulyanov et al (2016a) and Johnson et al in demonstrating feedforward neural style transfer, though it did not use the Gram-based formulation of Gatys et al: Li and Wand, *Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks*, ECCV 2016 CLARITY The paper is very well written and easy to follow. SIGNIFICANCE Though simple, the proposed method is a significant addition to the growing field of neural style transfer. Its benefits are especially clear for mobile applications, which are often constrained in both disk space and bandwidth. 
Using the proposed method, only a single trained network needs to be transmitted and stored on the mobile device; in addition the ability of the proposed method to incrementally learn new styles means that new styles can be added by transmitting only a small number of new style-specific parameters to a mobile device. EVALUATION Like many other papers on neural style transfer, the results are mostly qualitative. Following existing literature, the authors use style and content loss as a quantitative measure of quality, but these metrics are unfortunately not always well-correlated with the perceptual quality of the results. I find the results of this paper convincing, but I wish that this and other papers on this topic could find a way to evaluate their results more quantitatively. SUMMARY The problem and method are slightly incremental, but the several improvements over prior work make this paper a significant addition to the growing literature on neural style transfer. The paper is well-written and its experiments convincingly validate the benefits of the method. Overall I believe the paper would be a valuable addition to the conference. Pros - Simple modification to feedforward neural style transfer with several improvements over prior work - Strong qualitative results - Well-written - Open-source code has already been released Cons - Slightly incremental - Somewhat lacking in quantitative evaluation, but not any more so than prior work on this topic",
"role": "assistant"
}
]
| [
"CONTRIBUTIONS The authors propose a simple architectural modification (conditional instance normalization) for the task of feedforward neural style transfer that allows a single network to apply many different styles to input images.",
"Experiments show that the proposed multi-style networks produce qualitatively similar images as single-style networks, train as fast as single-style networks, and achieve comparable losses as single-style networks.",
"In addition, the authors shows that new styles can be incrementally added to multi-style networks with minimal finetuning, and that convex combinations of per-style parameters can be used for feedforward style blending.",
"The authors have released open-source code and pretrained models allowing others to replicate the experimental results.",
"NOVELTY The problem setup is very similar to prior work on feedforward neural style transfer, but the paper is the first to my knowledge that uses a single network to apply different styles to input images; the proposed conditional instance normalization layer is also novel.",
"This paper is also the first that demonstrates feedforward neural style blending; though not described in published literature, optimization-based neural style blending had previously been demonstrated in https://github.com/jcjohnson/neural-style.",
"MISSING CITATION The following paper was concurrent with Ulyanov et al (2016a) and Johnson et al in demonstrating feedforward neural style transfer, though it did not use the Gram-based formulation of Gatys et al: Li and Wand, *Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks*, ECCV 2016 CLARITY The paper is very well written and easy to follow.",
"SIGNIFICANCE Though simple, the proposed method is a significant addition to the growing field of neural style transfer.",
"Its benefits are especially clear for mobile applications, which are often constrained in both disk space and bandwidth.",
"Using the proposed method, only a single trained network needs to be transmitted and stored on the mobile device; in addition the ability of the proposed method to incrementally learn new styles means that new styles can be added by transmitting only a small number of new style-specific parameters to a mobile device.",
"EVALUATION Like many other papers on neural style transfer, the results are mostly qualitative.",
"Following existing literature, the authors use style and content loss as a quantitative measure of quality, but these metrics are unfortunately not always well-correlated with the perceptual quality of the results.",
"I find the results of this paper convincing, but I wish that this and other papers on this topic could find a way to evaluate their results more quantitatively.",
"SUMMARY The problem and method are slightly incremental, but the several improvements over prior work make this paper a significant addition to the growing literature on neural style transfer.",
"The paper is well-written and its experiments convincingly validate the benefits of the method.",
"Overall I believe the paper would be a valuable addition to the conference.",
"Pros - Simple modification to feedforward neural style transfer with several improvements over prior work - Strong qualitative results - Well-written - Open-source code has already been released Cons - Slightly incremental - Somewhat lacking in quantitative evaluation, but not any more so than prior work on this topic"
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 4,
"materials_and_methods": 14,
"praise": 5,
"presentation_and_reporting": 3,
"results_and_discussion": 6,
"suggestion_and_solution": 4,
"total": 17
} | 0.117647 | 0 | 0.235294 | 0.823529 | 0.294118 | 0.176471 | 0.352941 | 0.235294 | 17 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.235294 | 1.982775 | 0.252519 |
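For readers of this dump, the record above leans on one concrete mechanism, conditional instance normalization, that appears only in prose. Below is a minimal NumPy sketch of that idea; the tensor layout, style count, and all variable names are illustrative assumptions rather than details taken from the reviewed paper.

```python
import numpy as np

def conditional_instance_norm(x, gamma, beta, style_id, eps=1e-5):
    # x: (C, H, W) feature map for one image; gamma, beta: (n_styles, C)
    # tables of per-style scale and shift parameters. Each channel is
    # normalized per instance, then scaled/shifted by the row selected
    # for the requested style -- this is what lets one network hold
    # many styles while sharing all convolutional weights.
    mu = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma[style_id][:, None, None] * x_hat + beta[style_id][:, None, None]

feats = np.random.rand(64, 32, 32)                    # one 64-channel map
gamma, beta = np.ones((10, 64)), np.zeros((10, 64))   # 10 stored styles
print(conditional_instance_norm(feats, gamma, beta, style_id=3).shape)
```

The feedforward style blending mentioned in the review would then amount to applying the same transform with a convex combination of rows of gamma and beta.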
Hadamard Product for Low-rank Bilinear Pooling | Bilinear models provide rich representations compared with linear models. They have been applied in various visual tasks, such as object recognition, segmentation, and visual question-answering, to get state-of-the-art performances taking advantage of the expanded representations. However, bilinear representations tend to be high-dimensional, limiting the applicability to computationally complex tasks. We propose low-rank bilinear pooling using Hadamard product for an efficient attention mechanism of multimodal learning. We show that our model outperforms compact bilinear pooling in visual question-answering tasks with the state-of-the-art results on the VQA dataset, having a better parsimonious property. | Results on the VQA task are good for this simple model; the ablation study of Table 1 gives some insights as to what is important. Missing are some explanations about the language embedding and the importance in deciding embedding dimension and final output dimension, equivalent to deciding the projected dimension in the compact bilinear model. Since the main contribution of the paper seems to be slightly better performance with a fairly large reduction in parameters vs. compact bilinear, something should be said about the choice of those hyperparameters. If you increase embedded and output dimensions to equalize parameters to the compact bilinear model, are further gains possible? How is the question encoded? Is word order preserved in this encoding? The compact bilinear model compared to in Table 1 mentions GloVe; is the proposed model using this as well? The meaning of visual attention in this model along with the number of glimpses should be tied to the sentence embedding, so now we are looking at particular spatial components when that part of the sentence is encoded; then we stack according to your Equation 9? | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will be reading the work in depth and analyzing whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nHadamard Product for Low-rank Bilinear Pooling\nBilinear models provide rich representations compared with linear models. They have been applied in various visual tasks, such as object recognition, segmentation, and visual question-answering, to get state-of-the-art performances taking advantage of the expanded representations. However, bilinear representations tend to be high-dimensional, limiting the applicability to computationally complex tasks. We propose low-rank bilinear pooling using Hadamard product for an efficient attention mechanism of multimodal learning. We show that our model outperforms compact bilinear pooling in visual question-answering tasks with the state-of-the-art results on the VQA dataset, having a better parsimonious property.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nResults on the VQA task are good for this simple model, the ablation study of table 1 gives some insights as to what is important. Missing are some explanations about the language embedding and the importance in deciding embedding dimension and final output dimension, equivalent to deciding the projected dimension in the compact bilinear model. Since the main contribution of the paper seems to be slightly better performance with fairly large reduction in parameters vs. compact bilinear something should be said about choice of those hyper parameters. If you increase embedded and output dimensions to equalize parameters to the compact bilinear model are further gains possible? How is the question encoded? Is word order preserved in this encoding, the compact bilinear model compared to in table 1 mentions glove, the proposed model is using this as well? The meaning of visual attention in this model along with the number of glimpses should be tied to the sentence embedding, so now we are looking at particular spatial components when that part of the sentence is encoded, then we stack according to your equation 9?",
"role": "assistant"
}
]
| [
"Results on the VQA task are good for this simple model, the ablation study of table 1 gives some insights as to what is important.",
"Missing are some explanations about the language embedding and the importance in deciding embedding dimension and final output dimension, equivalent to deciding the projected dimension in the compact bilinear model.",
"Since the main contribution of the paper seems to be slightly better performance with fairly large reduction in parameters vs. compact bilinear something should be said about choice of those hyper parameters.",
"If you increase embedded and output dimensions to equalize parameters to the compact bilinear model are further gains possible?",
"How is the question encoded?",
"Is word order preserved in this encoding, the compact bilinear model compared to in table 1 mentions glove, the proposed model is using this as well?",
"The meaning of visual attention in this model along with the number of glimpses should be tied to the sentence embedding, so now we are looking at particular spatial components when that part of the sentence is encoded, then we stack according to your equation 9?"
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 6,
"praise": 1,
"presentation_and_reporting": 4,
"results_and_discussion": 1,
"suggestion_and_solution": 3,
"total": 7
} | 0.142857 | 0 | 0.285714 | 0.857143 | 0.142857 | 0.571429 | 0.142857 | 0.428571 | 7 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.571429 | 2.003261 | 0.568167 |
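Because the record above discusses the method only at the level of parameter counts, a small sketch of low-rank bilinear pooling via a Hadamard product may help. The projection sizes, tanh nonlinearities, and single-vector (no-glimpse) setup below are assumptions for illustration, not the paper's reported configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_q, d_v, d_h, d_out = 300, 512, 256, 100      # assumed dimensions
U = 0.01 * rng.standard_normal((d_q, d_h))     # question projection
V = 0.01 * rng.standard_normal((d_v, d_h))     # image projection
P = 0.01 * rng.standard_normal((d_h, d_out))   # output projection

def low_rank_bilinear(q, v):
    # A full bilinear interaction would materialize the d_q x d_v outer
    # product; projecting both inputs to d_h and taking an element-wise
    # (Hadamard) product gives a rank-restricted approximation with
    # O(d_h) interaction cost and far fewer parameters.
    return (np.tanh(q @ U) * np.tanh(v @ V)) @ P

f = low_rank_bilinear(rng.standard_normal(d_q), rng.standard_normal(d_v))
print(f.shape)  # (100,)
```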
Gated Multimodal Units for Information Fusion | This paper presents a novel model for multimodal learning based on gated neural networks. The Gated Multimodal Unit (GMU) model is intended to be used as an internal unit in a neural network architecture whose purpose is to find an intermediate representation based on a combination of data from different modalities. The GMU learns to decide how modalities influence the activation of the unit using multiplicative gates. It was evaluated on a multilabel scenario for genre classification of movies using the plot and the poster. The GMU improved the macro f-score performance of single-modality approaches and outperformed other fusion strategies, including mixture of experts models. Along with this work, the MM-IMDb dataset is released which, to the best of our knowledge, is the largest publicly available multimodal dataset for genre prediction on movies. | The paper introduces Gated Multimodal Units (GMUs), which use multiplicative weights to select the degree to which a hidden unit will consider different modalities in determining its activation. The paper also introduces a new dataset, *Multimodal IMDb,* consisting of over 25k movie summaries, with their posters, and labeled genres. GMUs are related to *mixture of experts* in that different examples will be classified by different parts of the model (but rather than routing/gating entire examples, individual hidden units are gated separately). They are related to attention models in that different parts of the input are weighted differently; here the emphasis is on gating modalities of input. The dataset is a very nice contribution, and there are many experiments varying text representation and single-modality vs two-modality. What the paper is lacking is a careful discussion, experimentation and analysis in comparison to other multiplicative gate models---which is the core intellectual contribution of the paper. For example, I could imagine that a mixture of experts or attention models or other gated models might perform very well, and at the very least provide interesting scientific comparative analysis. I encourage the authors to continue the work, and submit a revised paper when ready. As is, I consider the paper to be a good workshop paper, but not ready for a major conference. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will be reading the work in depth and analyzing whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nGated Multimodal Units for Information Fusion\nThis paper presents a novel model for multimodal learning based on gated neural networks. The Gated Multimodal Unit (GMU) model is intended to be used as an internal unit in a neural network architecture whose purpose is to find an intermediate representation based on a combination of data from different modalities. The GMU learns to decide how modalities influence the activation of the unit using multiplicative gates. It was evaluated on a multilabel scenario for genre classification of movies using the plot and the poster. The GMU improved the macro f-score performance of single-modality approaches and outperformed other fusion strategies, including mixture of experts models. Along with this work, the MM-IMDb dataset is released which, to the best of our knowledge, is the largest publicly available multimodal dataset for genre prediction on movies.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper introduces Gated Multimodal Units GMUs, which use multiplicative weights to select the degree to which a hidden unit will consider different modalities in determining its activation. The paper also introduces a new dataset, *Multimodal IMDb,* consisting of over 25k movie summaries, with their posters, and labeled genres. GMUs are related to *mixture of experts* in that different examples will be classified by different parts of the model, (but rather than routing/gating entire examples, individual hidden units are gated separately). They are related to attention models in that different parts of the input are weighted differently; there the emphasis is on gating modalities of input. The dataset is a very nice contribution, and there are many experiments varying text representation and single-modality vs two-modality. What the paper is lacking is a careful discussion, experimentation and analysis in comparison to other multiplicative gate models---which is the core intellectual contribution of the paper. For example, I could imagine that a mixture of experts or attention models or other gated models might perform very well, and at the very least provide interesting scientific comparative analysis. I encourage the authors to continue the work, and submit a revised paper when ready. As is, I consider the paper to be a good workshop paper, but not ready for a major conference.",
"role": "assistant"
}
]
| [
"The paper introduces Gated Multimodal Units GMUs, which use multiplicative weights to select the degree to which a hidden unit will consider different modalities in determining its activation.",
"The paper also introduces a new dataset, *Multimodal IMDb,* consisting of over 25k movie summaries, with their posters, and labeled genres.",
"GMUs are related to *mixture of experts* in that different examples will be classified by different parts of the model, (but rather than routing/gating entire examples, individual hidden units are gated separately).",
"They are related to attention models in that different parts of the input are weighted differently; there the emphasis is on gating modalities of input.",
"The dataset is a very nice contribution, and there are many experiments varying text representation and single-modality vs two-modality.",
"What the paper is lacking is a careful discussion, experimentation and analysis in comparison to other multiplicative gate models---which is the core intellectual contribution of the paper.",
"For example, I could imagine that a mixture of experts or attention models or other gated models might perform very well, and at the very least provide interesting scientific comparative analysis.",
"I encourage the authors to continue the work, and submit a revised paper when ready.",
"As is, I consider the paper to be a good workshop paper, but not ready for a major conference."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 7,
"praise": 4,
"presentation_and_reporting": 2,
"results_and_discussion": 1,
"suggestion_and_solution": 2,
"total": 9
} | 0.222222 | 0 | 0.333333 | 0.777778 | 0.444444 | 0.222222 | 0.111111 | 0.222222 | 9 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.333333 | 2.080814 | 0.252519 |
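The gating mechanism at issue in the record above is compact enough to sketch. The bimodal form below follows the commonly cited GMU equations (a sigmoid gate blending two tanh-transformed modality features); the feature sizes and variable names are assumptions, and the paper's exact formulation may differ in its details.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
d_img, d_txt, d_h = 4096, 300, 512                     # assumed feature sizes
Wv = 0.01 * rng.standard_normal((d_img, d_h))          # visual transform
Wt = 0.01 * rng.standard_normal((d_txt, d_h))          # textual transform
Wz = 0.01 * rng.standard_normal((d_img + d_txt, d_h))  # gate weights

def gmu(xv, xt):
    hv = np.tanh(xv @ Wv)                          # visual hidden feature
    ht = np.tanh(xt @ Wt)                          # textual hidden feature
    z = sigmoid(np.concatenate([xv, xt]) @ Wz)     # per-unit modality gate
    return z * hv + (1.0 - z) * ht                 # multiplicative fusion

h = gmu(rng.standard_normal(d_img), rng.standard_normal(d_txt))
print(h.shape)  # (512,)
```

Unlike a mixture of experts that routes whole examples, the gate z here is a vector, so each hidden unit weighs the modalities independently.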
Automated Generation of Multilingual Clusters for the Evaluation of Distributed Representations | We propose a language-agnostic way of automatically generating sets of semantically similar clusters of entities along with sets of *outlier* elements, which may then be used to perform an intrinsic evaluation of word embeddings in the outlier detection task. We used our methodology to create a gold-standard dataset, which we call WikiSem500, and evaluated multiple state-of-the-art embeddings. The results show a correlation between performance on this dataset and performance on sentiment analysis. | This paper introduces a new dataset to evaluate word representations. The task considered in the paper, called outlier detection (also known as word intrusion), is to identify which word does not belong to a set of semantically related words. The task was proposed by Camacho-Collados & Navigli (2016) as an evaluation of word representations. The main contribution of this paper is to introduce a new dataset for this task, covering 5 languages. The dataset was generated automatically from the Wikidata hierarchy. Entities which are instances of the same category are considered as belonging to the same cluster, and outliers are sampled at various distances in the tree. Several heuristics are then proposed to exclude uninteresting clusters from the dataset. Developing good resources to evaluate word representations is an important task. The new dataset introduced in this paper might be an interesting addition to the existing ones (however, it is hard to say by only reviewing the paper). I am a bit concerned by the lack of discussion and comparison with existing approaches (besides word similarity datasets). In particular, I believe it would be interesting to discuss the advantages of this evaluation/dataset, compared to existing ones such as word analogies. The proposed evaluation also seems highly related to entity typing, which is not discussed in the paper. Overall, I believe that introducing resources for evaluating word representations is very important for the community. However, I am a bit ambivalent about this submission. I am not entirely convinced that the proposed dataset has clear advantages over existing resources. It also seems that existing tasks, such as entity typing, already capture similar properties of word representations. Finally, it might be more relevant to submit this paper to LREC than to ICLR. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will be reading the work in depth and analyzing whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. 
I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAutomated Generation of Multilingual Clusters for the Evaluation of Distributed Representations\nWe propose a language-agnostic way of automatically generating sets of semantically similar clusters of entities along with sets of *outlier* elements, which may then be used to perform an intrinsic evaluation of word embeddings in the outlier detection task. We used our methodology to create a gold-standard dataset, which we call WikiSem500, and evaluated multiple state-of-the-art embeddings. The results show a correlation between performance on this dataset and performance on sentiment analysis.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper introduces a new dataset to evaluate word representations. The task considered in the paper, called outlier detection (also known as word intrusion), is to identify which word does not belong to a set of semantically related words. The task was proposed by Camacho-Collados & Navigli (2016) as an evaluation of word representations. The main contribution of this paper is to introduce a new dataset for this task, covering 5 languages. The dataset was generated automatically from the Wikidata hierarchy. Entities which are instances of the same category are considered as belonging to the same cluster, and outliers are sampled at various distances in the tree. Several heuristics are then proposed to exclude uninteresting clusters from the dataset. Developing good ressources to evaluate word representations is an important task. The new dataset introduced in this paper might be an interesting addition to the existing ones (however, it is hard to say by only reviewing the paper). I am a bit concerned by the lack of discussion and comparison with existing approaches (besides word similarity datasets). In particular, I believe it would be interesting to discuss the advantages of this evaluation/dataset, compared to existing ones such as word analogies. The proposed evaluation also seems highly related to entity typing, which is not discussed in the paper. Overall, I believe that introducing ressources for evaluating word representations is very important for the community. However, I am a bit ambivalent about this submission. I am not entirely convinced that the proposed dataset have clear advantages over existing ressources. It also seems that existing tasks, such as entity typing, already capture similar properties of word representations. Finally, it might be more relevant to submit this paper to LREC than to ICLR.",
"role": "assistant"
}
]
| [
"This paper introduces a new dataset to evaluate word representations.",
"The task considered in the paper, called outlier detection (also known as word intrusion), is to identify which word does not belong to a set of semantically related words.",
"The task was proposed by Camacho-Collados & Navigli (2016) as an evaluation of word representations.",
"The main contribution of this paper is to introduce a new dataset for this task, covering 5 languages.",
"The dataset was generated automatically from the Wikidata hierarchy.",
"Entities which are instances of the same category are considered as belonging to the same cluster, and outliers are sampled at various distances in the tree.",
"Several heuristics are then proposed to exclude uninteresting clusters from the dataset.",
"Developing good ressources to evaluate word representations is an important task.",
"The new dataset introduced in this paper might be an interesting addition to the existing ones (however, it is hard to say by only reviewing the paper).",
"I am a bit concerned by the lack of discussion and comparison with existing approaches (besides word similarity datasets).",
"In particular, I believe it would be interesting to discuss the advantages of this evaluation/dataset, compared to existing ones such as word analogies.",
"The proposed evaluation also seems highly related to entity typing, which is not discussed in the paper.",
"Overall, I believe that introducing ressources for evaluating word representations is very important for the community.",
"However, I am a bit ambivalent about this submission.",
"I am not entirely convinced that the proposed dataset have clear advantages over existing ressources.",
"It also seems that existing tasks, such as entity typing, already capture similar properties of word representations.",
"Finally, it might be more relevant to submit this paper to LREC than to ICLR."
] | {
"criticism": 5,
"example": 0,
"importance_and_relevance": 5,
"materials_and_methods": 12,
"praise": 2,
"presentation_and_reporting": 9,
"results_and_discussion": 1,
"suggestion_and_solution": 3,
"total": 17
} | 0.294118 | 0 | 0.294118 | 0.705882 | 0.117647 | 0.529412 | 0.058824 | 0.176471 | 17 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.176471 | 1.923952 | 0.252519 |
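The outlier detection task described above admits a standard scoring rule: rank each word in a cluster-plus-outlier set by its mean similarity to the others and predict the lowest-ranked word as the outlier. A NumPy sketch under that assumption follows; the actual WikiSem500 protocol may differ in its exact score.

```python
import numpy as np

def detect_outlier(vectors):
    # vectors: rows are embeddings for one cluster plus one outlier.
    # Each word is scored by its mean cosine similarity to the other
    # words; the lowest-scoring row is predicted to be the outlier.
    X = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = X @ X.T
    np.fill_diagonal(sims, 0.0)
    return int(np.argmin(sims.mean(axis=1)))

rng = np.random.default_rng(0)
cluster = 1.0 + 0.1 * rng.standard_normal((7, 50))    # tight synthetic cluster
outlier = -np.ones((1, 50))                           # clearly unrelated vector
print(detect_outlier(np.vstack([cluster, outlier])))  # -> 7
```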
Programming With a Differentiable Forth Interpreter | There are families of neural networks that can learn to compute any function, provided sufficient training data. However, given that in practice training data is scarce for all but a small set of problems, a core question is how to incorporate prior knowledge into a model. Here we consider the case of prior procedural knowledge, such as knowing the overall recursive structure of a sequence transduction program or the fact that a program will likely use arithmetic operations on real numbers to solve a task. To this end we present a differentiable interpreter for the programming language Forth. Through a neural implementation of the dual stack machine that underlies Forth, programmers can write program sketches with slots that can be filled with behaviour trained from program input-output data. As the program interpreter is end-to-end differentiable, we can optimize this behaviour directly through gradient descent techniques on user specified objectives, and also integrate the program into any larger neural computation graph. We show empirically that our interpreter is able to effectively leverage different levels of prior program structure and learn complex transduction tasks such as sequence sorting or addition with substantially less data and better generalisation over problem sizes. In addition, we introduce neural program optimisations based on symbolic computation and parallel branching that lead to significant speed improvements. | This paper presents an approach to make a programming language (Forth) interpreter differentiable such that it can learn the implementation of high-level instructions from provided examples. The paper is well-written and the research is well-motivated. Overall, I find this paper interesting and a pleasure to read. However, the experiments only serve as proof of concept. More detailed empirical studies could strengthen the paper. Comments: - To my knowledge, the proposed approach is novel and nicely bridges programming by example and sketches by programmers. The proposed approach borrows some ideas from probabilistic programming and the Neural Turing Machine, but it is significantly different from these methods. It also presents optimisations of the interpreter to speed up the training. - It would be interesting to present results on different types of programming problems and see how complex the generated low-level code can be. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will be reading the work in depth and analyzing whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. 
I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nProgramming With a Differentiable Forth Interpreter\nThere are families of neural networks that can learn to compute any function, provided sufficient training data. However, given that in practice training data is scarce for all but a small set of problems, a core question is how to incorporate prior knowledge into a model. Here we consider the case of prior procedural knowledge, such as knowing the overall recursive structure of a sequence transduction program or the fact that a program will likely use arithmetic operations on real numbers to solve a task. To this end we present a differentiable interpreter for the programming language Forth. Through a neural implementation of the dual stack machine that underlies Forth, programmers can write program sketches with slots that can be filled with behaviour trained from program input-output data. As the program interpreter is end-to-end differentiable, we can optimize this behaviour directly through gradient descent techniques on user specified objectives, and also integrate the program into any larger neural computation graph. We show empirically that our interpreter is able to effectively leverage different levels of prior program structure and learn complex transduction tasks such as sequence sorting or addition with substantially less data and better generalisation over problem sizes. In addition, we introduce neural program optimisations based on symbolic computation and parallel branching that lead to significant speed improvements.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents an approach to make a programming language (Forth) interpreter differentiable such that it can learn the implementation of high-level instruction from provided examples. The paper is well-written and the research is well-motivated. Overall, I find this paper is interesting and pleasure to read. However, the experiments only serve as proof of concept. A more detailed empirical studies can strength the paper. Comments: - To my knowledge, the proposed approach is novel and nicely bridge programming by example and sketches by programmers. The proposed approach borrow some ideas from probabilistic programming and Neural Turing Machine, but it is significantly different from these methods. It also presents optimisations of the interpreter to speed-up the training. - It would be interesting to present results on different types of programming problems and see how complex of low-level code can be generated.",
"role": "assistant"
}
]
| [
"This paper presents an approach to make a programming language (Forth) interpreter differentiable such that it can learn the implementation of high-level instruction from provided examples.",
"The paper is well-written and the research is well-motivated.",
"Overall, I find this paper is interesting and pleasure to read.",
"However, the experiments only serve as proof of concept.",
"A more detailed empirical studies can strength the paper.",
"Comments: - To my knowledge, the proposed approach is novel and nicely bridge programming by example and sketches by programmers.",
"The proposed approach borrow some ideas from probabilistic programming and Neural Turing Machine, but it is significantly different from these methods.",
"It also presents optimisations of the interpreter to speed-up the training.",
"- It would be interesting to present results on different types of programming problems and see how complex of low-level code can be generated."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 4,
"materials_and_methods": 4,
"praise": 3,
"presentation_and_reporting": 6,
"results_and_discussion": 1,
"suggestion_and_solution": 2,
"total": 9
} | 0.111111 | 0 | 0.444444 | 0.444444 | 0.333333 | 0.666667 | 0.111111 | 0.222222 | 9 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.333333 | 2.080814 | 0.252519 |
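The dual stack machine in the record above is hard to convey in prose alone. As a toy illustration of why every operation in such an interpreter can stay differentiable, here is a soft stack in NumPy, loosely in the spirit of NTM-style soft addressing; the paper's machine is substantially richer, and all names here are hypothetical.

```python
import numpy as np

def soft_push(stack, pointer, value):
    # stack: (depth, width) soft memory; pointer: (depth,) distribution
    # marking the top slot; value: (width,) vector to push. Every step
    # is a blend or shift, so gradients flow through pushes and pops.
    pointer = np.roll(pointer, 1)          # pointer moves up (wrap ignored)
    w = pointer[:, None]
    return stack * (1.0 - w) + w * value[None, :], pointer

def soft_pop(stack, pointer):
    value = pointer @ stack                # pointer-weighted read of the top
    return value, np.roll(pointer, -1)     # pointer moves back down

stack, ptr = np.zeros((8, 4)), np.eye(8)[0]
stack, ptr = soft_push(stack, ptr, np.arange(4.0))
val, ptr = soft_pop(stack, ptr)
print(val)  # [0. 1. 2. 3.]
```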
Recursive Regression with Neural Networks: Approximating the HJI PDE Solution | Most machine learning applications using neural networks seek to approximate some function g(x) by minimizing some cost criterion. In the simplest case, if one has access to pairs of the form (x, y) where y = g(x), the problem can be framed as a regression problem. Beyond this family of problems, we find many cases where the unavailability of data pairs makes this approach unfeasible. However, similar to what we find in the reinforcement learning literature, if we have some known properties of the function we are seeking to approximate, there is still hope to frame the problem as a regression problem. In this context, we present an algorithm that approximates the solution to a partial differential equation known as the Hamilton-Jacobi-Isaacs PDE and compare it to current state of the art tools. This PDE, which is found in the fields of control theory and robotics, is of particular importance in safety critical systems where guarantees of performance are a must. | This paper presents an algorithm for approximating the solution of certain time-evolution PDEs. The paper presents an interesting learning-based approach to solve such PDEs. The idea is to alternate between: 1. sampling points in space-time 2. generating solutions to the PDE at *those* sampled points 3. regressing a space-time function to satisfy the latter solutions at the sampled points (and hopefully generalize beyond those points). I actually find the proposed algorithm interesting, and potentially useful in practice. The classic grid-based simulation of PDEs is often too expensive to be practical, due to the curse of dimensionality. Hence, learning the solution of PDEs makes a lot of sense for practical settings. On the other hand, as the authors point out, simply running gradient descent on the regression loss function does not work, because of the non-differentiability of the *min* that shows up in the studied PDEs. Therefore, I think the proposed idea is actually a very interesting approach to learning the PDE solution in the presence of non-differentiability, which is indeed a *challenging* setup for numerically solving PDEs. The paper motivates the problem (time-evolution PDE with *min* operator applied to the spatial derivatives) by applications in control theory, but I think there is more direct interest in such problems for the machine learning community, and even the deep learning community. For example http://link.springer.com/chapter/10.1007/978-3-319-14612-6_4 studies approximate solutions to PDEs with very similar properties (evolution+*min*) to develop new optimization algorithms. The latter is indeed used to train deep networks: https://arxiv.org/abs/1601.04114 I think this work would catch even more attention if the authors could show some experiments with higher-dimensional problems (where grid-based methods are absolutely inapplicable). | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? 
The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will be reading the work in depth and analyzing whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRecursive Regression with Neural Networks: Approximating the HJI PDE Solution\nMost machine learning applications using neural networks seek to approximate some function g(x) by minimizing some cost criterion. In the simplest case, if one has access to pairs of the form (x, y) where y = g(x), the problem can be framed as a regression problem. Beyond this family of problems, we find many cases where the unavailability of data pairs makes this approach unfeasible. However, similar to what we find in the reinforcement learning literature, if we have some known properties of the function we are seeking to approximate, there is still hope to frame the problem as a regression problem. In this context, we present an algorithm that approximates the solution to a partial differential equation known as the Hamilton-Jacobi-Isaacs PDE and compare it to current state of the art tools. This PDE, which is found in the fields of control theory and robotics, is of particular importance in safety critical systems where guarantees of performance are a must.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents an algorithm for approximating the solution of certain time-evolution PDEs. The paper presents an interesting learning-based approach to solve such PDEs. The idea is to alternate between: 1. sampling points in space-time 2. generating solution to PDE at *those* sampled points 3. regressing a space-time function to satisfy the latter solutions at the sampled points (and hopefully generalize beyond those points). I actually find the proposed algorithm interesting, and potentially useful in practice. The classic grid-based simulation of PDEs is often too expensive to be practical, due to the curse of dimensionality. Hence, learning the solution of PDEs makes a lot of sense for practical settings. On the other hand, as the authors point out, simply running gradient descent on the regression loss function does not work, because of the non-differentiablity of the *min* that shows up in the studied PDEs. Therefore, I think the proposed idea is actually very interesting approach to learning the PDE solution in presence of non-differentability, which is indeed a *challenging* setup for numerically solving PDEs. The paper motivates the problem (time-evolution PDE with *min* operator applied to the spatial derivatives) by applications in control thery, but I think there is more direct interest in such problems for the machine learning community, and even deep learning community. For example http://link.springer.com/chapter/10.1007/978-3-319-14612-6_4 studies approximate solution to PDEs with very similar properties (evolution+*min*) to develop new optimization algorithms. The latter is indeed used to training deep networks: https://arxiv.org/abs/1601.04114 I think this work would catch even more attention if the authors could show some experiments with higher-dimensional problems (where grid-based methods are absolutely inapplicable).",
"role": "assistant"
}
]
| [
"This paper presents an algorithm for approximating the solution of certain time-evolution PDEs.",
"The paper presents an interesting learning-based approach to solve such PDEs.",
"The idea is to alternate between: 1. sampling points in space-time 2. generating solution to PDE at *those* sampled points 3. regressing a space-time function to satisfy the latter solutions at the sampled points (and hopefully generalize beyond those points).",
"I actually find the proposed algorithm interesting, and potentially useful in practice.",
"The classic grid-based simulation of PDEs is often too expensive to be practical, due to the curse of dimensionality.",
"Hence, learning the solution of PDEs makes a lot of sense for practical settings.",
"On the other hand, as the authors point out, simply running gradient descent on the regression loss function does not work, because of the non-differentiablity of the *min* that shows up in the studied PDEs.",
"Therefore, I think the proposed idea is actually very interesting approach to learning the PDE solution in presence of non-differentability, which is indeed a *challenging* setup for numerically solving PDEs.",
"The paper motivates the problem (time-evolution PDE with *min* operator applied to the spatial derivatives) by applications in control thery, but I think there is more direct interest in such problems for the machine learning community, and even deep learning community.",
"For example http://link.springer.com/chapter/10.1007/978-3-319-14612-6_4 studies approximate solution to PDEs with very similar properties (evolution+*min*) to develop new optimization algorithms.",
"The latter is indeed used to training deep networks: https://arxiv.org/abs/1601.04114 I think this work would catch even more attention if the authors could show some experiments with higher-dimensional problems (where grid-based methods are absolutely inapplicable)."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 7,
"materials_and_methods": 9,
"praise": 3,
"presentation_and_reporting": 0,
"results_and_discussion": 3,
"suggestion_and_solution": 1,
"total": 11
} | 0.181818 | 0 | 0.636364 | 0.818182 | 0.272727 | 0 | 0.272727 | 0.090909 | 11 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.272727 | 2.209598 | 0.06313 |
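The alternation sketched in the review above (sample points, generate local solution values, regress) can be written as a short training skeleton. The helper names sample_points, local_targets, and fit below are hypothetical stand-ins for the sampler, the pointwise PDE-solution generator, and the regression step (e.g., a few epochs of SGD on a neural network); they are not functions from the paper.

```python
def recursive_regression(model, sample_points, local_targets, fit, n_rounds=50):
    # Alternates the three steps described in the review.
    for _ in range(n_rounds):
        pts = sample_points()               # 1. sample space-time points
        y = local_targets(model, pts)       # 2. PDE solution values there
        model = fit(model, pts, y)          # 3. regress model onto (pts, y)
    return model
```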
Nonparametrically Learning Activation Functions in Deep Neural Nets | We provide a principled framework for nonparametrically learning activation functions in deep neural networks. Currently, state-of-the-art deep networks treat choice of activation function as a hyper-parameter before training. By allowing activation functions to be estimated as part of the training procedure, we expand the class of functions that each node in the network can learn. We also provide a theoretical justification for our choice of nonparametric activation functions and demonstrate that networks with our nonparametric activation functions generalize well. To demonstrate the power of our novel techniques, we test them on image recognition datasets and achieve up to a 15% relative increase in test performance compared to the baseline. | This paper provides a principled framework for nonparametrically learning activation functions in deep neural networks. A theoretical justification for the authors' choice of nonparametric activation functions is given. Theoretical results are satisfactory but I particularly like the experimental setup where their methods are tested on image recognition datasets and achieve up to a 15% relative increase in test performance compared to the baseline. Well-written paper and novel theoretical techniques. The intuition behind the proof of Theorem 4.7 could be given in a slightly clearer way in the main body of the paper, but the Appendix clarifies everything. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will be reading the work in depth and analyzing whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNonparametrically Learning Activation Functions in Deep Neural Nets\nWe provide a principled framework for nonparametrically learning activation functions in deep neural networks. Currently, state-of-the-art deep networks treat choice of activation function as a hyper-parameter before training. By allowing activation functions to be estimated as part of the training procedure, we expand the class of functions that each node in the network can learn. We also provide a theoretical justification for our choice of nonparametric activation functions and demonstrate that networks with our nonparametric activation functions generalize well. To demonstrate the power of our novel techniques, we test them on image recognition datasets and achieve up to a 15% relative increase in test performance compared to the baseline.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper provides a principled framework for nonparametrically learning activation functions in deep neural networks. A theoretical justification for authors* choice of nonparametric activation functions is given. Theoretical results are satisfactory but I particularly like the experimental setup where their methods are tested on image recognition datasets and achieve up to a 15% relative increase in test performance compared to the baseline. Well-written paper and novel theoretical techniques. The intuition behind the proof of Theorem 4.7 can be given in a little bit more clear way in the main body of the paper, but the Appendix clarifies everything.",
"role": "assistant"
}
]
| [
"This paper provides a principled framework for nonparametrically learning activation functions in deep neural networks.",
"A theoretical justification for authors* choice of nonparametric activation functions is given.",
"Theoretical results are satisfactory but I particularly like the experimental setup where their methods are tested on image recognition datasets and achieve up to a 15% relative increase in test performance compared to the baseline.",
"Well-written paper and novel theoretical techniques.",
"The intuition behind the proof of Theorem 4.7 can be given in a little bit more clear way in the main body of the paper, but the Appendix clarifies everything."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 4,
"praise": 2,
"presentation_and_reporting": 2,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 5
} | 0 | 0 | 0.2 | 0.8 | 0.4 | 0.4 | 0.2 | 0.2 | 5 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.2 | 1.189924 | 1.010076 |
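The record above only gives the abstract, which does not spell out the estimator, so as a rough illustration of what "learning the activation as part of training" can look like, here is a minimal sketch of one common construction: a piecewise-linear activation whose knot values are ordinary trainable parameters. The class name, knot placement, and the toy tanh-fitting target are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

# Illustrative stand-in: an activation parametrized by its values at fixed
# knot locations, linearly interpolated in between. The knot values are
# plain parameters, so they can be trained jointly with the weights.
class PiecewiseLinearActivation:
    def __init__(self, n_knots=11, lo=-3.0, hi=3.0):
        self.knots = np.linspace(lo, hi, n_knots)
        self.values = np.maximum(self.knots, 0.0)  # start at ReLU

    def forward(self, x):
        return np.interp(x, self.knots, self.values)

    def grad_values(self, x, grad_out):
        # Each input routes its upstream gradient to the two knots that
        # bracket it, weighted by the linear-interpolation coefficients.
        g = np.zeros_like(self.values)
        idx = np.clip(np.searchsorted(self.knots, x) - 1, 0, len(self.knots) - 2)
        t = np.clip((x - self.knots[idx]) / (self.knots[idx + 1] - self.knots[idx]), 0.0, 1.0)
        np.add.at(g, idx, grad_out * (1 - t))
        np.add.at(g, idx + 1, grad_out * t)
        return g

# Toy check: gradient steps on the knot values bend the ReLU toward tanh.
act = PiecewiseLinearActivation()
x = np.random.randn(1024)
for _ in range(200):
    err = act.forward(x) - np.tanh(x)              # squared-error residual
    act.values -= 0.1 * act.grad_values(x, 2 * err / len(x))
print(np.abs(act.forward(x) - np.tanh(x)).mean())  # shrinks well below the ReLU start
```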
Deep Learning with Sets and Point Clouds | We introduce a simple permutation equivariant layer for deep learning with set structure. This type of layer, obtained by parameter-sharing, has a simple implementation and linear-time complexity in the size of each set. We use deep permutation-invariant networks to perform point-cloud classification and MNIST digit summation, where in both cases the output is invariant to permutations of the input. In a semi-supervised setting, where the goal is to make predictions for each instance within a set, we demonstrate the usefulness of this type of layer in set-outlier detection as well as semi-supervised learning with clustering side-information. | Pros: - New and clear formalism for invariance on signals with known structure - Good numerical results Cons: - The structure must be specified. - The set structure dataset is too simple - There is a gap between the large (and sometimes complex) theory introduced and the numerical experiments; consequently, a new reader could be lost since examples might be missing. Besides, from a personal point of view, I think the topic of the paper and its content could be suitable for a big conference if the author improves its content. Thus, if rejected, I think you should not consider the workshop option for your paper if you wish to publish it later in a conference, because big conferences might consider the workshop papers of ICLR as publications. (That's an issue I had to deal with at some point.) | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDeep Learning with Sets and Point Clouds\nWe introduce a simple permutation equivariant layer for deep learning with set structure. This type of layer, obtained by parameter-sharing, has a simple implementation and linear-time complexity in the size of each set. We use deep permutation-invariant networks to perform point-could classification and MNIST digit summation, where in both cases the output is invariant to permutations of the input. In a semi-supervised setting, where the goal is make predictions for each instance within a set, we demonstrate the usefulness of this type of layer in set-outlier detection as well as semi-supervised learning with clustering side-information.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nPros : - New and clear formalism for invariance on signals with known structure - Good numerical results Cons : - The structure must be specified. - The set structure dataset is too simple - There is a gap between the large (and sometimes complex) theory introduced and the numerical experiments ; consequently a new reader could be lost since examples might be missing Besides, from a personal point of view, I think the topic of the paper and its content could be suitable for a big conference as the author improves its content. Thus, if rejected, I think you should not consider the workshop option for your paper if you wish to publish it later in a conference, because big conferences might consider the workshop papers of ICLR as publications. (that*s an issue I had to deal with at some points)",
"role": "assistant"
}
]
| [
"Pros : - New and clear formalism for invariance on signals with known structure - Good numerical results Cons : - The structure must be specified.",
"- The set structure dataset is too simple - There is a gap between the large (and sometimes complex) theory introduced and the numerical experiments ; consequently a new reader could be lost since examples might be missing Besides, from a personal point of view, I think the topic of the paper and its content could be suitable for a big conference as the author improves its content.",
"Thus, if rejected, I think you should not consider the workshop option for your paper if you wish to publish it later in a conference, because big conferences might consider the workshop papers of ICLR as publications.",
"(that*s an issue I had to deal with at some points)"
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 0,
"materials_and_methods": 2,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 1,
"suggestion_and_solution": 3,
"total": 4
} | 0.25 | 0 | 0 | 0.5 | 0.25 | 0.5 | 0.25 | 0.75 | 4 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 2.5 | 1.221623 | 1.278377 |
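For concreteness, one standard parameter-sharing construction that matches the abstract's description (permutation equivariant, linear time in the set size) mixes each element with a pooled summary of the whole set. The mean-pooling variant below is an assumption for illustration; the paper's layer may use a different pooling.

```python
import numpy as np

def perm_equivariant_layer(X, W, V, b):
    """X: (n, d_in), one set of n elements; returns (n, d_out). The same
    W, V, b are shared across positions, so permuting the rows of X
    permutes the rows of the output."""
    pooled = X.mean(axis=0, keepdims=True)            # (1, d_in) set summary
    return np.maximum(X @ W + pooled @ V + b, 0.0)    # ReLU

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
W, V, b = rng.normal(size=(3, 4)), rng.normal(size=(3, 4)), np.zeros(4)

perm = rng.permutation(5)
out1 = perm_equivariant_layer(X, W, V, b)[perm]
out2 = perm_equivariant_layer(X[perm], W, V, b)
print(np.allclose(out1, out2))  # True: the layer is permutation equivariant
```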
Learning to Discover Sparse Graphical Models | We consider structure discovery of undirected graphical models from observational data. Inferring likely structures from few examples is a complex task often requiring the formulation of priors and sophisticated inference procedures. In the setting of Gaussian Graphical Models (GGMs) a popular estimator is a maximum likelihood objective with a penalization on the precision matrix. Adapting this estimator to capture domain-specific knowledge as priors or a new data likelihood requires great effort. In addition, structure recovery is an indirect consequence of the data-fit term. By contrast, it may be easier to generate training samples of data that arise from graphs with the desired structure properties. We propose here to leverage this latter source of information as training data to learn a function mapping from empirical covariance matrices to estimated graph structures. Learning this function brings two benefits: it implicitly models the desired structure or sparsity properties to form suitable priors, and it can be tailored to the specific problem of edge structure discovery, rather than maximizing data likelihood. We apply this framework to several real-world problems in structure discovery and show that it can be competitive to standard approaches such as graphical lasso, at a fraction of the execution speed. We use convolutional neural networks to parametrize our estimators due to the compositional structure of the problem. Experimentally, our learnable graph-discovery method trained on synthetic data generalizes well to different data: identifying relevant edges in real data, completely unknown at training time. We find that on genetics, brain imaging, and simulation data we obtain competitive (and generally superior) performance, compared with analytical methods. | I sincerely apologize for the late-arriving review. This paper proposes to frame the problem of structure estimation as a supervised classification problem. The input is an empirical covariance matrix of the observed data; the output is the binary decision whether or not two variables share a link. The paper is sufficiently clear, the goals are clear and everything is well described. The main interesting point is the empirical results of the experimental section. The approach is simple and performs better than previous non-learning based methods. This observation is interesting and will be of interest in structure discovery problems. I rate the specific construction of the supervised learning method as a reasonable attempt to approach this problem. There is not very much technical novelty in this part. E.g., an algorithmic contribution such as a method that is invariant to data permutation could have been a possible target for a technical contribution. The paper makes no claims on this technical part; as said, the method is well constructed and well executed. It is good to precisely state the theoretical parts of a paper, and the authors do this well. All results are rather straightforward; I like that the claims are written down, but there is little surprise in the statements. In summary, the paper makes a very interesting observation. Graph estimation can be posed as a supervised learning problem and training data from a separate source is sufficient to learn structure in novel and unseen test data from a new source. 
Practically this may be relevant: on one hand, the empirical results are stronger with this method; on the other hand, a practitioner who is interested in structural discovery may have side constraints about interpretability of the deriving method. From the Discussion and Conclusion I understand that the authors consider this as future work. It is a good first step; it could be stronger, but it also stands on its own already. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning to Discover Sparse Graphical Models\nWe consider structure discovery of undirected graphical models from observational data. Inferring likely structures from few examples is a complex task often requiring the formulation of priors and sophisticated inference procedures. In the setting of Gaussian Graphical Models (GGMs) a popular estimator is a maximum likelihood objective with a penalization on the precision matrix. Adapting this estimator to capture domain-specific knowledge as priors or a new data likelihood requires great effort. In addition, structure recovery is an indirect consequence of the data-fit term. By contrast, it may be easier to generate training samples of data that arise from graphs with the desired structure properties. We propose here to leverage this latter source of information as training data to learn a function mapping from empirical covariance matrices to estimated graph structures. Learning this function brings two benefits: it implicitly models the desired structure or sparsity properties to form suitable priors, and it can be tailored to the specific problem of edge structure discovery, rather than maximizing data likelihood. We apply this framework to several real-world problems in structure discovery and show that it can be competitive to standard approaches such as graphical lasso, at a fraction of the execution speed. We use convolutional neural networks to parametrize our estimators due to the compositional structure of the problem. Experimentally, our learnable graph-discovery method trained on synthetic data generalizes well to different data: identifying relevant edges in real data, completely unknown at training time. We find that on genetics, brain imaging, and simulation data we obtain competitive(and generally superior) performance, compared with analytical methods.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nI sincerely apologize for the late-arriving review. This paper proposes to frame the problem of structure estimation as a supervised classification problem. The input is an empirical covariance matrix of the observed data, the output the binary decision whether or not two variables share a link. The paper is sufficiently clear, the goals are clear and everything is well described. The main interesting point is the empirical results of the experimental section. The approach is simple and performs better than previous non-learning based methods. This observation is interesting and will be of interest in structure discovery problems. I rate the specific construction of the supervised learning method as a reasonable attempt attempt to approach this problem. There is not very much technical novelty in this part. E.g., an algorithmic contribution would have been a method that is invariant to data permutation could have been a possible target for a technical contribution. The paper makes no claims on this technical part, as said, the method is well constructed and well executed. It is good to precisely state the theoretical parts of a paper, the authors do this well. All results are rather straight-forward, I like that the claims are written down, but there is little surprise in the statements. In summary, the paper makes a very interesting observation. Graph estimation can be posed as a supervised learning problem and training data from a separate source is sufficient to learn structure in novel and unseen test data from a new source. Practically this may be relevant, on one hand the empirical results are stronger with this method, on the other hand a practitioner who is interested in structural discovery may have side constraints about interpretability of the deriving method. From the Discussion and Conclusion I understand that the authors consider this as future work. It is a good first step, it could be stronger but also stands on its own already.",
"role": "assistant"
}
]
| [
"I sincerely apologize for the late-arriving review.",
"This paper proposes to frame the problem of structure estimation as a supervised classification problem.",
"The input is an empirical covariance matrix of the observed data, the output the binary decision whether or not two variables share a link.",
"The paper is sufficiently clear, the goals are clear and everything is well described.",
"The main interesting point is the empirical results of the experimental section.",
"The approach is simple and performs better than previous non-learning based methods.",
"This observation is interesting and will be of interest in structure discovery problems.",
"I rate the specific construction of the supervised learning method as a reasonable attempt attempt to approach this problem.",
"There is not very much technical novelty in this part.",
"E.g., an algorithmic contribution would have been a method that is invariant to data permutation could have been a possible target for a technical contribution.",
"The paper makes no claims on this technical part, as said, the method is well constructed and well executed.",
"It is good to precisely state the theoretical parts of a paper, the authors do this well.",
"All results are rather straight-forward, I like that the claims are written down, but there is little surprise in the statements.",
"In summary, the paper makes a very interesting observation.",
"Graph estimation can be posed as a supervised learning problem and training data from a separate source is sufficient to learn structure in novel and unseen test data from a new source.",
"Practically this may be relevant, on one hand the empirical results are stronger with this method, on the other hand a practitioner who is interested in structural discovery may have side constraints about interpretability of the deriving method.",
"From the Discussion and Conclusion I understand that the authors consider this as future work.",
"It is a good first step, it could be stronger but also stands on its own already."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 6,
"materials_and_methods": 9,
"praise": 9,
"presentation_and_reporting": 4,
"results_and_discussion": 6,
"suggestion_and_solution": 3,
"total": 18
} | 0.166667 | 0 | 0.333333 | 0.5 | 0.5 | 0.222222 | 0.333333 | 0.166667 | 18 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.222222 | 1.827661 | 0.394561 |
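A minimal sketch of the data-generation idea described in the abstract above: sample a random sparse Gaussian graphical model, then emit an (empirical covariance, edge labels) pair as one supervised example. Dimensions, sparsity level, and sample count are illustrative; the paper's graph priors and the CNN that consumes these pairs are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_training_pair(d=10, edge_prob=0.2, n_samples=50):
    """One synthetic (input, target) pair for a learned graph estimator."""
    upper = (rng.random((d, d)) < edge_prob) * rng.normal(size=(d, d))
    S = np.triu(upper, 1)
    S = S + S.T                                   # sparse symmetric off-diagonal part
    # Shift the diagonal to make the precision matrix positive definite.
    theta = S + (np.abs(np.linalg.eigvalsh(S)).max() + 0.5) * np.eye(d)
    x = rng.multivariate_normal(np.zeros(d), np.linalg.inv(theta), size=n_samples)
    emp_cov = np.cov(x, rowvar=False)             # estimator input
    labels = (np.abs(S) > 0).astype(np.float32)   # edge / no-edge targets
    return emp_cov, labels

cov, y = sample_training_pair()
print(cov.shape, int(y.sum()))  # (10, 10) and twice the number of edges
```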
Recurrent Normalization Propagation | We propose a LSTM parametrization that preserves the means and variances of the hidden states and memory cells across time. While having training benefits similar to Recurrent Batch Normalization and Layer Normalization, it does not need to estimate statistics at each time step, therefore, requiring fewer computations overall. We also investigate the parametrization impact on the gradient flows and present a way of initializing the weights accordingly. We evaluate our proposal on language modelling and image generative modelling tasks. We empirically show that it performs similarly or better than other recurrent normalization approaches, while being faster to execute. | The paper proposes an extension of weight normalization / normalization propagation to recurrent neural networks. Simple experiments suggest it works well. The contribution is potentially useful to a lot of people, as LSTMs are one of the basic building blocks in our field. The contribution is not extremely novel: the change with respect to weight normalization is minor. The experiments are also not very convincing: Layer normalization is reported to have higher test error as it overfits on their example, but in terms of optimization it seems to work better. Also the authors don't seem to use the data-dependent parameter init for weight normalization as proposed in that paper. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRecurrent Normalization Propagation\nWe propose a LSTM parametrization that preserves the means and variances of the hidden states and memory cells across time. While having training benefits similar to Recurrent Batch Normalization and Layer Normalization, it does not need to estimate statistics at each time step, therefore, requiring fewer computations overall. We also investigate the parametrization impact on the gradient flows and present a way of initializing the weights accordingly. We evaluate our proposal on language modelling and image generative modelling tasks. We empirically show that it performs similarly or better than other recurrent normalization approaches, while being faster to execute.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes an extension of weight normalization / normalization propagation to recurrent neural networks. Simple experiments suggest it works well. The contribution is potentially useful to a lot of people, as LSTMs are one of the basic building blocks in our field. The contribution is not extremely novel: the change with respect to weight normalization is minor. The experiments are also not very convincing: Layer normalization is reported to have higher test error as it overfits on their example, but in terms of optimization it seems to work better. Also the authors don*t seem to use the data dependent parameter init for weight normalization as proposed in that paper.",
"role": "assistant"
}
]
| [
"The paper proposes an extension of weight normalization / normalization propagation to recurrent neural networks.",
"Simple experiments suggest it works well.",
"The contribution is potentially useful to a lot of people, as LSTMs are one of the basic building blocks in our field.",
"The contribution is not extremely novel: the change with respect to weight normalization is minor.",
"The experiments are also not very convincing: Layer normalization is reported to have higher test error as it overfits on their example, but in terms of optimization it seems to work better.",
"Also the authors don*t seem to use the data dependent parameter init for weight normalization as proposed in that paper."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 5,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 1,
"suggestion_and_solution": 0,
"total": 6
} | 0.5 | 0 | 0.333333 | 0.833333 | 0.333333 | 0 | 0.166667 | 0 | 6 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 2.166667 | 1.393328 | 0.773339 |
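As a rough illustration of what "normalization by reparametrization instead of per-step statistics" means, here is the weight-normalization building block (w = g * v / ||v||) that this line of work builds on. The paper's actual LSTM parametrization and its matching initialization are more involved and are not reproduced here.

```python
import numpy as np

def weight_normalized(V, g):
    """Reparametrize each row of V as g * v / ||v||: the effective norm is
    an explicit parameter, so no statistics are estimated at run time."""
    return (g[:, None] / np.linalg.norm(V, axis=1, keepdims=True)) * V

rng = np.random.default_rng(0)
V = rng.normal(size=(4, 8))       # trainable direction parameters
g = np.ones(4)                    # trainable per-row scales
W = weight_normalized(V, g)
print(np.linalg.norm(W, axis=1))  # all 1.0 while g stays at its init
```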
Fast Chirplet Transform to Enhance CNN Machine Listening - Validation on Animal calls and Speech | The scattering framework offers an optimal hierarchical convolutional decomposition according to its kernels. Convolutional Neural Net (CNN) can be seen as an optimal kernel decomposition, nevertheless it requires large amount of training data to learn its kernels. We propose a trade-off between these two approaches: a Chirplet kernel as an efficient Q constant bioacoustic representation to pretrain CNN. First we motivate Chirplet bioinspired auditory representation. Second we give the first algorithm (and code) of a Fast Chirplet Transform (FCT). Third, we demonstrate the computation efficiency of FCT on large environmental data base: months of Orca recordings, and 1000 Birds species from the LifeClef challenge. Fourth, we validate FCT on the vowels subset of the Speech TIMIT dataset. The results show that FCT accelerates CNN when it pretrains low level layers: it reduces training duration by -28% for birds classification, and by -26% for vowels classification. Scores are also enhanced by FCT pretraining, with a relative gain of +7.8% of Mean Average Precision on birds, and +2.3% of vowel accuracy against raw audio CNN. We conclude on perspectives on tonotopic FCT deep machine listening, and inter-species bioacoustic transfer learning to generalise the representation of animal communication systems. | While I understand the difficulty of collecting audio data from animals, I think this type of feature engineering does not go in the right direction. I would rather see a model that learns the feature representation from data. I would think it should be possible to collect a more substantial corpus in zoos / nature etc., and then train a generative model. The underlying learned feature representation could then be used to feed a classifier. I'm not familiar with the particularities of this task, so it's hard to judge the improvements from using chirplets. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nFast Chirplet Transform to Enhance CNN Machine Listening - Validation on Animal calls and Speech\nThe scattering framework offers an optimal hierarchical convolutional decomposition according to its kernels. Convolutional Neural Net (CNN) can be seen asan optimal kernel decomposition, nevertheless it requires large amount of trainingdata to learn its kernels. We propose a trade-off between these two approaches: a Chirplet kernel as an efficient Q constant bioacoustic representation to pretrainCNN. First we motivate Chirplet bioinspired auditory representation. Second we give the first algorithm (and code) of a Fast Chirplet Transform (FCT). Third, we demonstrate the computation efficiency of FCT on large environmental data base: months of Orca recordings, and 1000 Birds species from the LifeClef challenge. Fourth, we validate FCT on the vowels subset of the Speech TIMIT dataset. The results show that FCT accelerates CNN when it pretrains low level layers: it reduces training duration by -28% for birds classification, and by -26% for vowels classification. Scores are also enhanced by FCT pretraining, with a relative gain of +7.8% of Mean Average Precision on birds, and +2.3% of vowel accuracy against raw audio CNN. We conclude on perspectives on tonotopic FCT deep machine listening, and inter-species bioacoustic transfer learning to generalise the representation of animal communication systems.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nWhile I understand the difficulty of collecting audio data from animals, I think this type of feature engineering does not go in the right direction. I would rather see a model than learns the feature representation from data. I would think it should be possible to collect a more substantial corpus in zoos / nature etc, and then train a generative model. The underlying learned feature representation could be then used to feed a classifier. I*m not familiar with the particularities of this task, it*s hard to judge the improvements by using chirplets.",
"role": "assistant"
}
]
| [
"While I understand the difficulty of collecting audio data from animals, I think this type of feature engineering does not go in the right direction.",
"I would rather see a model than learns the feature representation from data.",
"I would think it should be possible to collect a more substantial corpus in zoos / nature etc, and then train a generative model.",
"The underlying learned feature representation could be then used to feed a classifier.",
"I*m not familiar with the particularities of this task, it*s hard to judge the improvements by using chirplets."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 0,
"materials_and_methods": 4,
"praise": 0,
"presentation_and_reporting": 2,
"results_and_discussion": 0,
"suggestion_and_solution": 3,
"total": 5
} | 0.4 | 0 | 0 | 0.8 | 0 | 0.4 | 0 | 0.6 | 5 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 2.2 | 1.189924 | 1.010076 |
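For readers unfamiliar with the atom itself: a chirplet is a windowed chirp, i.e., a wavelet-like kernel whose instantaneous frequency sweeps over its support. A minimal sketch with illustrative sample rate, sweep range, and window width; the paper's Fast Chirplet Transform algorithm itself is not reproduced here.

```python
import numpy as np

def chirplet(n, fs, f0, f1, sigma):
    """One Gaussian-windowed linear chirp whose instantaneous frequency
    sweeps from f0 to f1 Hz across the kernel support (fs in Hz)."""
    t = np.arange(n) / fs                          # support in seconds
    c = (f1 - f0) / t[-1]                          # chirp rate, Hz per second
    phase = 2 * np.pi * (f0 * t + 0.5 * c * t ** 2)
    window = np.exp(-0.5 * ((t - t[-1] / 2) / sigma) ** 2)
    return window * np.exp(1j * phase)

# A small constant-Q-style bank: one atom per octave, each sweeping over
# half an octave, all kept below the Nyquist frequency of fs = 16 kHz.
bank = np.stack([chirplet(512, 16000, f0, 1.5 * f0, 0.004)
                 for f0 in 440.0 * 2.0 ** np.arange(4)])
print(bank.shape)  # (4, 512)
```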
Fast Chirplet Transform to Enhance CNN Machine Listening - Validation on Animal calls and Speech | The scattering framework offers an optimal hierarchical convolutional decomposition according to its kernels. Convolutional Neural Net (CNN) can be seen as an optimal kernel decomposition, nevertheless it requires large amount of training data to learn its kernels. We propose a trade-off between these two approaches: a Chirplet kernel as an efficient Q constant bioacoustic representation to pretrain CNN. First we motivate Chirplet bioinspired auditory representation. Second we give the first algorithm (and code) of a Fast Chirplet Transform (FCT). Third, we demonstrate the computation efficiency of FCT on large environmental data base: months of Orca recordings, and 1000 Birds species from the LifeClef challenge. Fourth, we validate FCT on the vowels subset of the Speech TIMIT dataset. The results show that FCT accelerates CNN when it pretrains low level layers: it reduces training duration by -28% for birds classification, and by -26% for vowels classification. Scores are also enhanced by FCT pretraining, with a relative gain of +7.8% of Mean Average Precision on birds, and +2.3% of vowel accuracy against raw audio CNN. We conclude on perspectives on tonotopic FCT deep machine listening, and inter-species bioacoustic transfer learning to generalise the representation of animal communication systems. | Pros: - Introduction of a nice filter bank and its implementation - Good numerical results - Refinement of the representation via backpropagation, and a demonstration that it speeds up learning Cons: - The algorithms (section 3.1) are not necessary, and they even affect the presentation of the paper. However, source code would be great! - The link with a scattering transform is not clear - Sometimes (as mentioned in some of my comments), the writing could be improved. From a personal point of view, I also believe the negative points I mention can be easily removed. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nFast Chirplet Transform to Enhance CNN Machine Listening - Validation on Animal calls and Speech\nThe scattering framework offers an optimal hierarchical convolutional decomposition according to its kernels. Convolutional Neural Net (CNN) can be seen asan optimal kernel decomposition, nevertheless it requires large amount of trainingdata to learn its kernels. We propose a trade-off between these two approaches: a Chirplet kernel as an efficient Q constant bioacoustic representation to pretrainCNN. First we motivate Chirplet bioinspired auditory representation. Second we give the first algorithm (and code) of a Fast Chirplet Transform (FCT). Third, we demonstrate the computation efficiency of FCT on large environmental data base: months of Orca recordings, and 1000 Birds species from the LifeClef challenge. Fourth, we validate FCT on the vowels subset of the Speech TIMIT dataset. The results show that FCT accelerates CNN when it pretrains low level layers: it reduces training duration by -28% for birds classification, and by -26% for vowels classification. Scores are also enhanced by FCT pretraining, with a relative gain of +7.8% of Mean Average Precision on birds, and +2.3% of vowel accuracy against raw audio CNN. We conclude on perspectives on tonotopic FCT deep machine listening, and inter-species bioacoustic transfer learning to generalise the representation of animal communication systems.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nPros: - Introduction of a nice filter banks and its implementation - Good numerical results - Refinement of the representation via back propagation, and a demonstration that it speeds up learning Cons: - The algorithms (section 3.1) are not necessary, and they even affect the presentation of the paper. However, a source code would be great! - The link with a scattering transform is not clear - Sometimes (as mentionned in some of my comments), the writing could be improved. From a personal point of view, I also believe the negative points I mention can be easily removed.",
"role": "assistant"
}
]
| [
"Pros: - Introduction of a nice filter banks and its implementation - Good numerical results - Refinement of the representation via back propagation, and a demonstration that it speeds up learning Cons: - The algorithms (section 3.1) are not necessary, and they even affect the presentation of the paper.",
"However, a source code would be great!",
"- The link with a scattering transform is not clear - Sometimes (as mentionned in some of my comments), the writing could be improved.",
"From a personal point of view, I also believe the negative points I mention can be easily removed."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 0,
"materials_and_methods": 2,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 1,
"suggestion_and_solution": 3,
"total": 4
} | 0.25 | 0 | 0 | 0.5 | 0.25 | 0.5 | 0.25 | 0.75 | 4 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 2.5 | 1.221623 | 1.278377 |
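A hypothetical sketch of the pretraining idea both reviews of this paper discuss: fill a first 1-D convolution layer with fixed chirp-like kernels instead of a random initialization, then let backpropagation refine them like any other layer. Filter count, frequency range, and kernel length below are assumptions for illustration.

```python
import numpy as np

def chirp_kernel(n, fs, f0, f1):
    """Real-valued linear chirp sweeping from f0 to f1 Hz over n samples."""
    t = np.arange(n) / fs
    return np.cos(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / t[-1] * t ** 2))

fs, n = 16000, 128
W0 = np.stack([chirp_kernel(n, fs, f0, 1.5 * f0)
               for f0 in np.linspace(200.0, 3000.0, 8)])
W0 /= np.linalg.norm(W0, axis=1, keepdims=True)    # unit-energy filters

x = np.random.randn(fs)                            # 1 s of toy audio
features = np.stack([np.convolve(x, w, mode="valid") for w in W0])
print(features.shape)  # (8, 15873): first-layer feature maps before fine-tuning
```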
Symmetry-Breaking Convergence Analysis of Certain Two-layered Neural Networks with ReLU nonlinearity | In this paper, we use dynamical system to analyze the nonlinear weight dynamics of two-layered bias-free networks in the form of , where is ReLU nonlinearity. We assume that the input follow Gaussian distribution. The network is trained using gradient descent to mimic the output of a teacher network of the same size with fixed parameters using loss. We first show that when , the nonlinear dynamics can be written in close form, and converges to with at least probability, if random weight initializations of proper standard derivation ( ) is used, verifying empirical practice. For networks with many ReLU nodes ( ), we apply our close form dynamics and prove that when the teacher parameters forms orthonormal bases, (1) a symmetric weight initialization yields a convergence to a saddle point and (2) a certain symmetry-breaking weight initialization yields global convergence to without local minima. To our knowledge, this is the first proof that shows global convergence in nonlinear neural network without unrealistic assumptions on the independence of ReLU activations. In addition, we also give a concise gradient update formulation for a multilayer ReLU network when it follows a teacher of the same size with loss. Simulations verify our theoretical analysis. | The paper proposes a convergence analysis of some two-layer NNs with ReLUs. It is not the first such analysis, but maybe it is novel in the assumptions used in the analysis and in its focus on the ReLU nonlinearity, which is pretty popular in practice. The paper is quite hard to read, with many English mistakes and typos. Nevertheless, the analysis seems to be generally correct. The novelty and the key insights are however not always well motivated or presented. And the argument that the work uses realistic assumptions (Gaussian inputs for example) as opposed to other works is quite debatable actually. Overall, the paper looks like correct analysis work, but its form is really suboptimal in terms of writing/presentation, and the novelty and relevance of the results are not always very clear, unfortunately. The main results and intuition should be more clearly presented, and details could be moved to appendices, for example - that could only help to improve the visibility and impact of these interesting results. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSymmetry-Breaking Convergence Analysis of Certain Two-layered Neural Networks with ReLU nonlinearity\nIn this paper, we use dynamical system to analyze the nonlinear weight dynamics of two-layered bias-free networks in the form of , where is ReLU nonlinearity. We assume that the input follow Gaussian distribution. The network is trained using gradient descent to mimic the output of a teacher network of the same size with fixed parameters using loss. We first show that when , the nonlinear dynamics can be written in close form, and converges to with at least probability, if random weight initializations of proper standard derivation ( ) is used, verifying empirical practice. For networks with many ReLU nodes ( ), we apply our close form dynamics and prove that when the teacher parameters forms orthonormal bases, (1) a symmetric weight initialization yields a convergence to a saddle point and (2) a certain symmetry-breaking weight initialization yields global convergence to without local minima. To our knowledge, this is the first proof that shows global convergence in nonlinear neural network without unrealistic assumptions on the independence of ReLU activations. In addition, we also give a concise gradient update formulation for a multilayer ReLU network when it follows a teacher of the same size with loss. Simulations verify our theoretical analysis.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes a convergence analysis of some two-layer NNs with ReLUs. It is not the first such analysis, but maybe it is novel on the assumptions used in the analysis, and the focus on ReLU nonlinearity that is pretty popular in practice. The paper is quite hard to read, with many English mistakes and typos. Nevertheless, the analysis seems to be generally correct. The novelty and the key insights are however not always well motivated or presented. And the argument that the work uses realistic assumptions (Gaussian inputs for example) as opposed to other works, is quite debatable actually. Overall, the paper looks like a correct analysis work, but its form is really suboptimal in terms of writing/presentation, and the novelty and relevance of the results are not always very clear, unfortunately. The main results and intuition should be more clearly presented, and details could be moved to appendices for example - that could only help to improve the visibility and impact of these interesting results.",
"role": "assistant"
}
]
| [
"The paper proposes a convergence analysis of some two-layer NNs with ReLUs.",
"It is not the first such analysis, but maybe it is novel on the assumptions used in the analysis, and the focus on ReLU nonlinearity that is pretty popular in practice.",
"The paper is quite hard to read, with many English mistakes and typos.",
"Nevertheless, the analysis seems to be generally correct.",
"The novelty and the key insights are however not always well motivated or presented.",
"And the argument that the work uses realistic assumptions (Gaussian inputs for example) as opposed to other works, is quite debatable actually.",
"Overall, the paper looks like a correct analysis work, but its form is really suboptimal in terms of writing/presentation, and the novelty and relevance of the results are not always very clear, unfortunately.",
"The main results and intuition should be more clearly presented, and details could be moved to appendices for example - that could only help to improve the visibility and impact of these interesting results."
] | {
"criticism": 4,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 4,
"praise": 3,
"presentation_and_reporting": 3,
"results_and_discussion": 2,
"suggestion_and_solution": 1,
"total": 8
} | 0.5 | 0 | 0.375 | 0.5 | 0.375 | 0.375 | 0.25 | 0.125 | 8 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.5 | 2.105439 | 0.394561 |
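The setting in the abstract above is easy to replicate numerically, as its last line suggests: a student of the same size fits a fixed teacher with orthonormal weights on Gaussian inputs under the l2 loss. The sketch below uses plain gradient descent from a small random initialization; sizes, learning rate, and step count are illustrative, and this is not the paper's closed-form dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, n = 20, 4, 4096
W_star, _ = np.linalg.qr(rng.normal(size=(d, K)))    # orthonormal teacher weights
X = rng.normal(size=(n, d))                          # Gaussian inputs
y = np.maximum(X @ W_star, 0.0).sum(axis=1)          # teacher output: sum of ReLUs

W = 0.5 * rng.normal(size=(d, K)) / np.sqrt(d)       # small random student init
for _ in range(3000):
    act = X @ W
    err = np.maximum(act, 0.0).sum(axis=1) - y       # l2-loss residual
    grad = X.T @ (err[:, None] * (act > 0)) / n      # dL/dW, one column per unit
    W -= 0.1 * grad
print(np.mean((np.maximum(X @ W, 0.0).sum(axis=1) - y) ** 2))  # typically near zero
```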
Efficient Softmax Approximation for GPUs | We propose an approximate strategy to efficiently train neural network based language models over very large vocabularies. Our approach, called adaptive softmax, circumvents the linear dependency on the vocabulary size by exploiting the unbalanced word distribution to form clusters that explicitly minimize the expectation of computational complexity. Our approach further reduces the computational cost by exploiting the specificities of modern architectures and matrix-matrix vector operations, making it particularly suited for graphical processing units. Our experiments carried out on standard benchmarks, such as EuroParl and One Billion Word, show that our approach brings a large gain in efficiency over standard approximations while achieving an accuracy close to that of the full softmax. | The authors introduce an adaptive softmax approximation tailored for faster performance on GPUs. The key idea, which is very sensible, is to use a class-based hierarchical softmax, but where the clusters/hierarchy are distributed such that the resulting matrix multiplications are optimally-sized for GPU computation, based on their empirical tests. Their results indicate that the system does indeed work very well. In terms of presentation, I found the paper to have both clear and unclear elements. Fortunately, the underlying concepts and logic seem quite clear. Unfortunately, at various points, the writing is not. There are various minor typos (as mentioned by AnonReviewer2, in addition to some other spots, e.g. the notation describing the recurrent network in Section 3 mentions an x_t which is surely different from the x_t used in the previous paragraph on regular feedforward NNs; I think it belonged in the equation for h_t; the use of the two matrices A and P in Eq2 is strange, etc). Also, while Section 4.2 (Intuition for 2-cluster case) was a good idea to include and helpful, and while the *concepts* underlying the complexity analysis were straightforward, it could be made a lot clearer by (a) adding an additional figure such as Figure 2, along with (b) a few well-placed additional sentences unpacking the logic of the argument into easier-to-follow steps. For example, it was only when I saw Eq (6) and (7) combined with Fig(2) that the analysis on the previous page made more sense in terms of arriving at the eq for the complexity of putting the head of the distribution in the root of the tree. (Perhaps an Appendix might be the most appropriate place to add such an explanation). | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEfficient Softmax Approximation for GPUs\nWe propose an approximate strategy to efficiently train neural network based language models over very large vocabularies. Our approach, called adaptive softmax, circumvents the linear dependency on the vocabulary size by exploiting the unbalanced word distribution to form clusters that explicitly minimize the expectation of computational complexity. Our approach further reduces the computational cost by exploiting the specificities of modern architectures and matrix-matrix vector operations, making it particularly suited for graphical processing units. Our experiments carried out on standard benchmarks, such as EuroParl and One Billion Word, show that our approach brings a large gain in efficiency over standard approximations while achieving an accuracy close to that of the full softmax.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors introduce an adaptive softmax approximation tailored for faster performance on GPUs. The key idea, which is very sensible, is to use a class-based hierarchical softmax, but where the clusters/hierarchy are distributed such that the resulting matrix multiplications are optimally-sized for GPU computation, based on their empirical tests. Their results indicate that the system does indeed work very well. In terms of presentation, I found the paper to have both clear and unclear elements. Fortunately, the underlying concepts and logic seem quite clear. Unfortunately, at various points, the writing is not. There are various minor typos (as mentioned by AnonReviewer2, in addition to some other spots, e.g. the notation describing recurrent network in Section 3 mentions an x_t which is surely different from the x_t used in the previous paragraph on regular feedforward NN*s, i think it belonged in the equation for h_t; the use of the two matrices A and P in Eq2 is strange, etc). Also, while Section 4.2 (Intuition for 2-cluster case) was a good idea to include and helpful, and while the *concepts* underlying the complexity analysis were straightforward, it could be made a lot clearer by (a) adding an additional figure such as Figure 2, along with (b) a few well-placed additional sentences unpacking the logic of the argument into easier-to-follow steps. For example, it was only when I saw Eq (6) and (7) combined with Fig(2) that the analysis on the previous page made more sense in terms of arriving at the eq for the complexity of putting the head of the distribution in the root of the tree. (Perhaps an Appendix might be the most appropriate place to add such an explanation).",
"role": "assistant"
}
]
| [
"The authors introduce an adaptive softmax approximation tailored for faster performance on GPUs.",
"The key idea, which is very sensible, is to use a class-based hierarchical softmax, but where the clusters/hierarchy are distributed such that the resulting matrix multiplications are optimally-sized for GPU computation, based on their empirical tests.",
"Their results indicate that the system does indeed work very well.",
"In terms of presentation, I found the paper to have both clear and unclear elements.",
"Fortunately, the underlying concepts and logic seem quite clear.",
"Unfortunately, at various points, the writing is not.",
"There are various minor typos (as mentioned by AnonReviewer2, in addition to some other spots, e.g.",
"the notation describing recurrent network in Section 3 mentions an x_t which is surely different from the x_t used in the previous paragraph on regular feedforward NN*s, i think it belonged in the equation for h_t; the use of the two matrices A and P in Eq2 is strange, etc).",
"Also, while Section 4.2 (Intuition for 2-cluster case) was a good idea to include and helpful, and while the *concepts* underlying the complexity analysis were straightforward, it could be made a lot clearer by (a) adding an additional figure such as Figure 2, along with (b) a few well-placed additional sentences unpacking the logic of the argument into easier-to-follow steps.",
"For example, it was only when I saw Eq (6) and (7) combined with Fig(2) that the analysis on the previous page made more sense in terms of arriving at the eq for the complexity of putting the head of the distribution in the root of the tree.",
"(Perhaps an Appendix might be the most appropriate place to add such an explanation)."
] | {
"criticism": 4,
"example": 4,
"importance_and_relevance": 0,
"materials_and_methods": 4,
"praise": 1,
"presentation_and_reporting": 6,
"results_and_discussion": 3,
"suggestion_and_solution": 2,
"total": 11
} | 0.363636 | 0.363636 | 0 | 0.363636 | 0.090909 | 0.545455 | 0.272727 | 0.181818 | 11 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 2.181818 | 2.118688 | 0.06313 |
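The two-cluster intuition the review asks to see unpacked is easy to make concrete. Below is a minimal PyTorch-style sketch of a class-based softmax with one frequent-word head plus a single tail-cluster gate, and a reduced-dimension tail; the class name, the `cutoff` split, and the `d_tail` projection size are illustrative assumptions, not the authors' code. (PyTorch later shipped a fuller implementation of this idea as `nn.AdaptiveLogSoftmaxWithLoss`.)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoClusterSoftmax(nn.Module):
    """Minimal two-cluster adaptive-softmax sketch (illustrative names).

    Words [0, cutoff) form the frequent "head"; the head softmax carries
    one extra logit standing for the whole tail cluster. Tail words use a
    smaller projection (d_model -> d_tail), so the large matrix multiply
    over the rare vocabulary is both cheaper and rarely executed.
    """

    def __init__(self, d_model, vocab_size, cutoff, d_tail):
        super().__init__()
        self.cutoff = cutoff
        self.head = nn.Linear(d_model, cutoff + 1)            # head words + tail gate
        self.tail_proj = nn.Linear(d_model, d_tail, bias=False)
        self.tail = nn.Linear(d_tail, vocab_size - cutoff)

    def log_prob(self, h, target):
        """h: (batch, d_model) hidden states; target: (batch,) word ids."""
        head_logp = F.log_softmax(self.head(h), dim=-1)       # (batch, cutoff+1)
        in_head = target < self.cutoff
        out = torch.empty_like(target, dtype=h.dtype)
        # Frequent words: read their log-probability straight from the head.
        out[in_head] = head_logp[in_head, target[in_head]]
        # Rare words: log P(tail cluster) + log P(word | tail cluster).
        in_tail = ~in_head
        if in_tail.any():
            tail_logp = F.log_softmax(self.tail(self.tail_proj(h[in_tail])), dim=-1)
            rows = torch.arange(tail_logp.size(0), device=h.device)
            out[in_tail] = head_logp[in_tail, self.cutoff] + tail_logp[rows, target[in_tail] - self.cutoff]
        return out
```

For a 100k-word vocabulary with, say, `cutoff=2000` and `d_tail=128`, the matmul executed on most tokens shrinks from `d_model x 100000` to `d_model x 2001`, which is where the claimed GPU efficiency comes from; the paper's contribution is choosing the cluster boundaries to minimize this expected cost.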
Dataset Augmentation in Feature Space | Dataset augmentation, the practice of applying a wide array of domain-specific transformations to synthetically expand a training set, is a standard tool in supervised learning. While effective in tasks such as visual recognition, the set of transformations must be carefully designed, implemented, and tested for every new domain, limiting its re-use and generality. In this paper, we adopt a simpler, domain-agnostic approach to dataset augmentation. We start with existing data points and apply simple transformations such as adding noise, interpolating, or extrapolating between them. Our main insight is to perform the transformation not in input space, but in a learned feature space. A re-kindling of interest in unsupervised representation learning makes this technique timely and more effective. It is a simple proposal, but to-date one that has not been tested empirically. Working in the space of context vectors generated by sequence-to-sequence models, we demonstrate a technique that is effective for both static and sequential data. | The concept of data augmentation in the embedding space is very interesting. The method is well presented and also justified on different tasks such as spoken digits and image recognition etc. One comment on the comparison is the use of a simple 2-layer MLP as the baseline model throughout all the tasks. It's not clear whether the gains are maintained when a more complex baseline model is used. Another comment is that the augmented context vectors are used for classification, just wondering how it compares to using the reconstructed inputs. And furthermore, as in Table 4, both input and feature space extrapolation improve the performance; are these two complementary or not? | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDataset Augmentation in Feature Space\nDataset augmentation, the practice of applying a wide array of domain-specific transformations to synthetically expand a training set, is a standard tool in supervised learning. While effective in tasks such as visual recognition, the set of transformations must be carefully designed, implemented, and tested for every new domain, limiting its re-use and generality. In this paper, we adopt a simpler, domain-agnostic approach to dataset augmentation. We start with existing data points and apply simple transformations such as adding noise, interpolating, or extrapolating between them. Our main insight is to perform the transformation not in input space, but in a learned feature space. A re-kindling of interest in unsupervised representation learning makes this technique timely and more effective. It is a simple proposal, but to-date one that has not been tested empirically. Working in the space of context vectors generated by sequence-to-sequence models, we demonstrate a technique that is effective for both static and sequential data.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe concept of data augmentation in the embedding space is very interesting. The method is well presented and also justified on different tasks such as spoken digits and image recognition etc. One comments of the comparison is the use of a simple 2-layer MLP as the baseline model throughout all the tasks. It*s not clear whether the gains maintain when a more complex baseline model is used. Another comment is that the augmented context vectors are used for classification, just wondering how does it compare to using the reconstructed inputs. And furthermore, as in Table 4, both input and feature space extrapolation improves the performance, whether these two are complementary or not?",
"role": "assistant"
}
]
| [
"The concept of data augmentation in the embedding space is very interesting.",
"The method is well presented and also justified on different tasks such as spoken digits and image recognition etc.",
"One comments of the comparison is the use of a simple 2-layer MLP as the baseline model throughout all the tasks.",
"It*s not clear whether the gains maintain when a more complex baseline model is used.",
"Another comment is that the augmented context vectors are used for classification, just wondering how does it compare to using the reconstructed inputs.",
"And furthermore, as in Table 4, both input and feature space extrapolation improves the performance, whether these two are complementary or not?"
] | {
"criticism": 1,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 6,
"praise": 2,
"presentation_and_reporting": 2,
"results_and_discussion": 2,
"suggestion_and_solution": 0,
"total": 6
} | 0.166667 | 0.166667 | 0.166667 | 1 | 0.333333 | 0.333333 | 0.333333 | 0 | 6 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 2.5 | 1.726661 | 0.773339 |
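Once the data is encoded, the transformations described in the abstract reduce to a few lines of vector arithmetic. The sketch below pairs each encoded point with its nearest same-class neighbour and interpolates or extrapolates between them; `lam=0.5` matches the setting the paper reports, but the function name and the brute-force neighbour search are assumptions for illustration.

```python
import numpy as np

def augment_in_feature_space(Z, y, lam=0.5, mode="extrapolate"):
    """Per-class augmentation in a learned feature space.

    Z: (n, d) encoded context vectors; y: (n,) labels.
    interpolate:  z' = z_i + lam * (z_j - z_i)
    extrapolate:  z' = z_i + lam * (z_i - z_j)
    where z_j is the nearest same-class neighbour of z_i.
    """
    Z_new, y_new = [], []
    for c in np.unique(y):
        Zc = Z[y == c]
        # Brute-force pairwise distances; fine for modest class sizes.
        d = np.linalg.norm(Zc[:, None, :] - Zc[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        Zj = Zc[d.argmin(axis=1)]
        if mode == "extrapolate":
            Z_new.append(Zc + lam * (Zc - Zj))
        else:
            Z_new.append(Zc + lam * (Zj - Zc))
        y_new.append(np.full(len(Zc), c))
    return np.concatenate(Z_new), np.concatenate(y_new)
```

In the paper's pipeline, `Z` would be the context vectors of a sequence-to-sequence autoencoder and the synthetic vectors are fed to the classifier directly; the review's question about instead decoding them back to input space is exactly the comparison this sketch does not settle.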
Online Structure Learning for Sum-Product Networks with Gaussian Leaves | Sum-product networks have recently emerged as an attractive representation due to their dual view as a special type of deep neural network with clear semantics and a special type of probabilistic graphical model for which inference is always tractable. Those properties follow from some conditions (i.e., completeness and decomposability) that must be respected by the structure of the network. As a result, it is not easy to specify a valid sum-product network by hand and therefore structure learning techniques are typically used in practice. This paper describes the first {\em online} structure learning technique for continuous SPNs with Gaussian leaves. We also introduce an accompanying new parameter learning technique. | The authors present an online learning method for learning the structure of sum-product networks. The algorithm assumes Gaussian coordinate-wise marginal distributions, and learns both parameters and structure online. The parameters are updated by a recursive procedure that reweights nodes in the network that most contribute to the likelihood of the current data point. The structure learning is done by either merging independent product Gaussian nodes into multivariate leaf nodes, or creating a mixture over the two nodes when the multivariate would be too large. The fact that the dataset is scaled to some larger datasets (in terms of the number of datapoints) is promising, although the number of variables is still quite small. Current benchmarks for tractable continuous density modeling with neural networks include the NICE and Real-NVP families of models, which can be scaled to both large numbers of datapoints and variables. Intractable methods like GAN, GenMMN, VAE have the same property. The main issue with this work for the ICLR audience is the use of mainly a set of SPN-specific datasets that are not used in the deep learning generative modeling literature. The use of GenMMN as a baseline is also not a good choice to bridge the gap to the neural community, as its Parzen-window based likelihood evaluation is not really meaningful. Better ways to evaluate the likelihood through annealed importance sampling are discussed in *On the Quantitative Analysis of Decoder-Based Generative Models* by Wu et al. I would recommend the use of a simple VAE type model to get a lower bound on the likelihood, or something like Real-NVP. Most neural network density models are scalable to large numbers of observations as well as instances, and it is not clear that this method scales well *horizontally* like this. Evaluating the feasibility of modeling something like MNIST would be interesting. SPNs have the strength that not only marginal but also various types of conditional queries are tractable, but performance on this is not evaluated or compared. One interesting application could be in imputation of unknown pixels or color channels in images, for which there is not currently a high-performing tractable model. Despite the disconnect from other ICLR generative modeling literature, the algorithm here seems simple and intuitive and convincingly works better than the previous state of the art for online SPN structure learning. I think VAE is a much better baseline for continuous data than GenMMN when attempting to compare to neural network approaches.
Further, the sum-product network could actually be combined with such deep latent variable models as an observation model or posterior, which could be a very powerful combination. I would like it if these SPN models were better known by the ICLR probabilistic modeling community, but I do not know if this paper does enough to make them relevant. As with the other reviewers, I am not an expert on SPNs. However, this seems to be a simple and effective algorithm for online structure induction, and the scalability aspect is something that is important in much recent work in the learning of representations. I think it is good enough for publication, although I would prefer to see many of the above additions to more clearly bridge the gap with other literature in deep generative modeling. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOnline Structure Learning for Sum-Product Networks with Gaussian Leaves\nSum-product networks have recently emerged as an attractive representation due to their dual view as a special type of deep neural network with clear semantics and a special type of probabilistic graphical model for which inference is always tractable. Those properties follow from some conditions (i.e., completeness and decomposability) that must be respected by the structure of the network. As a result, it is not easy to specify a valid sum-product network by hand and therefore structure learning techniques are typically used in practice. This paper describes the first {em online} structure learning technique for continuous SPNs with Gaussian leaves. We also introduce an accompanying new parameter learning technique.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors present an online learning method for learning the structure of sum-product networks. The algorithm assumes Gaussian coordinate-wise marginal distributions, and learns both parameters and structure online. The parameters are updated by a recursive procedure that reweights nodes in the network that most contribute to the likelihood of the current data point. The structure learning is done by either merging independent product Gaussian nodes into multivariate leaf nodes, or creating a mixture over the two nodes when the multivariate would be too large. The fact that the dataset is scaled to some larger datasets (in terms of the number of datapoints) is promising, although the number of variables is still quite small. Current benchmarks for tractable continuous density modeling with neural networks include the NICE and Real-NVP families of models, which can be scaled to both large numbers of datapoints and variables. Intractable methods like GAN, GenMMN, VAE have the same property. The main issue with this work for the ICLR audience is the use of mainly a set of SPN-specific datasets that are not used in the deep learning generative modeling literature. The use of GenMMN as a baseline is also not a good choice to bridge the gap to the neural community, as its Parzen-window based likelihood evaluation is not really meaningful. Better ways to evaluate the likelihood through annealed importance sampling are discussed in *On the Quantitative Analysis of Decoder-Based Generative Models* by Wu et al. I would recommend the use of a simple VAE type model to get a lower bound on the likelihood, or something like Real-NVP. Most neural network density models are scalable to large numbers of observations as well as instances, and it is not clear that this method scales well *horizontally* like this. Evaluating the feasibility of modeling something like MNIST would be interesting. SPNs have the strength that not only marginal but also various type of conditional queries are tractable, but performance on this is not evaluated or compared. 
One interesting application could be in imputation of unknown pixels or color channels in images, for which there is not currently a high-performing tractable model. Despite the disconnect from other ICLR generative modeling literature, the algorithm here seems simple and intuitive and convincingly works better than the previous state of the art for online SPN structure learning. I think VAE is a much better baseline for continuous data than GenMMN when attempting to compare to neural network approaches. Further, the sum-product network could actually be combined with such deep latent variable models as an observation model or posterior, which could be a very powerful combination. I would like it if these SPN models were better known by the ICLR probabilistic modeling community, but I do not know if this paper does enough to make them relevant. As with the other reviewers, I am not an expert on SPNs. However, this seems to be a simple and effective algorithm for online structure induction, and the scalability aspect is something that is important in much recent work in the learning of representations. I think it is good enough for publication, although I would prefer to see many of the above additions to more clearly bridge the gap with other literature in deep generative modeling.",
"role": "assistant"
}
]
| [
"The authors present an online learning method for learning the structure of sum-product networks.",
"The algorithm assumes Gaussian coordinate-wise marginal distributions, and learns both parameters and structure online.",
"The parameters are updated by a recursive procedure that reweights nodes in the network that most contribute to the likelihood of the current data point.",
"The structure learning is done by either merging independent product Gaussian nodes into multivariate leaf nodes, or creating a mixture over the two nodes when the multivariate would be too large.",
"The fact that the dataset is scaled to some larger datasets (in terms of the number of datapoints) is promising, although the number of variables is still quite small.",
"Current benchmarks for tractable continuous density modeling with neural networks include the NICE and Real-NVP families of models, which can be scaled to both large numbers of datapoints and variables.",
"Intractable methods like GAN, GenMMN, VAE have the same property.",
"The main issue with this work for the ICLR audience is the use of mainly a set of SPN-specific datasets that are not used in the deep learning generative modeling literature.",
"The use of GenMMN as a baseline is also not a good choice to bridge the gap to the neural community, as its Parzen-window based likelihood evaluation is not really meaningful.",
"Better ways to evaluate the likelihood through annealed importance sampling are discussed in *On the Quantitative Analysis of Decoder-Based Generative Models* by Wu et al.",
"I would recommend the use of a simple VAE type model to get a lower bound on the likelihood, or something like Real-NVP.",
"Most neural network density models are scalable to large numbers of observations as well as instances, and it is not clear that this method scales well *horizontally* like this.",
"Evaluating the feasibility of modeling something like MNIST would be interesting.",
"SPNs have the strength that not only marginal but also various type of conditional queries are tractable, but performance on this is not evaluated or compared.",
"One interesting application could be in imputation of unknown pixels or color channels in images, for which there is not currently a high-performing tractable model.",
"Despite the disconnect from other ICLR generative modeling literature, the algorithm here seems simple and intuitive and convincingly works better than the previous state of the art for online SPN structure learning.",
"I think VAE is a much better baseline for continuous data than GenMMN when attempting to compare to neural network approaches.",
"Further, the sum-product network could actually be combined with such deep latent variable models as an observation model or posterior, which could be a very powerful combination.",
"I would like it if these SPN models were better known by the ICLR probabilistic modeling community, but I do not know if this paper does enough to make them relevant.",
"As with the other reviewers, I am not an expert on SPNs.",
"However, this seems to be a simple and effective algorithm for online structure induction, and the scalability aspect is something that is important in much recent work in the learning of representations.",
"I think it is good enough for publication, although I would prefer to see many of the above additions to more clearly bridge the gap with other literature in deep generative modeling."
] | {
"criticism": 6,
"example": 0,
"importance_and_relevance": 4,
"materials_and_methods": 22,
"praise": 5,
"presentation_and_reporting": 0,
"results_and_discussion": 6,
"suggestion_and_solution": 6,
"total": 22
} | 0.272727 | 0 | 0.181818 | 1 | 0.227273 | 0 | 0.272727 | 0.272727 | 22 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.227273 | 0.948896 | 1.278377 |
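One concrete piece of the online machinery is easy to pin down: each Gaussian leaf must maintain running sufficient statistics from a single pass over the stream. The sketch below uses a standard Welford-style update; the class name and the correlation test that would trigger restructuring (merging scopes into a multivariate leaf, or introducing a mixture) are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np

class GaussianLeaf:
    """Running mean/covariance for a Gaussian leaf, updated one point at a
    time, as an online SPN structure learner needs for its leaves."""

    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.M2 = np.zeros((dim, dim))   # accumulated outer products of deviations

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.M2 += np.outer(delta, x - self.mean)

    def cov(self):
        return self.M2 / max(self.n - 1, 1)

def looks_correlated(leaf, i, j, thresh=0.1):
    """Assumed restructuring trigger: two variables currently treated as
    independent get their scopes merged once their empirical correlation
    exceeds a threshold."""
    c = leaf.cov()
    denom = np.sqrt(c[i, i] * c[j, j])
    return denom > 0 and abs(c[i, j]) / denom > thresh
```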
Tuning Recurrent Neural Networks with Reinforcement Learning | The approach of training sequence models using supervised learning and next-step prediction suffers from known failure modes. For example, it is notoriously difficult to ensure multi-step generated sequences have coherent global structure. We propose a novel sequence-learning approach in which we use a pre-trained Recurrent Neural Network (RNN) to supply part of the reward value in a Reinforcement Learning (RL) model. Thus, we can refine a sequence predictor by optimizing for some imposed reward functions, while maintaining good predictive properties learned from data. We propose efficient ways to solve this by augmenting deep Q-learning with a cross-entropy reward and deriving novel off-policy methods for RNNs from KL control. We explore the usefulness of our approach in the context of music generation. An LSTM is trained on a large corpus of songs to predict the next note in a musical sequence. This Note RNN is then refined using our method and rules of music theory. We show that by combining maximum likelihood (ML) and RL in this way, we can not only produce more pleasing melodies, but significantly reduce unwanted behaviors and failure modes of the RNN, while maintaining information learned from data. | This paper suggests combining LSTMs, trained on a large midi corpus, with a handcrafted reward function that helps to fine-tune the model in a musically meaningful way. The idea to use hand-crafted rewards in such a way is great and seems promising for practical scenarios, where a musician would like to design a set of rules, rather than a set of melodies. Even though some choices made along the way seem rather ad-hoc and simplistic from a music theoretical perspective, the results sound like an improvement upon the note RNN baseline, but we also don't know how cherry-picked these results are. I am not convinced that this approach will scale to much more complicated reward functions necessary to compose real music. Maybe LSTMs are the wrong approach altogether if they have so much trouble learning to produce pleasant melodies from such a relatively big corpus of data. Aren't there any alternative differentiable models that are more suitable? What about dilated convolution based approaches? What I don't like about the paper is that the short melodies are referenced as compositions while being very far from meaningful music; they are not even polyphonic after all. I think it would be great if such papers would be written with the help or feedback of people that have real musical training and are more critical towards these details. What I like about the paper is that the authors make an effort to understand what is going on; table 1 is interesting, for instance. However, Figure 3 should have included real melody excerpts with the same sound synthesis/sample setup. Besides that, more discussion on the shortcomings of the presented method should be added. In summary, I do like the paper and idea and I can imagine that such RL based fine-tuning approaches will indeed be useful for musicians. Even though the novelty might be limited, the paper serves as a documentation on how to achieve solid results in practice. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work.
Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTuning Recurrent Neural Networks with Reinforcement Learning\nThe approach of training sequence models using supervised learning and next-step prediction suffers from known failure modes. For example, it is notoriously difficult to ensure multi-step generated sequences have coherent global structure. We propose a novel sequence-learning approach in which we use a pre-trained Recurrent Neural Network (RNN) to supply part of the reward value in a Reinforcement Learning (RL) model. Thus, we can refine a sequence predictor by optimizing for some imposed reward functions, while maintaining good predictive properties learned from data. We propose efficient ways to solve this by augmenting deep Q-learning with a cross-entropy reward and deriving novel off-policy methods for RNNs from KL control. We explore the usefulness of our approach in the context of music generation. An LSTM is trained on a large corpus of songs to predict the next note in a musical sequence. This Note RNN is then refined using our method and rules of music theory. We show that by combining maximum likelihood (ML) and RL in this way, we can not only produce more pleasing melodies, but significantly reduce unwanted behaviors and failure modes of the RNN, while maintaining information learned from data.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper suggests combining LSTMs, trained on a large midi corpus, with a handcrafted reward function that helps to fine-tune the model in a musically meaningful way. The idea to use hand-crafted rewards in such a way is great and seems promising for practical scenarios, where a musician would like to design a set of rules, rather than a set of melodies. Even though some choices made along the way seem rather ad-hoc and simplistic from a music theoretical perspective, the results sound like an improvement upon the note RNN baseline, but we also don*t know how cherry picked these results are. I am not convinced that this approach will scale to much more complicated reward functions necessary to compose real music. Maybe LSTMs are the wrong approach altogether if they have so much trouble learning to produce pleasant melodies from such a relatively big corpus of data. Aren*t there any alternative differentiable models that are more suitable? What about dilated convolution based approaches? What I don*t like about the paper is that the short melodies are referenced as compositions while being very far from meaningful music, they are not even polyphonic after all. I think it would be great if such papers would be written with the help or feedback of people that have real musical training and are more critical towards these details. What I like about the paper is that the authors make an effort to understand what is going on, table 1 is interesting for instance. However, Figure 3 should have included real melody excerpts with the same sound synthesis/sample setup. Besides that, more discussion on the shortcomings of the presented method should be added. In summary, I do like the paper and idea and I can imagine that such RL based fine-tuning approaches will indeed be useful for musicians. Even though the novelty might be limited, the paper serves as a documentation on how to achieve solid results in practice.",
"role": "assistant"
}
]
| [
"This paper suggests combining LSTMs, trained on a large midi corpus, with a handcrafted reward function that helps to fine-tune the model in a musically meaningful way.",
"The idea to use hand-crafted rewards in such a way is great and seems promising for practical scenarios, where a musician would like to design a set of rules, rather than a set of melodies.",
"Even though some choices made along the way seem rather ad-hoc and simplistic from a music theoretical perspective, the results sound like an improvement upon the note RNN baseline, but we also don*t know how cherry picked these results are.",
"I am not convinced that this approach will scale to much more complicated reward functions necessary to compose real music.",
"Maybe LSTMs are the wrong approach altogether if they have so much trouble learning to produce pleasant melodies from such a relatively big corpus of data.",
"Aren*t there any alternative differentiable models that are more suitable?",
"What about dilated convolution based approaches?",
"What I don*t like about the paper is that the short melodies are referenced as compositions while being very far from meaningful music, they are not even polyphonic after all.",
"I think it would be great if such papers would be written with the help or feedback of people that have real musical training and are more critical towards these details.",
"What I like about the paper is that the authors make an effort to understand what is going on, table 1 is interesting for instance.",
"However, Figure 3 should have included real melody excerpts with the same sound synthesis/sample setup.",
"Besides that, more discussion on the shortcomings of the presented method should be added.",
"In summary, I do like the paper and idea and I can imagine that such RL based fine-tuning approaches will indeed be useful for musicians.",
"Even though the novelty might be limited, the paper serves as a documentation on how to achieve solid results in practice."
] | {
"criticism": 4,
"example": 1,
"importance_and_relevance": 5,
"materials_and_methods": 9,
"praise": 2,
"presentation_and_reporting": 5,
"results_and_discussion": 3,
"suggestion_and_solution": 3,
"total": 14
} | 0.285714 | 0.071429 | 0.357143 | 0.642857 | 0.142857 | 0.357143 | 0.214286 | 0.214286 | 14 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.285714 | 2.269932 | 0.015782 |
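The heart of the method is a scalar reward that mixes a data term with hand-written rules, and it fits in a few lines. In the sketch below the in-key rule is a toy stand-in for the paper's fuller set of music-theory rewards, and the balance constant `c` is an assumed hyperparameter; the essential ingredient is that the log-probability of the note under the frozen, pre-trained Note RNN enters the reward, so the tuned policy cannot drift arbitrarily far from what was learned from data.

```python
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}   # pitch classes of the C-major scale

def in_key_reward(note):
    """Toy stand-in for a hand-written music-theory rule."""
    return 1.0 if note % 12 in C_MAJOR else -1.0

def total_reward(note, note_rnn_log_prob, c=0.5):
    """Combined reward for RL fine-tuning.

    note_rnn_log_prob is log p(note | state) under the frozen Note RNN
    (the data-grounded prior); c trades off rule-following against
    staying close to the data distribution.
    """
    return note_rnn_log_prob + c * in_key_reward(note)
```

A reward of this shape is then handed to (variants of) deep Q-learning; the KL-control view in the paper explains why the log-probability term acts as a penalty on divergence from the pre-trained model.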
Perception Updating Networks: On architectural constraints for interpretable video generative models | We investigate a neural network architecture and statistical framework that models frames in videos using principles inspired by computer graphics pipelines. The proposed model explicitly represents *sprites* or its percepts inferred from maximum likelihood of the scene and infers its movement independently of its content. We impose architectural constraints that force the resulting architecture to behave as a recurrent what-where prediction network. | This paper presents an approach to modeling videos based on a decomposition into a background + 2d sprites with a latent hidden state. The exposition is OK, and I think the approach is sensible, but the main issue with this paper is that it is lacking experiments on non-synthetic datasets. As such, while I find the graphics inspired questions the paper is investigating interesting, I don't think it is clear that this work introduces useful machinery for modeling more general videos. I think this paper is more appropriate as a workshop contribution in its current form. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPerception Updating Networks: On architectural constraints for interpretable video generative models\nWe investigate a neural network architecture and statistical framework that models frames in videos using principles inspired by computer graphics pipelines. The proposed model explicitly represents *sprites* or its percepts inferred from maximum likelihood of the scene and infers its movement independently of its content. We impose architectural constraints that forces resulting architecture to behave as a recurrent what-where prediction network.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents an approach to modeling videos based on a decomposition into a background + 2d sprites with a latent hidden state. The exposition is OK, and I think the approach is sensible, but the main issue with this paper is that it is lacking experiments on non-synthetic datasets. As such, while I find the graphics inspired questions the paper is investigating interesting, I don*t think it is clear that this work introduces useful machinery for modeling more general videos. I think this paper is more appropriate as a workshop contribution in its current form.",
"role": "assistant"
}
]
| [
"This paper presents an approach to modeling videos based on a decomposition into a background + 2d sprites with a latent hidden state.",
"The exposition is OK, and I think the approach is sensible, but the main issue with this paper is that it is lacking experiments on non-synthetic datasets.",
"As such, while I find the graphics inspired questions the paper is investigating interesting, I don*t think it is clear that this work introduces useful machinery for modeling more general videos.",
"I think this paper is more appropriate as a workshop contribution in its current form."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 3,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 0,
"suggestion_and_solution": 1,
"total": 4
} | 0.5 | 0 | 0.75 | 0.75 | 0.5 | 0.25 | 0 | 0.25 | 4 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 3 | 1.721623 | 1.278377 |
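To make the graphics-pipeline constraint concrete: if the "where" pathway emits a (soft) one-hot location map, placing the "what" (the sprite) reduces to a convolution, so position is handled independently of appearance. The NumPy toy below illustrates that reading; the function name and the alpha-style blending are assumptions, not the authors' architecture.

```python
import numpy as np
from scipy.signal import convolve2d

def place_sprite(sprite, where, background):
    """Compose one frame from a sprite ('what') and a location map ('where').

    sprite:     (h, w) appearance
    where:      (H, W) non-negative map summing to 1, peaked at the position
    background: (H, W) canvas
    """
    # Convolving a one-hot map with the sprite translates the sprite to the
    # map's peak; a soft map yields a correspondingly blurred placement.
    placed = convolve2d(where, sprite, mode="same")
    support = convolve2d(where, np.ones_like(sprite), mode="same")
    alpha = np.clip(support, 0.0, 1.0)           # footprint of the sprite
    return alpha * placed + (1.0 - alpha) * background
```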
Neural Data Filter for Bootstrapping Stochastic Gradient Descent | Mini-batch based Stochastic Gradient Descent (SGD) has been widely used to train deep neural networks efficiently. In this paper, we design a general framework to automatically and adaptively select training data for SGD. The framework is based on neural networks and we call it \emph{\textbf{N}eural \textbf{D}ata \textbf{F}ilter} (\textbf{NDF}). In Neural Data Filter, the whole training process of the original neural network is monitored and supervised by a deep reinforcement network, which controls whether to filter some data in sequentially arrived mini-batches so as to maximize future accumulative reward (e.g., validation accuracy). The SGD process accompanied with NDF is able to use less data and converge faster while achieving comparable accuracy as the standard SGD trained on the full dataset. Our experiments show that NDF bootstraps SGD training for different neural network models including Multi Layer Perceptron Network and Recurrent Neural Network trained on various types of tasks including image classification and text understanding. | Final review: The writers were very responsive and I agree with reviewer2 that their experimental setup is not wrong after all and increased the score by one. But I still think there is a lack of experiments and the results are not conclusive. As a reader I am interested in two things, either getting a new insight and understanding something better, or learning a method for a better performance. This paper falls into category two, but fails to prove it with more thorough and rigorous experiments. In summary, the paper lacks experiments and the results are inconclusive, and I do not believe the proposed method would be quite useful; hence it is not a conference-level publication. -- The paper proposes to train a policy network along the main network for selecting a subset of data during training for achieving faster convergence with less data. Pros: It's well written and straightforward to follow. The algorithm has been explained clearly. Cons: Section 2 mentions that the validation accuracy is used as one of the feature vectors for training the NDF. This invalidates the experiments, as the training procedure is using some data from the validation set. Only one dataset has been tested on. Papers such as this one that claim faster convergence rate should be tested on multiple datasets and network architectures to show consistency of results. Especially larger datasets: as the proposed method is going to use less training data at each iteration, it has to be shown on much larger-scale datasets such as Imagenet. As discussed more in detail in the pre-review questions, if the paper is claiming faster convergence then it has to compare the learning curves with other baselines such as Adam. Plain SGD is a very unfair comparison as it is almost never used in practice. And this is regardless of what is the black box optimizer they use. The case could be that Adam alone as black box optimizer works as well or better than Adam as black box + NDF. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology.
Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNeural Data Filter for Bootstrapping Stochastic Gradient Descent\nMini-batch based Stochastic Gradient Descent(SGD) has been widely used to train deep neural networks efficiently. In this paper, we design a general framework to automatically and adaptively select training data for SGD. The framework is based on neural networks and we call it emph{\textbf{N}eural \textbf{D}ata \textbf{F}ilter} (\textbf{NDF}). In Neural Data Filter, the whole training process of the original neural network is monitored and supervised by a deep reinforcement network, which controls whether to filter some data in sequentially arrived mini-batches so as to maximize future accumulative reward (e.g., validation accuracy). The SGD process accompanied with NDF is able to use less data and converge faster while achieving comparable accuracy as the standard SGD trained on the full dataset. Our experiments show that NDF bootstraps SGD training for different neural network models including Multi Layer Perceptron Network and Recurrent Neural Network trained on various types of tasks including image classification and text understanding.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nFinal review: The writers were very responsive and I agree the reviewer2 that their experimental setup is not wrong after all and increased the score by one. But I still think there is lack of experiments and the results are not conclusive. As a reader I am interested in two things, either getting a new insight and understanding something better, or learn a method for a better performance. This paper falls in the category two, but fails to prove it with more throughout and rigorous experiments. In summary the paper lacks experiments and results are inconclusive and I do not believe the proposed method would be quite useful and hence not a conference level publication. -- The paper proposes to train a policy network along the main network for selecting subset of data during training for achieving faster convergence with less data. Pros: It*s well written and straightforward to follow The algorithm has been explained clearly. Cons: Section 2 mentions that the validation accuracy is used as one of the feature vectors for training the NDF. This invalidates the experiments, as the training procedure is using some data from the validation set. Only one dataset has been tested on. Papers such as this one that claim faster convergence rate should be tested on multiple datasets and network architectures to show consistency of results. Especially larger datasets as the proposed methods is going to use less training data at each iteration, it has to be shown in much larger scaler datasets such as Imagenet. As discussed more in detail in the pre-reviews question, if the paper is claiming faster convergence then it has to compare the learning curves with other baselines such Adam. Plain SGD is very unfair comparison as it is almost never used in practice. And this is regardless of what is the black box optimizer they use. The case could be that Adam alone as black box optimizer works as well or better than Adam as black box + NDF.",
"role": "assistant"
}
]
| [
"Final review: The writers were very responsive and I agree the reviewer2 that their experimental setup is not wrong after all and increased the score by one.",
"But I still think there is lack of experiments and the results are not conclusive.",
"As a reader I am interested in two things, either getting a new insight and understanding something better, or learn a method for a better performance.",
"This paper falls in the category two, but fails to prove it with more throughout and rigorous experiments.",
"In summary the paper lacks experiments and results are inconclusive and I do not believe the proposed method would be quite useful and hence not a conference level publication.",
"-- The paper proposes to train a policy network along the main network for selecting subset of data during training for achieving faster convergence with less data.",
"Pros: It*s well written and straightforward to follow The algorithm has been explained clearly.",
"Cons: Section 2 mentions that the validation accuracy is used as one of the feature vectors for training the NDF.",
"This invalidates the experiments, as the training procedure is using some data from the validation set.",
"Only one dataset has been tested on.",
"Papers such as this one that claim faster convergence rate should be tested on multiple datasets and network architectures to show consistency of results.",
"Especially larger datasets as the proposed methods is going to use less training data at each iteration, it has to be shown in much larger scaler datasets such as Imagenet.",
"As discussed more in detail in the pre-reviews question, if the paper is claiming faster convergence then it has to compare the learning curves with other baselines such Adam.",
"Plain SGD is very unfair comparison as it is almost never used in practice.",
"And this is regardless of what is the black box optimizer they use.",
"The case could be that Adam alone as black box optimizer works as well or better than Adam as black box + NDF."
] | {
"criticism": 5,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 15,
"praise": 2,
"presentation_and_reporting": 3,
"results_and_discussion": 4,
"suggestion_and_solution": 3,
"total": 16
} | 0.3125 | 0 | 0.125 | 0.9375 | 0.125 | 0.1875 | 0.25 | 0.1875 | 16 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.125 | 1.982958 | 0.142042 |
Surprise-Based Intrinsic Motivation for Deep Reinforcement Learning | Exploration in complex domains is a key challenge in reinforcement learning, especially for tasks with very sparse rewards. Recent successes in deep reinforcement learning have been achieved mostly using simple heuristic exploration strategies such as ε-greedy action selection or Gaussian control noise, but there are many tasks where these methods are insufficient to make any learning progress. Here, we consider more complex heuristics: efficient and scalable exploration strategies that maximize a notion of an agent's surprise about its experiences via intrinsic motivation. We propose to learn a model of the MDP transition probabilities concurrently with the policy, and to form intrinsic rewards that approximate the KL-divergence of the true transition probabilities from the learned model. One of our approximations results in using surprisal as intrinsic motivation, while the other gives the k-step learning progress. We show that our incentives enable agents to succeed in a wide range of environments with high-dimensional state spaces and very sparse rewards, including continuous control tasks and games in the Atari RAM domain, outperforming several other heuristic exploration techniques. | The authors present a novel approach to surprise-based intrinsic motivation in deep reinforcement learning. The authors clearly explain the difference from other recent approaches to intrinsic motivation and back up their method with results from a broad class of discrete and continuous action domains. They present two tractable approximations to their framework - one which ignores the stochasticity of the true environmental dynamics, and one which approximates the rate of information gain (somewhat similar to Schmidhuber's formal theory of creativity, fun and intrinsic motivation). The results of this exploration bonus when added to TRPO are generally better than standard TRPO. However, I would have appreciated a more thorough comparison against other recent work on intrinsic motivation. For instance, Bellemare et al 2016 recently achieved significant performance gains on challenging Atari games like Montezuma's Revenge by combining DQN with an exploration bonus, however Montezuma's Revenge is not presented as an experiment here. Such comparisons would significantly improve the strength of the paper. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSurprise-Based Intrinsic Motivation for Deep Reinforcement Learning\nExploration in complex domains is a key challenge in reinforcement learning, especially for tasks with very sparse rewards. Recent successes in deep reinforcement learning have been achieved mostly using simple heuristic exploration strategies such as -greedy action selection or Gaussian control noise, but there are many tasks where these methods are insufficient to make any learning progress. Here, we consider more complex heuristics: efficient and scalable exploration strategies that maximize a notion of an agent*s surprise about its experiences via intrinsic motivation. We propose to learn a model of the MDP transition probabilities concurrently with the policy, and to form intrinsic rewards that approximate the KL-divergence of the true transition probabilities from the learned model. One of our approximations results in using surprisal as intrinsic motivation, while the other gives the -step learning progress. We show that our incentives enable agents to succeed in a wide range of environments with high-dimensional state spaces and very sparse rewards, including continuous control tasks and games in the Atari RAM domain, outperforming several other heuristic exploration techniques.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors present a novel approach to surprise-based intrinsic motivation in deep reinforcement learning. The authors clearly explain the difference from other recent approaches to intrinsic motivation and back up their method with results from a broad class of discrete and continuous action domains. They present two tractable approximations to their framework - one which ignores the stochasticity of the true environmental dynamics, and one which approximates the rate of information gain (somewhat similar to Schmidhuber*s formal theory of creativity, fun and intrinsic motivation). The results of this exploration bonus when added to TRPO are generally better than standard TRPO. However, I would have appreciated a more thorough comparison against other recent work on intrinsic motivation. For instance, Bellemare et al 2016 recently achieved significant performance gains on challenging Atari games like Montezuma*s Revenge by combining DQN with an exploration bonus, however Montezuma*s Revenge is not presented as an experiment here. Such comparisons would significantly improve the strength of the paper.",
"role": "assistant"
}
]
| [
"The authors present a novel approach to surprise-based intrinsic motivation in deep reinforcement learning.",
"The authors clearly explain the difference from other recent approaches to intrinsic motivation and back up their method with results from a broad class of discrete and continuous action domains.",
"They present two tractable approximations to their framework - one which ignores the stochasticity of the true environmental dynamics, and one which approximates the rate of information gain (somewhat similar to Schmidhuber*s formal theory of creativity, fun and intrinsic motivation).",
"The results of this exploration bonus when added to TRPO are generally better than standard TRPO.",
"However, I would have appreciated a more thorough comparison against other recent work on intrinsic motivation.",
"For instance, Bellemare et al 2016 recently achieved significant performance gains on challenging Atari games like Montezuma*s Revenge by combining DQN with an exploration bonus, however Montezuma*s Revenge is not presented as an experiment here.",
"Such comparisons would significantly improve the strength of the paper."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 5,
"praise": 1,
"presentation_and_reporting": 0,
"results_and_discussion": 3,
"suggestion_and_solution": 2,
"total": 7
} | 0.142857 | 0 | 0.428571 | 0.714286 | 0.142857 | 0 | 0.428571 | 0.285714 | 7 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.142857 | 1.57469 | 0.568167 |
On Robust Concepts and Small Neural Nets | The universal approximation theorem for neural networks says that any reasonable function is well-approximated by a two-layer neural network with sigmoid gates but it does not provide good bounds on the number of hidden-layer nodes or the weights. However, robust concepts often have small neural networks in practice. We show an efficient analog of the universal approximation theorem on the boolean hypercube in this context. We prove that any noise-stable boolean function on n boolean-valued input variables can be well-approximated by a two-layer linear threshold circuit with a small number of hidden-layer nodes and small weights, that depend only on the noise-stability and approximation parameters, and are independent of n. We also give a polynomial time learning algorithm that outputs a small two-layer linear threshold circuit that approximates such a given function. We also show weaker generalizations of this to noise-stable polynomial threshold functions and noise-stable boolean functions in general. | SUMMARY This paper presents a study of the number of hidden units and training examples needed to learn functions from a particular class. This class is defined as those Boolean functions with an upper bound on the variability of the outputs. PROS The paper promotes interesting results from the theoretical computer science community to investigate the efficiency of representation of functions with limited variability in terms of shallow feedforward networks with linear threshold units. CONS The analysis is limited to shallow networks. The analysis is based on piecing together interesting results, however without contributing significant innovations. The presentation of the main results and conclusions is somewhat obscure, as the therein appearing terms/constants do not express a clear relation between increased robustness and decreasing number of required hidden units. COMMENTS - In the abstract one reads ``The universal approximation theorem for neural networks says that any reasonable function is well-approximated by a two-layer neural network with sigmoid gates but it does not provide good bounds on the number of hidden-layer nodes or the weights.'' On page 1 the paper points the reader to a review article. It could be a good idea to also include more recent references. Given the motivation presented in the abstract of the paper it would be a good idea to also comment on works discussing the classes of Boolean functions representable by linear threshold networks. For instance the paper [Hyperplane Arrangements Separating Arbitrary Vertex Classes in n-Cubes. Wenzel, Ay, Paseman] discusses various classes of functions that can be represented by shallow linear threshold networks and provides upper and lower bounds on the number of hidden units needed for representing various types of Boolean functions. In particular that paper also provides lower bounds on the number of hidden units needed to define a universal approximator. - It certainly would be a good idea to discuss the results on the learning complexity in terms of measures such as the VC-dimension. - Thank you for the explanations regarding the constants. So if the noise sensitivity is kept constant, larger values of epsilon are associated with a smaller value of delta and of 1/epsilon. Nonetheless, the description in Theorem 2 is in terms of poly(1/epsilon, 1/delta), which still could increase? Also, in Lemma 1 reducing the sensitivity at a constant noise increases the bound on k? 
- The fact that the descriptions are independent of n seems to be related to the definition of the noise sensitivity as an expectation over all inputs. This certainly deserves more discussion. One good start could be to discuss examples of functions with an upper bound on the noise sensitivity (aside from the linear threshold functions discussed in Lemma 2). Also, reverse statements to Lemma 1 would be interesting, describing the noise sensitivity of juntas specifically, even if only as simple examples. - On page 3 ``...variables is polynomial in the noise-sensitivity parameters'' should be inverse of? MINOR COMMENTS On page 5 Proposition 1 should be Lemma 1? | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOn Robust Concepts and Small Neural Nets\nThe universal approximation theorem for neural networks says that any reasonable function is well-approximated by a two-layer neural network with sigmoid gates but it does not provide good bounds on the number of hidden-layer nodes or the weights. However, robust concepts often have small neural networks in practice. We show an efficient analog of the universal approximation theorem on the boolean hypercube in this context. We prove that any noise-stable boolean function on n boolean-valued input variables can be well-approximated by a two-layer linear threshold circuit with a small number of hidden-layer nodes and small weights, that depend only on the noise-stability and approximation parameters, and are independent of n. We also give a polynomial time learning algorithm that outputs a small two-layer linear threshold circuit that approximates such a given function. We also show weaker generalizations of this to noise-stable polynomial threshold functions and noise-stable boolean functions in general.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nSUMMARY This paper presents a study of the number of hidden units and training examples needed to learn functions from a particular class. This class is defined as those Boolean functions with an upper bound on the variability of the outputs. PROS The paper promotes interesting results from the theoretical computer science community to investigate the efficiency of representation of functions with limited variability in terms of shallow feedforward networks with linear threshold units. CONS The analysis is limited to shallow networks. The analysis is based on piecing together interesting results, however without contributing significant innovations. The presentation of the main results and conclusions is somewhat obscure, as the therein appearing terms/constants do not express a clear relation between increased robustness and decreasing number of required hidden units. COMMENTS - In the abstract one reads ``The universal approximation theorem for neural networks says that any reasonable function is well-approximated by a two-layer neural network with sigmoid gates but it does not provide good bounds on the number of hidden-layer nodes or the weights.** In page 1 the paper points the reader to a review article. It could be a good idea to include also more recent references. Given the motivation presented in the abstract of the paper it would be a good idea to also comment of works discussing the classes of Boolean functions representable by linear threshold networks. For instance the paper [Hyperplane Arrangements Separating Arbitrary Vertex Classes in n-Cubes. Wenzel, Ay, Paseman] discusses various classes of functions that can be represented by shallow linear threshold networks and provides upper and lower bounds on the number of hidden units needed for representing various types of Boolean functions. In particular that paper also provides lower bounds on the number of hidden units needed to define a universal approximator. - It certainly would be a good idea to discuss the results on the learning complexity in terms of measures such as the VC-dimension. - Thank you for the explanations regarding the constants. 
So if the noise sensitivity is kept constant, larger values of epsilon are associated with a smaller value of delta and of 1/epsilon. Nonetheless, the description in Theorem 2 is in terms of poly(1/epsilon, 1/delta), which still could increase? Also, in Lemma 1 reducing the sensitivity at a constant noise increases the bound on k? - The fact that the descriptions are independent of n seems to be related to the definition of the noise sensitivity as an expectation over all inputs. This certainly deserves more discussion. One good start could be to discuss examples of functions with an upper bound on the noise sensitivity (aside from the linear threshold functions discussed in Lemma 2). Also, reverse statements to Lemma 1 would be interesting, describing the noise sensitivity of juntas specifically, even if only as simple examples. - On page 3 ``...variables is polynomial in the noise-sensitivity parameters'' should be inverse of? MINOR COMMENTS On page 5 Proposition 1 should be Lemma 1?",
"role": "assistant"
}
]
| [
"SUMMARY This paper presents a study of the number of hidden units and training examples needed to learn functions from a particular class.",
"This class is defined as those Boolean functions with an upper bound on the variability of the outputs.",
"PROS The paper promotes interesting results from the theoretical computer science community to investigate the efficiency of representation of functions with limited variability in terms of shallow feedforward networks with linear threshold units.",
"CONS The analysis is limited to shallow networks.",
"The analysis is based on piecing together interesting results, however without contributing significant innovations.",
"The presentation of the main results and conclusions is somewhat obscure, as the therein appearing terms/constants do not express a clear relation between increased robustness and decreasing number of required hidden units.",
"COMMENTS - In the abstract one reads ``The universal approximation theorem for neural networks says that any reasonable function is well-approximated by a two-layer neural network with sigmoid gates but it does not provide good bounds on the number of hidden-layer nodes or the weights.",
"** In page 1 the paper points the reader to a review article.",
"It could be a good idea to include also more recent references.",
"Given the motivation presented in the abstract of the paper it would be a good idea to also comment of works discussing the classes of Boolean functions representable by linear threshold networks.",
"For instance the paper [Hyperplane Arrangements Separating Arbitrary Vertex Classes in n-Cubes.",
"Wenzel, Ay, Paseman] discusses various classes of functions that can be represented by shallow linear threshold networks and provides upper and lower bounds on the number of hidden units needed for representing various types of Boolean functions.",
"In particular that paper also provides lower bounds on the number of hidden units needed to define a universal approximator.",
"- It certainly would be a good idea to discuss the results on the learning complexity in terms of measures such as the VC-dimension.",
"- Thank you for the explanations regarding the constants.",
"So if the noise sensitivity is kept constant, larger values of epsilon are associated with a smaller value of delta and of 1/epsilon.",
"Nonetheless, the description in Theorem 2 is in terms of poly(1/epsilon, 1/delta), which still could increase?",
"Also, in Lemma 1 reducing the sensitivity at a constant noise increases the bound on k?",
"- The fact that the descriptions are independent of n seems to be related to the definition of the noise sensitivity as an expectation over all inputs.",
"This certainly deserves more discussion.",
"One good start could be to discuss examples of functions with an upper bound on the noise sensitivity (aside from the linear threshold functions discussed in Lemma 2).",
"Also, reverse statements to Lemma 1 would be interesting, describing the noise sensitivity of juntas specifically, even if only as simple examples.",
"- On page 3 ``...variables is polynomial in the noise-sensitivity parameters'' should be inverse of?",
"MINOR COMMENTS On page 5 Proposition 1 should be Lemma 1?"
] | {
"criticism": 3,
"example": 3,
"importance_and_relevance": 4,
"materials_and_methods": 11,
"praise": 7,
"presentation_and_reporting": 6,
"results_and_discussion": 12,
"suggestion_and_solution": 8,
"total": 24
} | 0.125 | 0.125 | 0.166667 | 0.458333 | 0.291667 | 0.25 | 0.5 | 0.333333 | 24 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.25 | 0.340326 | 1.909674 |
Semi-supervised deep learning by metric embedding | Deep networks are successfully used as classification models yielding state-of-the-art results when trained on a large number of labeled samples. These models, however, are usually much less suited for semi-supervised problems because of their tendency to overfit easily when trained on small amounts of data. In this work we will explore a new training objective that is targeting a semi-supervised regime with only a small subset of labeled data. This criterion is based on a deep metric embedding over distance relations within the set of labeled samples, together with constraints over the embeddings of the unlabeled set. The final learned representations are discriminative in euclidean space, and hence can be used with subsequent nearest-neighbor classification using the labeled samples. | The authors propose a semi-supervised technique for neural networks which includes two objectives: (1) the neural net embeddings of two samples with identical labels are constrained to be closer than the embeddings of samples with different labels (2) the embedding of an unlabeled example is constrained to be close to the embeddings of the closest labeled sample (and far away from the other ones). While the authors list a number of previous works, they do not relate them very well to their approach, and actual differences appear unclear. In particular, the approach seems rather incremental with respect to (Hadsell et al. 2006), the way the neighbors are chosen being the main difference; in that respect, (Weston et al, 2008 or 2012) (which could be viewed as an application of DrLim from Hadsell et al, applied to semi-supervised learning) is even closer to the approach proposed here. There is also no balancing constraint in the proposed approach, which was known to be crucial in all these models from the 2000s. Concerning the experimental results, the reported performance is very good; given the approach is very close to previous work, it is hard to know if the good performance comes from better neural net architectures or something else. This should be clarified. In summary, the novelty is not clearly defined in this paper; differences with existing literature should be highlighted. Experimental results are very good, but it is hard to know where the difference in performance from previous (very related) work comes from. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSemi-supervised deep learning by metric embedding\nDeep networks are successfully used as classification models yielding state-of-the-art results when trained on a large number of labeled samples. These models, however, are usually much less suited for semi-supervised problems because of their tendency to overfit easily when trained on small amounts of data. In this work we will explore a new training objective that is targeting a semi-supervised regime with only a small subset of labeled data. This criterion is based on a deep metric embedding over distance relations within the set of labeled samples, together with constraints over the embeddings of the unlabeled set. The final learned representations are discriminative in euclidean space, and hence can be used with subsequent nearest-neighbor classification using the labeled samples.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors propose a semi-supervised technique for neural networks which includes two objectives: (1) the neural net embeddings of two samples with identical labels is constrained to be closer than the embeddings of samples with different labels (2) the embedding of an unlabeled example is constrained to be close to the embeddings of the closest labeled sample (and far away from the other ones). While the authors list a number of previous works, they do not relate them very well with their approach, and actual differences appear unclear. In particular, the approach seems rather incremental with respect to (Hadsell et al. 2006), the way the neighbors are chosen being the main difference; in that respect, (Weston et al, 2008 or 2012) (which could be viewed as an application of DrLim from Hadsell et al, applied to semi-supervised learning) is even closer to the approach proposed here. There is also no balancing constraint in the proposed approach, which was known to be crucial in all these models from the 2000s. Concerning the experimental results, reported performance are very good; given the approach is very close to previous work, it is hard to know if good performance come from better neural net architectures or something else. This should be clarified. In summary, the novelty is not clearly defined in this paper; differences with existing literature should be highlighted. Experimental results are very good, but it is hard to know where comes the difference in performance with previous (very related) work.",
"role": "assistant"
}
]
| [
"The authors propose a semi-supervised technique for neural networks which includes two objectives: (1) the neural net embeddings of two samples with identical labels is constrained to be closer than the embeddings of samples with different labels (2) the embedding of an unlabeled example is constrained to be close to the embeddings of the closest labeled sample (and far away from the other ones).",
"While the authors list a number of previous works, they do not relate them very well with their approach, and actual differences appear unclear.",
"In particular, the approach seems rather incremental with respect to (Hadsell et al.",
"2006), the way the neighbors are chosen being the main difference; in that respect, (Weston et al, 2008 or 2012) (which could be viewed as an application of DrLim from Hadsell et al, applied to semi-supervised learning) is even closer to the approach proposed here.",
"There is also no balancing constraint in the proposed approach, which was known to be crucial in all these models from the 2000s.",
"Concerning the experimental results, reported performance are very good; given the approach is very close to previous work, it is hard to know if good performance come from better neural net architectures or something else.",
"This should be clarified.",
"In summary, the novelty is not clearly defined in this paper; differences with existing literature should be highlighted.",
"Experimental results are very good, but it is hard to know where comes the difference in performance with previous (very related) work."
] | {
"criticism": 6,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 5,
"praise": 2,
"presentation_and_reporting": 2,
"results_and_discussion": 2,
"suggestion_and_solution": 2,
"total": 9
} | 0.666667 | 0 | 0.111111 | 0.555556 | 0.222222 | 0.222222 | 0.222222 | 0.222222 | 9 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.222222 | 1.969703 | 0.252519 |
Modularized Morphing of Neural Networks | In this work we study the problem of network morphism, an effective learning scheme to morph a well-trained neural network to a new one with the network function completely preserved. Different from existing work where basic morphing types on the layer level were addressed, we target at the central problem of network morphism at a higher level, i.e., how a convolutional layer can be morphed into an arbitrary module of a neural network. To simplify the representation of a network, we abstract a module as a graph with blobs as vertices and convolutional layers as edges, based on which the morphing process is able to be formulated as a graph transformation problem. Two atomic morphing operations are introduced to compose the graphs, based on which modules are classified into two families, i.e., simple morphable modules and complex modules. We present practical morphing solutions for both of these two families, and prove that any reasonable module can be morphed from a single convolutional layer. Extensive experiments have been conducted based on the state-of-the-art ResNet on benchmark datasets, and the effectiveness of the proposed solution has been verified. | This paper presents work on neural network / CNN architecture morphing. Results are not reported on ImageNet, larger ResNets, or newer network architectures such as Xception and DenseNet - which are maybe too new! Also most results are reported on small datasets and networks, which do not offer confidence in the usability in production systems. My biggest issue is that computational time and effort for these techniques are not mentioned in detail. We always want to be able to quantify the extra effort of understanding and using a new technique, especially if the results are minor. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nModularized Morphing of Neural Networks\nIn this work we study the problem of network morphism, an effective learning scheme to morph a well-trained neural network to a new one with the network function completely preserved. Different from existing work where basic morphing types on the layer level were addressed, we target at the central problem of network morphism at a higher level, i.e., how a convolutional layer can be morphed into an arbitrary module of a neural network. To simplify the representation of a network, we abstract a module as a graph with blobs as vertices and convolutional layers as edges, based on which the morphing process is able to be formulated as a graph transformation problem. Two atomic morphing operations are introduced to compose the graphs, based on which modules are classified into two families, i.e., simple morphable modules and complex modules. We present practical morphing solutions for both of these two families, and prove that any reasonable module can be morphed from a single convolutional layer. Extensive experiments have been conducted based on the state-of-the-art ResNet on benchmark datasets, and the effectiveness of the proposed solution has been verified.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents works on neural network / CNN architecture morphing. Results are not reported on ImageNet larger ResNet and new network architecture such as Xception and DenseNet - which are maybe too new! Also most results are reported in small datasets and network, which do not offer confidence in the usability in production systems. My biggest issue is that computational time and effort for these techniques is not mentioned in detail. We always want to be able to quantify the extra effort of understanding and using a new technique, especially if the results are minor.",
"role": "assistant"
}
]
| [
"This paper presents works on neural network / CNN architecture morphing.",
"Results are not reported on ImageNet larger ResNet and new network architecture such as Xception and DenseNet - which are maybe too new!",
"Also most results are reported in small datasets and network, which do not offer confidence in the usability in production systems.",
"My biggest issue is that computational time and effort for these techniques is not mentioned in detail.",
"We always want to be able to quantify the extra effort of understanding and using a new technique, especially if the results are minor."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 4,
"praise": 0,
"presentation_and_reporting": 0,
"results_and_discussion": 3,
"suggestion_and_solution": 0,
"total": 5
} | 0.6 | 0 | 0.2 | 0.8 | 0 | 0 | 0.6 | 0 | 5 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 2.2 | 1.189924 | 1.010076 |
Modularized Morphing of Neural Networks | In this work we study the problem of network morphism, an effective learning scheme to morph a well-trained neural network to a new one with the network function completely preserved. Different from existing work where basic morphing types on the layer level were addressed, we target at the central problem of network morphism at a higher level, i.e., how a convolutional layer can be morphed into an arbitrary module of a neural network. To simplify the representation of a network, we abstract a module as a graph with blobs as vertices and convolutional layers as edges, based on which the morphing process is able to be formulated as a graph transformation problem. Two atomic morphing operations are introduced to compose the graphs, based on which modules are classified into two families, i.e., simple morphable modules and complex modules. We present practical morphing solutions for both of these two families, and prove that any reasonable module can be morphed from a single convolutional layer. Extensive experiments have been conducted based on the state-of-the-art ResNet on benchmark datasets, and the effectiveness of the proposed solution has been verified. | The paper proposes a methodology for morphing a trained network to a different architecture without having to retrain from scratch. The manuscript reads well and the description is easy to follow. However, the results are not very convincing as the selected baselines are considerably far from the state of the art. The paper should include comparisons with the state of the art, for example wide residual networks. Tables should also report the number of parameters for each architecture; this would help fair comparison. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nModularized Morphing of Neural Networks\nIn this work we study the problem of network morphism, an effective learning scheme to morph a well-trained neural network to a new one with the network function completely preserved. Different from existing work where basic morphing types on the layer level were addressed, we target at the central problem of network morphism at a higher level, i.e., how a convolutional layer can be morphed into an arbitrary module of a neural network. To simplify the representation of a network, we abstract a module as a graph with blobs as vertices and convolutional layers as edges, based on which the morphing process is able to be formulated as a graph transformation problem. Two atomic morphing operations are introduced to compose the graphs, based on which modules are classified into two families, i.e., simple morphable modules and complex modules. We present practical morphing solutions for both of these two families, and prove that any reasonable module can be morphed from a single convolutional layer. Extensive experiments have been conducted based on the state-of-the-art ResNet on benchmark datasets, and the effectiveness of the proposed solution has been verified.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes a methodology for morphing a trained network to different architecture without having to retrain from scratch. The manuscript reads well and the description is easy to follow. However, the results are not very convincing as the selected baselines are considerably far from the state of the art. The paper should include comparisons with state of the art, for example wide residual networks. Tables should also report number of parameters for each architecture, this would help fair comparison.",
"role": "assistant"
}
]
| [
"The paper proposes a methodology for morphing a trained network to different architecture without having to retrain from scratch.",
"The manuscript reads well and the description is easy to follow.",
"However, the results are not very convincing as the selected baselines are considerably far from the state of the art.",
"The paper should include comparisons with state of the art, for example wide residual networks.",
"Tables should also report number of parameters for each architecture, this would help fair comparison."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 0,
"materials_and_methods": 4,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 1,
"suggestion_and_solution": 2,
"total": 5
} | 0.2 | 0 | 0 | 0.8 | 0.2 | 0.4 | 0.2 | 0.4 | 5 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 2.2 | 1.189924 | 1.010076 |
Extrapolation and learning equations | In classical machine learning, regression is treated as a black box process of identifying a suitable function from a hypothesis set without attempting to gain insight into the mechanism connecting inputs and outputs. In the natural sciences, however, finding an interpretable function for a phenomenon is the prime goal as it allows to understand and generalize results. This paper proposes a novel type of function learning network, called equation learner (EQL), that can learn analytical expressions and is able to extrapolate to unseen domains. It is implemented as an end-to-end differentiable feed-forward network and allows for efficient gradient based training. Due to sparsity regularization concise interpretable expressions can be obtained. Often the true underlying source expression is identified. | The authors attempt to extract analytical equations governing physical systems from observations - an important task. Being able to capture succinct and interpretable rules which a physical system follows is of great importance. However, the authors do this with simple and naive tools which will not scale to complex tasks, offering no new insights or advances to the field. The contribution of the paper (and the first four pages of the submission!) can be summarised in one sentence: 'Learn the weights of a small network with cosine, sinusoid, and input elements products activation functions s.t. the weights are sparse (L1)'. The learnt network weights with its fixed structure are then presented as the learnt equation. This research uses tools from literature from the '90s (I haven't seen the abbreviation ANN (page 3) for a long time) and does not build on modern techniques which have advanced a lot since then. I would encourage the authors to review modern literature and continue working on this important task. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nExtrapolation and learning equations\nIn classical machine learning, regression is treated as a black box process of identifying a suitable function from a hypothesis set without attempting to gain insight into the mechanism connecting inputs and outputs. In the natural sciences, however, finding an interpretable function for a phenomenon is the prime goal as it allows to understand and generalize results. This paper proposes a novel type of function learning network, called equation learner (EQL), that can learn analytical expressions and is able to extrapolate to unseen domains. It is implemented as an end-to-end differentiable feed-forward network and allows for efficient gradient based training. Due to sparsity regularization concise interpretable expressions can be obtained. Often the true underlying source expression is identified.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors attempt to extract analytical equations governing physical systems from observations - an important task. Being able to capture succinct and interpretable rules which a physical system follows is of great importance. However, the authors do this with simple and naive tools which will not scale to complex tasks, offering no new insights or advances to the field. The contribution of the paper (and the first four pages of the submission!) can be summarised in one sentence: *Learn the weights of a small network with cosine, sinusoid, and input elements products activation functions s.t. the weights are sparse (L1)*. The learnt network weights with its fixed structure are then presented as the learnt equation. This research uses tools from literature from the *90s (I haven*t seen the abbreviation ANN (page 3) for a long time) and does not build on modern techniques which have advanced a lot since then. I would encourage the authors to review modern literature and continue working on this important task.",
"role": "assistant"
}
]
| [
"The authors attempt to extract analytical equations governing physical systems from observations - an important task.",
"Being able to capture succinct and interpretable rules which a physical system follows is of great importance.",
"However, the authors do this with simple and naive tools which will not scale to complex tasks, offering no new insights or advances to the field.",
"The contribution of the paper (and the first four pages of the submission!)",
"can be summarised in one sentence: *Learn the weights of a small network with cosine, sinusoid, and input elements products activation functions s.t.",
"the weights are sparse (L1)*.",
"The learnt network weights with its fixed structure are then presented as the learnt equation.",
"This research uses tools from literature from the *90s (I haven*t seen the abbreviation ANN (page 3) for a long time) and does not build on modern techniques which have advanced a lot since then.",
"I would encourage the authors to review modern literature and continue working on this important task."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 5,
"materials_and_methods": 6,
"praise": 3,
"presentation_and_reporting": 3,
"results_and_discussion": 0,
"suggestion_and_solution": 2,
"total": 9
} | 0.111111 | 0 | 0.555556 | 0.666667 | 0.333333 | 0.333333 | 0 | 0.222222 | 9 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 2.222222 | 1.969703 | 0.252519 |
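The EQL recipe summarized in the review above (a small network with sine, cosine, and product units, trained under an L1 penalty) fits in a few lines of code. A minimal sketch, assuming a PyTorch-style implementation; the unit counts, penalty weight, and the `EQLLayer`/`l1_penalty` names are illustrative rather than taken from the paper:

```python
import torch
import torch.nn as nn

class EQLLayer(nn.Module):
    """One equation-learner layer: a linear map feeding a mix of identity,
    sine, cosine, and pairwise-product units (one reading of the abstract;
    the paper's exact unit counts may differ)."""
    def __init__(self, in_dim, k=4):
        super().__init__()
        self.k = k
        self.lin = nn.Linear(in_dim, 5 * k)  # 3k unary inputs + 2k product inputs

    def forward(self, x):
        z, k = self.lin(x), self.k
        prod = z[:, 3 * k:4 * k] * z[:, 4 * k:]       # multiplication units
        return torch.cat([z[:, :k],                   # identity units
                          torch.sin(z[:, k:2 * k]),   # sine units
                          torch.cos(z[:, 2 * k:3 * k]),
                          prod], dim=1)

def l1_penalty(model, lam=1e-3):
    # Sparsity regularization, so the surviving weights read as a concise expression
    return lam * sum(p.abs().sum() for p in model.parameters())

model = nn.Sequential(EQLLayer(in_dim=2), nn.Linear(16, 1))
x, y = torch.randn(32, 2), torch.randn(32, 1)
loss = nn.functional.mse_loss(model(x), y) + l1_penalty(model)
loss.backward()  # any gradient-based optimizer can then take a step
```

Extrapolation comes from the fixed analytic units: once the sparse weights are read off as an expression, it is defined outside the training range, which is the property the abstract emphasizes.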
Charged Point Normalization: An Efficient Solution to the Saddle Point Problem | Recently, the problem of local minima in very high dimensional non-convex optimization has been challenged and the problem of saddle points has been introduced. This paper introduces a dynamic type of normalization that forces the system to escape saddle points. Unlike other saddle point escaping algorithms, second order information is not utilized, and the system can be trained with an arbitrary gradient descent learner. The system drastically improves learning in a range of deep neural networks on various data-sets in comparison to non-CPN neural networks. | This paper proposes a novel method for accelerating optimization near saddle points. The basic idea is to repel the current parameter vector from a running average of recent parameter values. This method is shown to optimize faster than a variety of other methods in a variety of datasets and architectures. On the surface, the proposed method seems extremely close to momentum. It would be very valuable to think of a clear diagram illustrating how it differs from momentum and why it might be better near a saddle point. The illustration of better convergence on the toy saddle example is not what I mean here—optimization speed comparisons are always difficult due to the many details and hyper parameters involved, so seeing it work faster in one specific application is not as useful as a conceptual diagram which shows a critical case where CPN will behave differently from—and clearly qualitatively better than—momentum. Another way of getting at the relationship to momentum would be to try to find a form for R_t(f) that yields the exact momentum update. You could then compare this with the R_t(f) used in CPN. The overly general notation etc should be dropped—Eqn 8 is enough. The theoretical results (Eqn 1 and Thm 1) should be removed, they are irrelevant until the joint density can be specified. Experimentally, it would be valuable to standardize the results to allow comparison to other methods. For instance, recreating Figure 4 of Dauphin et al, but engaging the CPN method rather than SFN, would clearly demonstrate that CPN can escape something that momentum cannot. I think the idea here is potentially very valuable, but needs more rigorous comparison and a clear relation to momentum and other work. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nCharged Point Normalization: An Efficient Solution to the Saddle Point Problem\nRecently, the problem of local minima in very high dimensional non-convex optimization has been challenged and the problem of saddle points has been introduced. This paper introduces a dynamic type of normalization that forces the system to escape saddle points. Unlike other saddle point escaping algorithms, second order information is not utilized, and the system can be trained with an arbitrary gradient descent learner. The system drastically improves learning in a range of deep neural networks on various data-sets in comparison to non-CPN neural networks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a novel method for accelerating optimization near saddle points. The basic idea is to repel the current parameter vector from a running average of recent parameter values. This method is shown to optimize faster than a variety of other methods in a variety of datasets and architectures. On the surface, the proposed method seems extremely close to momentum. It would be very valuable to think of a clear diagram illustrating how it differs from momentum and why it might be better near a saddle point. The illustration of better convergence on the toy saddle example is not what I mean here—optimization speed comparisons are always difficult due to the many details and hyper parameters involved, so seeing it work faster in one specific application is not as useful as a conceptual diagram which shows a critical case where CPN will behave differently from—and clearly qualitatively better than—momentum. Another way of getting at the relationship to momentum would be to try to find a form for R_t(f) that yields the exact momentum update. You could then compare this with the R_t(f) used in CPN. The overly general notation etc should be dropped—Eqn 8 is enough. The theoretical results (Eqn 1 and Thm 1) should be removed, they are irrelevant until the joint density can be specified. Experimentally, it would be valuable to standardize the results to allow comparison to other methods. For instance, recreating Figure 4 of Dauphin et al, but engaging the CPN method rather than SFN, would clearly demonstrate that CPN can escape something that momentum cannot. I think the idea here is potentially very valuable, but needs more rigorous comparison and a clear relation to momentum and other work.",
"role": "assistant"
}
]
| [
"This paper proposes a novel method for accelerating optimization near saddle points.",
"The basic idea is to repel the current parameter vector from a running average of recent parameter values.",
"This method is shown to optimize faster than a variety of other methods in a variety of datasets and architectures.",
"On the surface, the proposed method seems extremely close to momentum.",
"It would be very valuable to think of a clear diagram illustrating how it differs from momentum and why it might be better near a saddle point.",
"The illustration of better convergence on the toy saddle example is not what I mean here—optimization speed comparisons are always difficult due to the many details and hyper parameters involved, so seeing it work faster in one specific application is not as useful as a conceptual diagram which shows a critical case where CPN will behave differently from—and clearly qualitatively better than—momentum.",
"Another way of getting at the relationship to momentum would be to try to find a form for R_t(f) that yields the exact momentum update.",
"You could then compare this with the R_t(f) used in CPN.",
"The overly general notation etc should be dropped—Eqn 8 is enough.",
"The theoretical results (Eqn 1 and Thm 1) should be removed, they are irrelevant until the joint density can be specified.",
"Experimentally, it would be valuable to standardize the results to allow comparison to other methods.",
"For instance, recreating Figure 4 of Dauphin et al, but engaging the CPN method rather than SFN, would clearly demonstrate that CPN can escape something that momentum cannot.",
"I think the idea here is potentially very valuable, but needs more rigorous comparison and a clear relation to momentum and other work."
] | {
"criticism": 2,
"example": 2,
"importance_and_relevance": 2,
"materials_and_methods": 8,
"praise": 1,
"presentation_and_reporting": 4,
"results_and_discussion": 5,
"suggestion_and_solution": 8,
"total": 13
} | 0.153846 | 0.153846 | 0.153846 | 0.615385 | 0.076923 | 0.307692 | 0.384615 | 0.615385 | 13 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.461538 | 2.461538 | 0 |
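The review's one-line description of charged point normalization (repel the current parameter vector from a running average of recent values) can be made concrete. A sketch of one plausible reading; the Coulomb-style form, the hyperparameters, and the `cpn_step` helper are assumptions, since the abstract does not give the exact update:

```python
import torch

def cpn_step(theta, grad, theta_avg, lr=0.01, beta=0.9, charge=1e-2, eps=1e-8):
    """Plain gradient step plus a term pushing theta away from a running
    average of its own recent values (a guess at the paper's normalizer)."""
    diff = theta - theta_avg
    repulsion = charge * diff / (diff.norm() ** 2 + eps)  # weakens with distance
    theta = theta - lr * grad + lr * repulsion
    theta_avg = beta * theta_avg + (1 - beta) * theta     # update the running average
    return theta, theta_avg

# Toy saddle f(x, y) = x^2 - y^2: gradients vanish at the origin, so plain
# SGD stalls there, while the repulsion term keeps the iterate moving.
theta = torch.tensor([1e-3, 0.0])
theta_avg = theta.clone()
for _ in range(100):
    grad = torch.stack([2 * theta[0], -2 * theta[1]])
    theta, theta_avg = cpn_step(theta, grad, theta_avg)
```

Because the repulsion depends only on past parameter values, no second-order information is needed and the step composes with any first-order learner, matching the abstract's claims. It also makes clear why the reviewer presses on the relation to momentum: both add a history-dependent term to the update.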
Energy-Based Spherical Sparse Coding | In this paper, we explore an efficient variant of convolutional sparse coding with unit norm code vectors and reconstructions are evaluated using an inner product (cosine distance). To use these codes for discriminative classification, we describe a model we term Energy-Based Spherical Sparse Coding (EB-SSC) in which the hypothesized class label introduces a learned linear bias into the coding step. We evaluate and visualize performance of stacking this encoder to make a deep layered model for image classification. | The paper introduces an efficient variant of sparse coding and uses it as a building block in CNNs for image classification. The coding method incorporates both the input signal reconstruction objective as well as top down information from a class label. The proposed block is evaluated against the recently proposed CReLU activation block. Positives: The proposed method seems technically sound, and it introduces a new way to efficiently train a CNN layer-wise by combining reconstruction and discriminative objectives. Negatives: The performance gain (in terms of classification accuracy) over the previous state-of-the-art is not clear. Using only one dataset (CIFAR-10), the proposed method performs slightly better than the CRelu baseline, but the improvement is quite small (0.5% in the test set). The paper can be strengthened if the authors can demonstrate that the proposed method can be generally applicable to various CNN architectures and datasets with clear and consistent performance gains over strong CNN baselines. Without such results, the practical significance of this work seems unclear. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEnergy-Based Spherical Sparse Coding\nIn this paper, we explore an efficient variant of convolutional sparse coding with unit norm code vectors and reconstructions are evaluated using an inner product (cosine distance). To use these codes for discriminative classification, we describe a model we term Energy-Based Spherical Sparse Coding (EB-SSC) in which the hypothesized class label introduces a learned linear bias into the coding step. We evaluate and visualize performance of stacking this encoder to make a deep layered model for image classification.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper introduces an efficient variant of sparse coding and uses it as a building block in CNNs for image classification. The coding method incorporates both the input signal reconstruction objective as well as top down information from a class label. The proposed block is evaluated against the recently proposed CReLU activation block. Positives: The proposed method seems technically sound, and it introduces a new way to efficiently train a CNN layer-wise by combining reconstruction and discriminative objectives. Negatives: The performance gain (in terms of classification accuracy) over the previous state-of-the-art is not clear. Using only one dataset (CIFAR-10), the proposed method performs slightly better than the CRelu baseline, but the improvement is quite small (0.5% in the test set). The paper can be strengthened if the authors can demonstrate that the proposed method can be generally applicable to various CNN architectures and datasets with clear and consistent performance gains over strong CNN baselines. Without such results, the practical significance of this work seems unclear.",
"role": "assistant"
}
]
| [
"The paper introduces an efficient variant of sparse coding and uses it as a building block in CNNs for image classification.",
"The coding method incorporates both the input signal reconstruction objective as well as top down information from a class label.",
"The proposed block is evaluated against the recently proposed CReLU activation block.",
"Positives: The proposed method seems technically sound, and it introduces a new way to efficiently train a CNN layer-wise by combining reconstruction and discriminative objectives.",
"Negatives: The performance gain (in terms of classification accuracy) over the previous state-of-the-art is not clear.",
"Using only one dataset (CIFAR-10), the proposed method performs slightly better than the CRelu baseline, but the improvement is quite small (0.5% in the test set).",
"The paper can be strengthened if the authors can demonstrate that the proposed method can be generally applicable to various CNN architectures and datasets with clear and consistent performance gains over strong CNN baselines.",
"Without such results, the practical significance of this work seems unclear."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 7,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 8
} | 0.375 | 0 | 0.25 | 0.875 | 0.125 | 0.25 | 0.125 | 0.125 | 8 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.125 | 1.730439 | 0.394561 |
Near-Data Processing for Machine Learning | In computer architecture, near-data processing (NDP) refers to augmenting the memory or the storage with processing power so that it can process the data stored therein. By offloading the computational burden of CPU and saving the need for transferring raw data in its entirety, NDP exhibits a great potential for acceleration and power reduction. Despite this potential, specific research activities on NDP have witnessed only limited success until recently, often owing to performance mismatches between logic and memory process technologies that put a limit on the processing capability of memory. Recently, there have been two major changes in the game, igniting the resurgence of NDP with renewed interest. The first is the success of machine learning (ML), which often demands a great deal of computation for training, requiring frequent transfers of big data. The second is the advent of NAND flash-based solid-state drives (SSDs) containing multicore processors that can accommodate extra computation for data processing. Sparked by these application needs and technological support, we evaluate the potential of NDP for ML using a new SSD platform that allows us to simulate in-storage processing (ISP) of ML workloads. Our platform (named ISP-ML) is a full-fledged simulator of a realistic multi-channel SSD that can execute various ML algorithms using the data stored in the SSD. For thorough performance analysis and in-depth comparison with alternatives, we focus on a specific algorithm: stochastic gradient decent (SGD), which is the de facto standard for training differentiable learning machines including deep neural networks. We implement and compare three variants of SGD (synchronous, Downpour, and elastic averaging) using ISP-ML, exploiting the multiple NAND channels for parallelizing SGD. In addition, we compare the performance of ISP and that of conventional in-host processing, revealing the advantages of ISP. Based on the advantages and limitations identified through our experiments, we further discuss directions for future research on ISP for accelerating ML. | For more than a decade, near data processing has been a key requirement for large scale linear learning platforms, as the time to load the data exceeds the learning time, and this has justified the introduction of approaches such as Spark. Deep learning usually deals with the data that can be contained in a single machine and the bottleneck is often the CPU-GPU bus or the GPU-GPU-bus, so a method that overcomes this bottleneck could be relevant. Unfortunately, this work is still very preliminary and limited to linear training algorithms, so of little interest yet to ICLR readership. I would recommend publication to a conference where it can reach the large-scale linear ML audience first, such as ICML. This paper is clear and well written in the present form and would probably mostly need a proper benchmark on a large scale linear task. Obviously, when the authors have convincing DNN learning simulations, they are welcome to target ICLR, but can the flash memory FPGA handle it? For experiments, the choice of MNIST is somewhat bizarre: this task is small and performance is notoriously terrible when using linear approaches (the authors do not even report it) | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. 
I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNear-Data Processing for Machine Learning\nIn computer architecture, near-data processing (NDP) refers to augmenting the memory or the storage with processing power so that it can process the data stored therein. By offloading the computational burden of CPU and saving the need for transferring raw data in its entirety, NDP exhibits a great potential for acceleration and power reduction. Despite this potential, specific research activities on NDP have witnessed only limited success until recently, often owing to performance mismatches between logic and memory process technologies that put a limit on the processing capability of memory. Recently, there have been two major changes in the game, igniting the resurgence of NDP with renewed interest. The first is the success of machine learning (ML), which often demands a great deal of computation for training, requiring frequent transfers of big data. The second is the advent of NAND flash-based solid-state drives (SSDs) containing multicore processors that can accommodate extra computation for data processing. Sparked by these application needs and technological support, we evaluate the potential of NDP for ML using a new SSD platform that allows us to simulate in-storage processing (ISP) of ML workloads. Our platform (named ISP-ML) is a full-fledged simulator of a realistic multi-channel SSD that can execute various ML algorithms using the data stored in the SSD. For thorough performance analysis and in-depth comparison with alternatives, we focus on a specific algorithm: stochastic gradient decent (SGD), which is the de facto standard for training differentiable learning machines including deep neural networks. We implement and compare three variants of SGD (synchronous, Downpour, and elastic averaging) using ISP-ML, exploiting the multiple NAND channels for parallelizing SGD. In addition, we compare the performance of ISP and that of conventional in-host processing, revealing the advantages of ISP. Based on the advantages and limitations identified through our experiments, we further discuss directions for future research on ISP for accelerating ML.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nFor more than a decade, near data processing has been a key requirement for large scale linear learning platforms, as the time to load the data exceeds the learning time, and this has justified the introduction of approaches such as Spark Deep learning usually deals with the data that can be contained in a single machine and the bottleneck is often the CPU-GPU bus or the GPU-GPU-bus, so a method that overcomes this bottleneck could be relevant. Unfortunately, this work is still very preliminary and limited to linear training algorithms, so of little interest yet to ICLR readership. I would recommend publication to a conference where it can reach the large-scale linear ML audience first, such as ICML. This paper is clear and well written in the present form and would probably mostly need a proper benchmark on a large scale linear task. Obviously, when the authors have convincing DNN learning simulations, they are welcome to target ICLR, but can the flash memory FPGA handle it? For experiments, the choice of MNIST is somewhat bizarre: this task is small and performance is notoriously terrible when using linear approaches (the authors do not even report it)",
"role": "assistant"
}
]
| [
"For more than a decade, near data processing has been a key requirement for large scale linear learning platforms, as the time to load the data exceeds the learning time, and this has justified the introduction of approaches such as Spark Deep learning usually deals with the data that can be contained in a single machine and the bottleneck is often the CPU-GPU bus or the GPU-GPU-bus, so a method that overcomes this bottleneck could be relevant.",
"Unfortunately, this work is still very preliminary and limited to linear training algorithms, so of little interest yet to ICLR readership.",
"I would recommend publication to a conference where it can reach the large-scale linear ML audience first, such as ICML.",
"This paper is clear and well written in the present form and would probably mostly need a proper benchmark on a large scale linear task.",
"Obviously, when the authors have convincing DNN learning simulations, they are welcome to target ICLR, but can the flash memory FPGA handle it?",
"For experiments, the choice of MNIST is somewhat bizarre: this task is small and performance is notoriously terrible when using linear approaches (the authors do not even report it)"
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 6,
"praise": 1,
"presentation_and_reporting": 1,
"results_and_discussion": 0,
"suggestion_and_solution": 3,
"total": 6
} | 0.333333 | 0 | 0.333333 | 1 | 0.166667 | 0.166667 | 0 | 0.5 | 6 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 2.5 | 1.726661 | 0.773339 |
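Of the three SGD variants the abstract parallelizes across NAND channels, elastic averaging is the least familiar. A minimal sketch of its update rule on a toy quadratic, following the usual EASGD formulation (Zhang et al., 2015); the channel count, learning rate, and elasticity `rho` are placeholders, and the real ISP-ML runs inside the SSD rather than in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, dim, lr, rho = 4, 8, 0.1, 0.5

def grad(w):
    # Stand-in gradient: quadratic loss ||w - 1||^2 / 2 with minimum at all-ones
    return w - 1.0

workers = [rng.normal(size=dim) for _ in range(n_channels)]  # one replica per channel
center = np.zeros(dim)                                       # parameters held by the host

for step in range(100):
    for i, w in enumerate(workers):
        # Local SGD step plus an elastic pull toward the center variable
        workers[i] = w - lr * (grad(w) + rho * (w - center))
    # The center in turn moves toward the average of the workers
    center = center + lr * rho * sum(w - center for w in workers)
```

Synchronous SGD would instead average the channel gradients every step, and Downpour would let each channel push gradients and pull parameters asynchronously; the elastic variant tolerates stale replicas, which is why it suits loosely coupled channel processors.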
Near-Data Processing for Machine Learning | In computer architecture, near-data processing (NDP) refers to augmenting the memory or the storage with processing power so that it can process the data stored therein. By offloading the computational burden of CPU and saving the need for transferring raw data in its entirety, NDP exhibits a great potential for acceleration and power reduction. Despite this potential, specific research activities on NDP have witnessed only limited success until recently, often owing to performance mismatches between logic and memory process technologies that put a limit on the processing capability of memory. Recently, there have been two major changes in the game, igniting the resurgence of NDP with renewed interest. The first is the success of machine learning (ML), which often demands a great deal of computation for training, requiring frequent transfers of big data. The second is the advent of NAND flash-based solid-state drives (SSDs) containing multicore processors that can accommodate extra computation for data processing. Sparked by these application needs and technological support, we evaluate the potential of NDP for ML using a new SSD platform that allows us to simulate in-storage processing (ISP) of ML workloads. Our platform (named ISP-ML) is a full-fledged simulator of a realistic multi-channel SSD that can execute various ML algorithms using the data stored in the SSD. For thorough performance analysis and in-depth comparison with alternatives, we focus on a specific algorithm: stochastic gradient decent (SGD), which is the de facto standard for training differentiable learning machines including deep neural networks. We implement and compare three variants of SGD (synchronous, Downpour, and elastic averaging) using ISP-ML, exploiting the multiple NAND channels for parallelizing SGD. In addition, we compare the performance of ISP and that of conventional in-host processing, revealing the advantages of ISP. Based on the advantages and limitations identified through our experiments, we further discuss directions for future research on ISP for accelerating ML. | Combining storage and processing capabilities is an interesting research topic because data transfer is a major issue for many machine learning tasks. The paper itself is well-written, but unfortunately addresses a lot of things only to medium depth (probably due length constraints). My opinion is that a journal with an in-depth discussion of the technical details would be a better target for this paper. Even though the researchers took an interesting approach to evaluate the performance of the system, it's difficult for me to grasp the expected practical improvements of this approach. With such a big focus on GPU (and more specialized hardware such as TPUs), the one question that comes to mind: By how much does this - or do you expect it to - beat the latest and greatest GPU on a real task? I don't consider myself an expert on this topic even though I have some experience with SystemC. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. 
Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNear-Data Processing for Machine Learning\nIn computer architecture, near-data processing (NDP) refers to augmenting the memory or the storage with processing power so that it can process the data stored therein. By offloading the computational burden of CPU and saving the need for transferring raw data in its entirety, NDP exhibits a great potential for acceleration and power reduction. Despite this potential, specific research activities on NDP have witnessed only limited success until recently, often owing to performance mismatches between logic and memory process technologies that put a limit on the processing capability of memory. Recently, there have been two major changes in the game, igniting the resurgence of NDP with renewed interest. The first is the success of machine learning (ML), which often demands a great deal of computation for training, requiring frequent transfers of big data. The second is the advent of NAND flash-based solid-state drives (SSDs) containing multicore processors that can accommodate extra computation for data processing. Sparked by these application needs and technological support, we evaluate the potential of NDP for ML using a new SSD platform that allows us to simulate in-storage processing (ISP) of ML workloads. Our platform (named ISP-ML) is a full-fledged simulator of a realistic multi-channel SSD that can execute various ML algorithms using the data stored in the SSD. For thorough performance analysis and in-depth comparison with alternatives, we focus on a specific algorithm: stochastic gradient decent (SGD), which is the de facto standard for training differentiable learning machines including deep neural networks. We implement and compare three variants of SGD (synchronous, Downpour, and elastic averaging) using ISP-ML, exploiting the multiple NAND channels for parallelizing SGD. In addition, we compare the performance of ISP and that of conventional in-host processing, revealing the advantages of ISP. Based on the advantages and limitations identified through our experiments, we further discuss directions for future research on ISP for accelerating ML.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nCombining storage and processing capabilities is an interesting research topic because data transfer is a major issue for many machine learning tasks. The paper itself is well-written, but unfortunately addresses a lot of things only to medium depth (probably due length constraints). My opinion is that a journal with an in-depth discussion of the technical details would be a better target for this paper. Even though the researchers took an interesting approach to evaluate the performance of the system, it*s difficult for me to grasp the expected practical improvements of this approach. With such a big focus on GPU (and more specialized hardware such as TPUs), the one question that comes to mind: By how much does this - or do you expect it to - beat the latest and greatest GPU on a real task? I don*t consider myself an expert on this topic even though I have some experience with SystemC.",
"role": "assistant"
}
]
| [
"Combining storage and processing capabilities is an interesting research topic because data transfer is a major issue for many machine learning tasks.",
"The paper itself is well-written, but unfortunately addresses a lot of things only to medium depth (probably due length constraints).",
"My opinion is that a journal with an in-depth discussion of the technical details would be a better target for this paper.",
"Even though the researchers took an interesting approach to evaluate the performance of the system, it*s difficult for me to grasp the expected practical improvements of this approach.",
"With such a big focus on GPU (and more specialized hardware such as TPUs), the one question that comes to mind: By how much does this - or do you expect it to - beat the latest and greatest GPU on a real task?",
"I don*t consider myself an expert on this topic even though I have some experience with SystemC."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 3,
"praise": 3,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 6
} | 0.5 | 0 | 0.5 | 0.5 | 0.5 | 0.166667 | 0.166667 | 0.166667 | 6 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.5 | 1.726661 | 0.773339 |
Near-Data Processing for Machine Learning | In computer architecture, near-data processing (NDP) refers to augmenting the memory or the storage with processing power so that it can process the data stored therein. By offloading the computational burden of CPU and saving the need for transferring raw data in its entirety, NDP exhibits a great potential for acceleration and power reduction. Despite this potential, specific research activities on NDP have witnessed only limited success until recently, often owing to performance mismatches between logic and memory process technologies that put a limit on the processing capability of memory. Recently, there have been two major changes in the game, igniting the resurgence of NDP with renewed interest. The first is the success of machine learning (ML), which often demands a great deal of computation for training, requiring frequent transfers of big data. The second is the advent of NAND flash-based solid-state drives (SSDs) containing multicore processors that can accommodate extra computation for data processing. Sparked by these application needs and technological support, we evaluate the potential of NDP for ML using a new SSD platform that allows us to simulate in-storage processing (ISP) of ML workloads. Our platform (named ISP-ML) is a full-fledged simulator of a realistic multi-channel SSD that can execute various ML algorithms using the data stored in the SSD. For thorough performance analysis and in-depth comparison with alternatives, we focus on a specific algorithm: stochastic gradient decent (SGD), which is the de facto standard for training differentiable learning machines including deep neural networks. We implement and compare three variants of SGD (synchronous, Downpour, and elastic averaging) using ISP-ML, exploiting the multiple NAND channels for parallelizing SGD. In addition, we compare the performance of ISP and that of conventional in-host processing, revealing the advantages of ISP. Based on the advantages and limitations identified through our experiments, we further discuss directions for future research on ISP for accelerating ML. | While the idea of moving the processing for machine learning into silicon contained within the (SSD) data storage devices is intriguing and offers the potential for low-power efficient computation, it is a rather specialized topic, so I don't feel it will be of especially wide interest to the ICLR audience. The paper describes simulation results, rather than actual hardware implementation, and describes implementations of existing algorithms. The comparisons of algorithms' train/test performance does not seem relevant (since there is no novelty in the algorithms) and the use of a single layer perceptron on MNIST calls into question the practicality of the system, since this is a tiny neural network by today's standards. I did not understand from the paper how it was thought that this could scale to contemporary scaled networks, in terms of numbers of parameters for both storage and bandwidth. I am not an expert in this area, so have not evaluated in depth. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? 
The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNear-Data Processing for Machine Learning\nIn computer architecture, near-data processing (NDP) refers to augmenting the memory or the storage with processing power so that it can process the data stored therein. By offloading the computational burden of CPU and saving the need for transferring raw data in its entirety, NDP exhibits a great potential for acceleration and power reduction. Despite this potential, specific research activities on NDP have witnessed only limited success until recently, often owing to performance mismatches between logic and memory process technologies that put a limit on the processing capability of memory. Recently, there have been two major changes in the game, igniting the resurgence of NDP with renewed interest. The first is the success of machine learning (ML), which often demands a great deal of computation for training, requiring frequent transfers of big data. The second is the advent of NAND flash-based solid-state drives (SSDs) containing multicore processors that can accommodate extra computation for data processing. Sparked by these application needs and technological support, we evaluate the potential of NDP for ML using a new SSD platform that allows us to simulate in-storage processing (ISP) of ML workloads. Our platform (named ISP-ML) is a full-fledged simulator of a realistic multi-channel SSD that can execute various ML algorithms using the data stored in the SSD. For thorough performance analysis and in-depth comparison with alternatives, we focus on a specific algorithm: stochastic gradient decent (SGD), which is the de facto standard for training differentiable learning machines including deep neural networks. We implement and compare three variants of SGD (synchronous, Downpour, and elastic averaging) using ISP-ML, exploiting the multiple NAND channels for parallelizing SGD. In addition, we compare the performance of ISP and that of conventional in-host processing, revealing the advantages of ISP. Based on the advantages and limitations identified through our experiments, we further discuss directions for future research on ISP for accelerating ML.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nWhile the idea of moving the processing for machine learning into silicon contained within the (SSD) data storage devices is intriguing and offers the potential for low-power efficient computation, it is a rather specialized topic, so I don*t feel it will be of especially wide interest to the ICLR audience. The paper describes simulation results, rather than actual hardware implementation, and describes implementations of existing algorithms. The comparisons of algorithms* train/test performance does not seem relevant (since there is no novelty in the algorithms) and the use of a single layer perceptron on MNIST calls into question the practicality of the system, since this is a tiny neural network by today*s standards. I did not understand from the paper how it was thought that this could scale to contemporary scaled networks, in terms of numbers of parameters for both storage and bandwidth. I am not an expert in this area, so have not evaluated in depth.",
"role": "assistant"
}
]
| [
"While the idea of moving the processing for machine learning into silicon contained within the (SSD) data storage devices is intriguing and offers the potential for low-power efficient computation, it is a rather specialized topic, so I don*t feel it will be of especially wide interest to the ICLR audience.",
"The paper describes simulation results, rather than actual hardware implementation, and describes implementations of existing algorithms.",
"The comparisons of algorithms* train/test performance does not seem relevant (since there is no novelty in the algorithms) and the use of a single layer perceptron on MNIST calls into question the practicality of the system, since this is a tiny neural network by today*s standards.",
"I did not understand from the paper how it was thought that this could scale to contemporary scaled networks, in terms of numbers of parameters for both storage and bandwidth.",
"I am not an expert in this area, so have not evaluated in depth."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 5,
"praise": 1,
"presentation_and_reporting": 0,
"results_and_discussion": 1,
"suggestion_and_solution": 0,
"total": 5
} | 0.6 | 0 | 0.2 | 1 | 0.2 | 0 | 0.2 | 0 | 5 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 2.2 | 1.189924 | 1.010076 |
Exploring the Application of Deep Learning for Supervised Learning Problems | One of the main difficulties in applying deep neural nets (DNNs) to new domains is the need to explore multiple architectures in order to discover ones that perform well. We analyze a large set of DNNs across multiple domains and derive insights regarding their effectiveness. We also analyze the characteristics of various DNNs and the general effect they may have on performance. Finally, we explore the application of meta-learning to the problem of architecture ranking. We demonstrate that by using topological features and modeling the changes in its weights, biases and activation functions layers of the initial training steps, we are able to rank architectures based on their predicted performance. We consider this work to be a first step in the important and challenging direction of exploring the space of different neural network architectures. | The topic is very interesting, but the paper is not convincing. Specifically, the experiment part is weak. The study should include datasets that are familiar to the community as well as the ones 'that are not often addressed by deep learning'. The comparison to other approaches is not comprehensive. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nExploring the Application of Deep Learning for Supervised Learning Problems\nOne of the main difficulties in applying deep neural nets (DNNs) to new domains is the need to explore multiple architectures in order to discover ones that perform well. We analyze a large set of DNNs across multiple domains and derive insights regarding their effectiveness. We also analyze the characteristics of various DNNs and the general effect they may have on performance. Finally, we explore the application of meta-learning to the problem of architecture ranking. We demonstrate that by using topological features and modeling the changes in its weights, biases and activation functions layers of the initial training steps, we are able to rank architectures based on their predicted performance. We consider this work to be a first step in the important and challenging direction of exploring the space of different neural network architectures.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe topic is very interesting, but the paper is not convincing. Specifically, the experiment part is weak. The study should include datasets that are familiar to the community as well as the ones *that are not often addressed by deep learning*. The comparison to other approaches is not comprehensive.",
"role": "assistant"
}
]
| [
"The topic is very interesting, but the paper is not convincing.",
"Specifically, the experiment part is weak.",
"The study should include datasets that are familiar to the community as well as the ones *that are not often addressed by deep learning*.",
"The comparison to other approaches is not comprehensive."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 3,
"praise": 1,
"presentation_and_reporting": 0,
"results_and_discussion": 0,
"suggestion_and_solution": 1,
"total": 4
} | 0.75 | 0 | 0.25 | 0.75 | 0.25 | 0 | 0 | 0.25 | 4 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 2.25 | 0.971623 | 1.278377 |
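The architecture-ranking step described in this record's abstract (predicting relative performance from topological features and early training statistics) reduces to ordinary pairwise classification. A sketch of that reduction on synthetic data; the feature set and the scikit-learn classifier are illustrative stand-ins, since the record does not specify the ranker:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_archs, n_feats = 50, 10
X = rng.normal(size=(n_archs, n_feats))  # topology + early weight/bias statistics
acc = rng.random(n_archs)                # measured accuracy of each trained architecture

# Reduce ranking to binary classification on feature differences:
# label 1 when architecture i outperformed architecture j.
pairs = [X[i] - X[j] for i in range(n_archs) for j in range(n_archs) if i != j]
labels = [int(acc[i] > acc[j]) for i in range(n_archs) for j in range(n_archs) if i != j]

ranker = RandomForestClassifier(n_estimators=100).fit(np.array(pairs), labels)

def rank_score(x_new, X_ref):
    # Score a new architecture by how often it is predicted to beat the references
    return ranker.predict(x_new - X_ref).mean()
```

If the candidate pool lacks diversity, most pairs are near-ties and the learned ranker carries little signal, which is the concern the following review raises about the thousands of generated architectures.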
Exploring the Application of Deep Learning for Supervised Learning Problems | One of the main difficulties in applying deep neural nets (DNNs) to new domains is the need to explore multiple architectures in order to discover ones that perform well. We analyze a large set of DNNs across multiple domains and derive insights regarding their effectiveness. We also analyze the characteristics of various DNNs and the general effect they may have on performance. Finally, we explore the application of meta-learning to the problem of architecture ranking. We demonstrate that by using topological features and modeling the changes in its weights, biases and activation functions layers of the initial training steps, we are able to rank architectures based on their predicted performance. We consider this work to be a first step in the important and challenging direction of exploring the space of different neural network architectures. | This paper presents an intriguing study of how one can pose architecture search as a meta learning problem. By collecting features from networks trained on various datasets and training a “ranking classifier” (the actual details of the classifier do not seem to be described in detail) one can potentially infer what a good architecture for a new problem could be by simply running the ranker on the extracted features for a new problem setup. One notable comment from the paper is that the authors fix some important hyper-parameters for all the networks. I am of the opinion that optimizing the learning rate (and its decay schedule) is actually quite important. I hypothesize that a lot of the conclusions of this paper may change quite a bit if the authors did an actual search over the rates instead. I suspect that instead of training 11k nets, one can train 2k nets with 5 learning rates each and get a much better result that is actually compelling. I am not convinced that the protocol for generating the various architectures is doing a good job at creating a diversity of architecture (simply because of the max depth of 8 layers and 14 components overall). I suspect that most of these generated architectures are actually almost identical performance-wise and that it’s a waste to train so many of them on so many tasks. Unless the authors are already doing this, they should define a pruning mechanism that filters out nets that are too similar to already existing ones. The batch normalization experiments in Table 2 seem odd and under-explained. It is also well-known that the optimal learning rates when using batch norm vs. not using batch norm can differ by an order of magnitude so given the fixed learning rate throughout all experiments, I take these results with some grain of salt. I am not sure we got many insights into the kinds of architectures that ended up being at the top. Either visualizations, or trends (or both), would be great. This work seems to conflate the study of parallel vs. serial architectures with the study of meta learning, which are somewhat distinct issues. I take issue with the table that compares parallel vs. serial performance (table 2) simply because the right way would be to filter the architectures by the same number of parameters / capacity. Ultimately the conclusion seems to be that when applying deep nets in a new domain, it is difficult to come up with a good architecture in advance. 
In that sense, it is hard to see the paper as a constructive result, because its conclusions are that while the ranker may do a good job often-times, it's not that reliable. Thus I am not convinced that this particular result will be of practical use to folks who are intending to use deep nets for a new domain. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nExploring the Application of Deep Learning for Supervised Learning Problems\nOne of the main difficulties in applying deep neural nets (DNNs) to new domains is the need to explore multiple architectures in order to discover ones that perform well. We analyze a large set of DNNs across multiple domains and derive insights regarding their effectiveness. We also analyze the characteristics of various DNNs and the general effect they may have on performance. Finally, we explore the application of meta-learning to the problem of architecture ranking. We demonstrate that by using topological features and modeling the changes in its weights, biases and activation functions layers of the initial training steps, we are able to rank architectures based on their predicted performance. We consider this work to be a first step in the important and challenging direction of exploring the space of different neural network architectures.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents an intriguing study of how one can pose architecture search as a meta learning problem. By collecting features from networks trained on various datasets and training a “ranking classifier” (the actual details of the classifier do not seem to be described in detail) one can potentially infer what a good architecture for a new problem could be by simply running the ranker on the extracted features for a new problem setup. One notable comment from the paper is that the authors fix some important hyper-parameters for all the networks. I am of the opinion that optimizing the learning rate (and its decay schedule) is actually quite important. I hypothesize that a lot of the conclusions of this paper may change quite a bit if the authors did an actual search over the rates instead. I suspect that instead of training 11k nets, one can train 2k nets with 5 learning rates each and get a much better result that is actually compelling. I am not convinced that the protocol for generating the various architectures is doing a good job at creating a diversity of architecture (simply because of the max depth of 8 layers and 14 components overall). I suspect that most of these generated architectures are actually almost identical performance-wise and that it’s a waste to train so many of them on so many tasks. Unless the authors are already doing this, they should define a pruning mechanism that filters out nets that are too similar to already existing ones. The batch normalization experiments in Table 2 seem odd and under-explained. It is also well-known that the optimal learning rates when using batch norm vs. not using batch norm can differ by an order of magnitude so given the fixed learning rate throughout all experiments, I take these results with some grain of salt. I am not sure we got many insights into the kinds of architectures that ended up being at the top. Either visualizations, or trends (or both), would be great. This work seems to conflate the study of parallel vs. serial architectures with the study of meta learning, which are somewhat distinct issues. I take issue with the table that compares parallel vs. 
serial performance (table 2) simply because the right way would be to filter the architectures by the same number of parameters / capacity. Ultimately the conclusion seems to be that when applying deep nets in a new domain, it is difficult to come up with a good architecture in advance. In that sense, it is hard to see the paper as a constructive result, because its conclusions are that while the ranker may do a good job often-times, it's not that reliable. Thus I am not convinced that this particular result will be of practical use to folks who are intending to use deep nets for a new domain.",
"role": "assistant"
}
]
| [
"This paper presents an intriguing study of how one can pose architecture search as a meta learning problem.",
"By collecting features from networks trained on various datasets and training a “ranking classifier” (the actual details of the classifier do not seem to be described in detail) one can potentially infer what a good architecture for a new problem could be by simply running the ranker on the extracted features for a new problem setup.",
"One notable comment from the paper is that the authors fix some important hyper-parameters for all the networks.",
"I am of the opinion that optimizing the learning rate (and its decay schedule) is actually quite important.",
"I hypothesize that a lot of the conclusions of this paper may change quite a bit if the authors did an actual search over the rates instead.",
"I suspect that instead of training 11k nets, one can train 2k nets with 5 learning rates each and get a much better result that is actually compelling.",
"I am not convinced that the protocol for generating the various architectures is doing a good job at creating a diversity of architecture (simply because of the max depth of 8 layers and 14 components overall).",
"I suspect that most of these generated architectures are actually almost identical performance-wise and that it’s a waste to train so many of them on so many tasks.",
"Unless the authors are already doing this, they should define a pruning mechanism that filters out nets that are too similar to already existing ones.",
"The batch normalization experiments in Table 2 seem odd and under-explained.",
"It is also well-known that the optimal learning rates when using batch norm vs. not using batch norm can differ by an order of magnitude so given the fixed learning rate throughout all experiments, I take these results with some grain of salt.",
"I am not sure we got many insights into the kinds of architectures that ended up being at the top.",
"Either visualizations, or trends (or both), would be great.",
"This work seems to conflate the study of parallel vs. serial architectures with the study of meta learning, which are somewhat distinct issues.",
"I take issue with the table that compares parallel vs. serial performance (table 2) simply because the right way would be to filter the architectures by the same number of parameters / capacity.",
"Ultimately the conclusion seems to be that when applying deep nets in a new domain, it is difficult to come up with a good architecture in advance.",
"In that sense, it is hard to see the paper as a constructive result, because it’s conclusions are that while the ranker may do a good job often-times, it’s not that reliable.",
"Thus I am not convinced that this particular result will be of practical use to folks who are intending to use deep nets for a new domain."
] | {
"criticism": 7,
"example": 0,
"importance_and_relevance": 5,
"materials_and_methods": 14,
"praise": 4,
"presentation_and_reporting": 2,
"results_and_discussion": 7,
"suggestion_and_solution": 3,
"total": 18
} | 0.388889 | 0 | 0.277778 | 0.777778 | 0.222222 | 0.111111 | 0.388889 | 0.166667 | 18 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.333333 | 1.938773 | 0.394561 |
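The ranking idea in the record above can be made concrete, but neither the feature set nor the ranking classifier is specified in the abstract (the review flags the same gap). The sketch below is therefore only a plausible reconstruction: the per-layer weight-change statistics and the logistic pairwise ranker are assumptions, not the paper's method.

```python
import numpy as np

def early_training_features(weight_snapshots):
    """Summarize per-layer weight change over the first training steps.

    weight_snapshots: list (one entry per logged step) of lists of
    np.ndarray (one array per layer). Hypothetical feature set -- the
    paper's exact topological/weight features are not given.
    """
    feats = []
    for w0, w1 in zip(weight_snapshots[0], weight_snapshots[-1]):
        delta = w1 - w0
        feats += [np.linalg.norm(delta), float(delta.mean()),
                  float(delta.std()), np.linalg.norm(w1)]
    return np.asarray(feats)

def fit_pairwise_ranker(X, scores, lr=0.1, steps=500):
    """Logistic pairwise ranker: learn w so that (x_i - x_j) @ w > 0
    whenever architecture i achieved a higher validation score than j."""
    w = np.zeros(X.shape[1])
    pairs = [(i, j) for i in range(len(X)) for j in range(len(X))
             if scores[i] > scores[j]]
    for _ in range(steps):
        grad = np.zeros_like(w)
        for i, j in pairs:
            diff = X[i] - X[j]
            p = 1.0 / (1.0 + np.exp(-diff @ w))   # P(i beats j)
            grad += (p - 1.0) * diff              # gradient of -log p
        w -= lr * grad / max(len(pairs), 1)
    return w

def rank_architectures(X, w):
    return np.argsort(-(X @ w))   # indices, best-predicted first
```

Any pairwise ranking model would slot in here; the logistic form is chosen only because it is the simplest one that turns "architecture i beat architecture j" comparisons into a trainable objective.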
An Empirical Analysis of Deep Network Loss Surfaces | The training of deep neural networks is a high-dimension optimization problem with respect to the loss function of a model. Unfortunately, these functions are of high dimension and non-convex and hence difficult to characterize. In this paper, we empirically investigate the geometry of the loss functions for state-of-the-art networks with multiple stochastic optimization methods. We do this through several experiments that are visualized on polygons to understand how and when these stochastic optimization methods find minima. | I appreciate the work but I do not think the paper is clear enough. Moreover, the authors say *local minima* ~70 times but do not show (except for Figure 11?) that the solutions found are not necessarily local minima. The authors do not talk about the fact that slices of a non-convex problem can look like the ones they show. It is well-known that the first-order methods may just fail to deal with certain non-convex ill-conditioned problems even in low-dimensional noiseless cases, the place/solution where they fail to make progress is not necessarily a local minimum. Some sentences like the one given below suggest that the study is too superficial: *One of the interesting empirical observation is that we often observe is that the incremental improvement of optimization methods decreases rapidly even in non-convex problems.* | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAn Empirical Analysis of Deep Network Loss Surfaces\nThe training of deep neural networks is a high-dimension optimization problem with respect to the loss function of a model. Unfortunately, these functions are of high dimension and non-convex and hence difficult to characterize. In this paper, we empirically investigate the geometry of the loss functions for state-of-the-art networks with multiple stochastic optimization methods. We do this through several experiments that are visualized on polygons to understand how and when these stochastic optimization methods find minima.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nI appreciate the work but I do not think the paper is clear enough. Moreover, the authors say *local minimia* ~70 times but do not show (except for Figure 11?) that the solutions found are not necessarily local minima. The authors do not talk about that fact that slices of a non-convex problem can look like the ones they show. It is well-known that the first-order methods may just fail to deal with certain non-convex ill-conditioned problems even in low-dimensional noiseless cases, the place/solution where they fail to make progress is not necessarily a local minimum. Some sentences like the one given below suggest that the study is too superficial: *One of the interesting empirical observation is that we often observe is that the incremental improvement of optimization methods decreases rapidly even in non-convex problems.*",
"role": "assistant"
}
]
| [
"I appreciate the work but I do not think the paper is clear enough.",
"Moreover, the authors say *local minimia* ~70 times but do not show (except for Figure 11?)",
"that the solutions found are not necessarily local minima.",
"The authors do not talk about that fact that slices of a non-convex problem can look like the ones they show.",
"It is well-known that the first-order methods may just fail to deal with certain non-convex ill-conditioned problems even in low-dimensional noiseless cases, the place/solution where they fail to make progress is not necessarily a local minimum.",
"Some sentences like the one given below suggest that the study is too superficial: *One of the interesting empirical observation is that we often observe is that the incremental improvement of optimization methods decreases rapidly even in non-convex problems.",
"*"
] | {
"criticism": 4,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 2,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 4,
"suggestion_and_solution": 0,
"total": 7
} | 0.571429 | 0.142857 | 0.142857 | 0.285714 | 0.285714 | 0.142857 | 0.571429 | 0 | 7 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 2.142857 | 1.57469 | 0.568167 |
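The review's caveat above — that a benign-looking slice does not establish a local minimum — is easy to reproduce. The sketch below is generic, not the paper's code: it assumes the network's parameters have been flattened into a single vector and that `loss_fn` evaluates the training loss at such a vector.

```python
import numpy as np

def linear_slice(loss_fn, theta_a, theta_b, n_points=50):
    """Loss along the segment theta(alpha) = (1 - alpha)*theta_a + alpha*theta_b.

    Extending alpha slightly past [0, 1] shows behaviour just beyond the
    two solutions. A flat or convex-looking 1-D slice does NOT prove
    either endpoint is a local minimum -- the review's central point.
    """
    alphas = np.linspace(-0.25, 1.25, n_points)
    losses = np.array([loss_fn((1 - a) * theta_a + a * theta_b)
                       for a in alphas])
    return alphas, losses

# Toy 2-D example: a non-convex loss whose 1-D slice can still look tame.
toy_loss = lambda t: np.sin(3 * t[0]) * np.cos(3 * t[1]) + 0.1 * (t @ t)
alphas, losses = linear_slice(toy_loss, np.zeros(2), np.ones(2))
```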
An Empirical Analysis of Deep Network Loss Surfaces | The training of deep neural networks is a high-dimension optimization problem with respect to the loss function of a model. Unfortunately, these functions are of high dimension and non-convex and hence difficult to characterize. In this paper, we empirically investigate the geometry of the loss functions for state-of-the-art networks with multiple stochastic optimization methods. We do this through several experiments that are visualized on polygons to understand how and when these stochastic optimization methods find minima. | This paper provides an extensive analysis of the error loss function for different optimization methods. The presentation is well done and informative. The experimental procedure is clarified sufficiently well. Theoretical evaluations like this are crucial for a wide range of applications and help to better understand and improve the convergence behavior for a given system. Pros: - Important analysis - Good visualizations Cons: - The paper describes mostly the observation that the optima vary for different methods, however doesn't attempt to explain why it happens and how to solve it (aside from batch-norm) - Some fonts are very small (e.g. Fig. 5) | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAn Empirical Analysis of Deep Network Loss Surfaces\nThe training of deep neural networks is a high-dimension optimization problem with respect to the loss function of a model. Unfortunately, these functions are of high dimension and non-convex and hence difficult to characterize. In this paper, we empirically investigate the geometry of the loss functions for state-of-the-art networks with multiple stochastic optimization methods. We do this through several experiments that are visualized on polygons to understand how and when these stochastic optimization methods find minima.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper provides an extensive analysis of the error loss function for different optimization methods. The presentation is well done and informative. The experimental procedure is clarified sufficiently well. Theoretical evaluations like this are crucial for a wide range of applications and help to better understand and improve the convergence behavior for a given system. Pros: - Important analysis - Good visualizations Cons: - The paper describes mostly the observation that the optima vary for different methods, however doesn*t attempt to explain why it happens and how to solve it (aside from batch-norm) - Some fonts are very small (e.g. Fig. 5)",
"role": "assistant"
}
]
| [
"This paper provides an extensive analysis of the error loss function for different optimization methods.",
"The presentation is well done and informative.",
"The experimental procedure is clarified sufficiently well.",
"Theoretical evaluations like this are crucial for a wide range of applications and help to better understand and improve the convergence behavior for a given system.",
"Pros: - Important analysis - Good visualizations Cons: - The paper describes mostly the observation that the optima vary for different methods, however doesn*t attempt to explain why it happens and how to solve it (aside from batch-norm) - Some fonts are very small (e.g.",
"Fig.",
"5)"
] | {
"criticism": 1,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 4,
"praise": 4,
"presentation_and_reporting": 3,
"results_and_discussion": 2,
"suggestion_and_solution": 0,
"total": 7
} | 0.142857 | 0.142857 | 0.142857 | 0.571429 | 0.571429 | 0.428571 | 0.285714 | 0 | 7 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 2.285714 | 1.717547 | 0.568167 |
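The abstract's "visualized on polygons" most plausibly means evaluating the loss on the convex hull of several solutions; that reading is an assumption on our part, not something the abstract confirms. A minimal barycentric version for three solutions:

```python
import numpy as np

def barycentric_slice(loss_fn, t1, t2, t3, n=25):
    """Sample the loss on the triangle spanned by three solutions.

    Each grid point is a*t1 + b*t2 + c*t3 with a + b + c = 1 and
    a, b, c >= 0, which is one way to realize a 'polygon' slice of a
    high-dimensional loss surface (our reading, not necessarily the
    authors' exact scheme).
    """
    grid = []
    for i in range(n + 1):
        for j in range(n + 1 - i):
            a, b = i / n, j / n
            c = 1.0 - a - b
            theta = a * t1 + b * t2 + c * t3
            grid.append((a, b, loss_fn(theta)))
    return np.array(grid)  # columns: a, b, loss at that point
```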
On orthogonality and learning recurrent networks with long term dependencies | It is well known that it is challenging to train deep neural networks and recurrent neural networks for tasks that exhibit long term dependencies. The vanishing or exploding gradient problem is a well known issue associated with these challenges. One approach to addressing vanishing and exploding gradients is to use either soft or hard constraints on weight matrices so as to encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation and can therefore be a desirable property; however, we find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance. This paper explores the issues of optimization convergence, speed and gradient stability using a variety of different methods for encouraging or enforcing orthogonality. In particular we propose a weight matrix factorization and parameterization strategy through which we can bound matrix norms and therein control the degree of expansivity induced during backpropagation. | The paper is well-motivated, and is part of a line of recent work investigating the use of orthogonal weight matrices within recurrent neural networks. While using orthogonal weights addresses the issue of vanishing/exploding gradients, it is unclear whether anything is lost, either in representational power or in trainability, by enforcing orthogonality. As such, an empirical investigation that examines how these properties are affected by deviation from orthogonality is a useful contribution. The paper is clearly written, and the primary formulation for investigating soft orthogonality constraints (representing the weight matrices in their SVD factorized form, which gives explicit control over the singular values) is clean and natural, albeit not necessarily ideal from a practical computational standpoint (as it requires maintaining multiple orthogonal weight matrices each requiring an expensive update step). I am unaware of this approach being investigated previously. The experimental side, however, is somewhat lacking. The paper evaluates two tasks: a copy task, using an RNN architecture without transition non-linearities, and sequential/permuted sequential MNIST. These are reasonable choices for an initial evaluation, but are both toy problems and don't shed much light on the practical aspects of the proposed approaches. An evaluation in a more realistic setting would be valuable (e.g., a language modeling task). Furthermore, while investigating pure RNNs makes sense for evaluating effects of orthogonality, it feels somewhat academic: LSTMs also provide a mechanism to capture longer-term dependencies, and in the tasks where the proposed approach was compared directly to an LSTM, it was significantly outperformed. It would be very interesting to see the effects of the proposed soft orthogonality constraint in additional architectures (e.g., deep feed-forward architectures, or whether there's any benefit when embedded within an LSTM, although this seems doubtful). Overall, the paper addresses a clear-cut question with a well-motivated approach, and has interesting findings on some toy datasets. As such I think it could provide a valuable contribution. However, the significance of the work is restricted by the limited experimental settings (both datasets and network architectures). | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about.
This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOn orthogonality and learning recurrent networks with long term dependencies\nIt is well known that it is challenging to train deep neural networks and recurrent neural networks for tasks that exhibit long term dependencies. The vanishing or exploding gradient problem is a well known issue associated with these challenges. One approach to addressing vanishing and exploding gradients is to use either soft or hard constraints on weight matrices so as to encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation and can therefore be a desirable property; however, we find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance. This paper explores the issues of optimization convergence, speed and gradient stability using a variety of different methods for encouraging or enforcing orthogonality. In particular we propose a weight matrix factorization and parameterization strategy through which we we can bound matrix norms and therein control the degree of expansivity induced during backpropagation.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper is well-motivated, and is part of a line of recent work investigating the use of orthogonal weight matrices within recurrent neural networks. While using orthogonal weights addresses the issue of vanishing/exploding gradients, it is unclear whether anything is lost, either in representational power or in trainability, by enforcing orthogonality. As such, an empirical investigation that examines how these properties are affected by deviation from orthogonality is a useful contribution. The paper is clearly written, and the primary formulation for investigating soft orthogonality constraints (representing the weight matrices in their SVD factorized form, which gives explicit control over the singular values) is clean and natural, albeit not necessarily ideal from a practical computational standpoint (as it requires maintaining multiple orthogonal weight matrices each requiring an expensive update step). I am unaware of this approach being investigated previously. The experimental side, however, is somewhat lacking. The paper evaluates two tasks: a copy task, using an RNN architecture without transition non-linearities, and sequential/permuted sequential MNIST. These are reasonable choices for an initial evaluation, but are both toy problems and don*t shed much light on the practical aspects of the proposed approaches. An evaluation in a more realistic setting would be valuable (e.g., a language modeling task). Furthermore, while investigating pure RNN*s makes sense for evaluating effects of orthogonality, it feels somewhat academic: LSTMs also provide a mechanism to capture longer-term dependencies, and in the tasks where the proposed approach was compared directly to an LSTM, it was significantly outperformed. It would be very interesting to see the effects of the proposed soft orthogonality constraint in additional architectures (e.g., deep feed-forward architectures, or whether there*s any benefit when embedded within an LSTM, although this seems doubtful). Overall, the paper addresses a clear-cut question with a well-motivated approach, and has interesting findings on some toy datasets. 
As such I think it could provide a valuable contribution. However, the significance of the work is restricted by the limited experimental settings (both datasets and network architectures).",
"role": "assistant"
}
]
| [
"The paper is well-motivated, and is part of a line of recent work investigating the use of orthogonal weight matrices within recurrent neural networks.",
"While using orthogonal weights addresses the issue of vanishing/exploding gradients, it is unclear whether anything is lost, either in representational power or in trainability, by enforcing orthogonality.",
"As such, an empirical investigation that examines how these properties are affected by deviation from orthogonality is a useful contribution.",
"The paper is clearly written, and the primary formulation for investigating soft orthogonality constraints (representing the weight matrices in their SVD factorized form, which gives explicit control over the singular values) is clean and natural, albeit not necessarily ideal from a practical computational standpoint (as it requires maintaining multiple orthogonal weight matrices each requiring an expensive update step).",
"I am unaware of this approach being investigated previously.",
"The experimental side, however, is somewhat lacking.",
"The paper evaluates two tasks: a copy task, using an RNN architecture without transition non-linearities, and sequential/permuted sequential MNIST.",
"These are reasonable choices for an initial evaluation, but are both toy problems and don*t shed much light on the practical aspects of the proposed approaches.",
"An evaluation in a more realistic setting would be valuable (e.g., a language modeling task).",
"Furthermore, while investigating pure RNN*s makes sense for evaluating effects of orthogonality, it feels somewhat academic: LSTMs also provide a mechanism to capture longer-term dependencies, and in the tasks where the proposed approach was compared directly to an LSTM, it was significantly outperformed.",
"It would be very interesting to see the effects of the proposed soft orthogonality constraint in additional architectures (e.g., deep feed-forward architectures, or whether there*s any benefit when embedded within an LSTM, although this seems doubtful).",
"Overall, the paper addresses a clear-cut question with a well-motivated approach, and has interesting findings on some toy datasets.",
"As such I think it could provide a valuable contribution.",
"However, the significance of the work is restricted by the limited experimental settings (both datasets and network architectures)."
] | {
"criticism": 6,
"example": 0,
"importance_and_relevance": 6,
"materials_and_methods": 12,
"praise": 6,
"presentation_and_reporting": 2,
"results_and_discussion": 4,
"suggestion_and_solution": 2,
"total": 14
} | 0.428571 | 0 | 0.428571 | 0.857143 | 0.428571 | 0.142857 | 0.285714 | 0.142857 | 14 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.714286 | 2.698503 | 0.015782 |
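Both reviews of this paper reference the SVD-factorized parameterization with controllable singular values. Below is a minimal numpy sketch of that idea, assuming the margin formulation W = U diag(s) V^T with s squashed into [1 - m, 1 + m]; the QR-based orthogonal factors are a simplification (the paper maintains its factors with their own update scheme), so treat this as illustrative only.

```python
import numpy as np

def margin_parameterized_matrix(n, margin, rng=np.random.default_rng(0)):
    """Build W = U diag(s) V^T with singular values confined to
    [1 - margin, 1 + margin], so backprop through W can expand or
    contract gradient norms by at most that factor.

    U and V are drawn orthogonal via QR for illustration; the paper
    keeps them orthogonal with geodesic-style updates instead.
    """
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    raw = rng.standard_normal(n)
    # Sigmoid-squash the raw parameters into the allowed band.
    s = 1.0 + margin * (2.0 / (1.0 + np.exp(-raw)) - 1.0)
    return U @ np.diag(s) @ V.T

W = margin_parameterized_matrix(64, margin=0.1)
print(np.linalg.svd(W, compute_uv=False)[[0, -1]])  # both within [0.9, 1.1]
```

Setting margin = 0 recovers a hard orthogonality constraint, which is exactly the regime the reviews discuss as potentially slowing convergence.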
On orthogonality and learning recurrent networks with long term dependencies | It is well known that it is challenging to train deep neural networks and recurrent neural networks for tasks that exhibit long term dependencies. The vanishing or exploding gradient problem is a well known issue associated with these challenges. One approach to addressing vanishing and exploding gradients is to use either soft or hard constraints on weight matrices so as to encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation and can therefore be a desirable property; however, we find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance. This paper explores the issues of optimization convergence, speed and gradient stability using a variety of different methods for encouraging or enforcing orthogonality. In particular we propose a weight matrix factorization and parameterization strategy through which we can bound matrix norms and therein control the degree of expansivity induced during backpropagation. | Vanishing and exploding gradients make the optimization of RNNs very challenging. The issue becomes worse on tasks with long term dependencies that require longer RNNs. One of the suggested approaches to improve the optimization is to optimize in a way that the transfer matrix is almost orthogonal. This paper investigates the role of orthogonality on the optimization and learning which is very important. The writing is sound and clear and arguments are easy to follow. The suggested optimization method is very interesting. The main shortcoming of this paper is the experiments which I find very important and I hope authors can update the experiment section significantly. Below I mention some comments on the experiment section: 1- I think the experiments are not enough. At the very least, report the result on the adding problem and language modeling task on Penn Treebank. 2- I understand that the copying task becomes difficult with non-linearity. However, removing non-linearity makes the optimization very different and therefore, it is very hard to conclude anything from the results on the copying task. 3- I was not able to find the number of hidden units used for RNNs in different tasks. 4- Please report the running time of your method in the paper for different numbers of hidden units, compare it with the SGD and mention the NN package you have used. 5- The results on Table 1 and Table 2 might also suggest that the orthogonality is not really helpful since even without a margin, the numbers are very close compared to the case when you find the optimal margin. Am I right? 6- What do we learn from Figure 2? It is left without any discussion. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories?
Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOn orthogonality and learning recurrent networks with long term dependencies\nIt is well known that it is challenging to train deep neural networks and recurrent neural networks for tasks that exhibit long term dependencies. The vanishing or exploding gradient problem is a well known issue associated with these challenges. One approach to addressing vanishing and exploding gradients is to use either soft or hard constraints on weight matrices so as to encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation and can therefore be a desirable property; however, we find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance. This paper explores the issues of optimization convergence, speed and gradient stability using a variety of different methods for encouraging or enforcing orthogonality. In particular we propose a weight matrix factorization and parameterization strategy through which we we can bound matrix norms and therein control the degree of expansivity induced during backpropagation.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nVanishing and exploding gradients makes the optimization of RNNs very challenging. The issue becomes worse on tasks with long term dependencies that requires longer RNNs. One of the suggested approaches to improve the optimization is to optimize in a way that the transfer matrix is almost orthogonal. This paper investigate the role of orthogonality on the optimization and learning which is very important. The writing is sound and clear and arguments are easy to follow. The suggested optimization method is very interesting. The main shortcoming of this paper is the experiments which I find very important and I hope authors can update the experiment section significantly. Below I mention some comments on the experiment section: 1- I think the experiments are not enough. At the very least, report the result on the adding problem and language modeling task on Penn Treebank. 2- I understand that the copying task becomes difficult with non-lineary. However, removing non-linearity makes the optimization very different and therefore, it is very hard to conclude anything from the results on the copying task. 3- I was not able to find the number of hidden units used for RNNs in different tasks. 4- Please report the running time of your method in the paper for different numbers of hidden units, compare it with the SGD and mention the NN package you have used. 5- The results on Table 1 and Table 2 might also suggest that the orthogonality is not really helpful since even without a margin, the numbers are very close compare to the case when you find the optimal margin. Am I right? 6- What do we learn from Figure 2? It is left without any discussion.",
"role": "assistant"
}
]
| [
"Vanishing and exploding gradients makes the optimization of RNNs very challenging.",
"The issue becomes worse on tasks with long term dependencies that requires longer RNNs.",
"One of the suggested approaches to improve the optimization is to optimize in a way that the transfer matrix is almost orthogonal.",
"This paper investigate the role of orthogonality on the optimization and learning which is very important.",
"The writing is sound and clear and arguments are easy to follow.",
"The suggested optimization method is very interesting.",
"The main shortcoming of this paper is the experiments which I find very important and I hope authors can update the experiment section significantly.",
"Below I mention some comments on the experiment section: 1- I think the experiments are not enough.",
"At the very least, report the result on the adding problem and language modeling task on Penn Treebank.",
"2- I understand that the copying task becomes difficult with non-lineary.",
"However, removing non-linearity makes the optimization very different and therefore, it is very hard to conclude anything from the results on the copying task.",
"3- I was not able to find the number of hidden units used for RNNs in different tasks.",
"4- Please report the running time of your method in the paper for different numbers of hidden units, compare it with the SGD and mention the NN package you have used.",
"5- The results on Table 1 and Table 2 might also suggest that the orthogonality is not really helpful since even without a margin, the numbers are very close compare to the case when you find the optimal margin.",
"Am I right?",
"6- What do we learn from Figure 2?",
"It is left without any discussion."
] | {
"criticism": 6,
"example": 1,
"importance_and_relevance": 3,
"materials_and_methods": 11,
"praise": 3,
"presentation_and_reporting": 4,
"results_and_discussion": 6,
"suggestion_and_solution": 3,
"total": 17
} | 0.352941 | 0.058824 | 0.176471 | 0.647059 | 0.176471 | 0.235294 | 0.352941 | 0.176471 | 17 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.176471 | 1.923952 | 0.252519 |
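The soft-versus-hard constraint question both reviewers of this paper probe can also be phrased as a penalty added to the training loss. The formulation below — lam * ||W^T W - I||_F^2 with gradient 4 * lam * W (W^T W - I) — is a generic soft-orthogonality term, not necessarily the paper's exact variant.

```python
import numpy as np

def orthogonality_penalty(W, lam=1.0):
    """Return lam * ||W^T W - I||_F^2 and its gradient w.r.t. W.

    d/dW ||W^T W - I||_F^2 = 4 * W @ (W^T W - I).
    """
    G = W.T @ W - np.eye(W.shape[1])
    return lam * np.sum(G * G), lam * 4.0 * W @ G

# Plain gradient descent on the penalty alone pulls a generic matrix
# toward the orthogonal group (small step size, lam = 1 for the demo).
rng = np.random.default_rng(1)
W = rng.standard_normal((32, 32)) * 0.2
for _ in range(500):
    _, grad = orthogonality_penalty(W)
    W -= 0.05 * grad
penalty, _ = orthogonality_penalty(W)
print(penalty)  # close to zero once W is nearly orthogonal
```

In training one would add this penalty to the task loss with a small lam, which encourages rather than enforces orthogonality — the trade-off the review's point 5 questions.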
Generative Adversarial Parallelization | Generative Adversarial Networks (GAN) have become one of the most studied frameworks for unsupervised learning due to their intuitive formulation. They have also been shown to be capable of generating convincing examples in limited domains, such as low-resolution images. However, they still prove difficult to train in practice and tend to ignore modes of the data generating distribution. Quantitatively capturing effects such as mode coverage and more generally the quality of the generative model still remain elusive. We propose Generative Adversarial Parallelization (GAP), a framework in which many GANs or their variants are trained simultaneously, exchanging their discriminators. This eliminates the tight coupling between a generator and discriminator, leading to improved convergence and improved coverage of modes. We also propose an improved variant of the recently proposed Generative Adversarial Metric and show how it can score individual GANs or their collections under the GAP model. | This paper proposes an extension of the GAN framework known as GAP whereby multiple generators and discriminators are trained in parallel. The generator/discriminator pairing is shuffled according to a periodic schedule. Pros: + The proposed approach is simple and easy to replicate. Cons: - The paper is confusing to read. - The results are suggestive but do not conclusively show a performance win for GAP. The main argument of the paper is that GAP leads to improved convergence and improved coverage of modes. The coverage visualizations are suggestive but there still is not enough evidence to conclude that GAP is in fact improving coverage. And for convergence it is difficult to assess the effect of GAP on the basis of learning curves. The proposed GAM-II metric is circular in that model performance depends on the collection of baselines the model is being compared with. Estimating likelihood via AIS seems to be a promising way to evaluate, as does using the Inception score. Perhaps a more systematic way to determine GAP's effect would be to set up a grid search of hyperparameters and train an equal number of GANs and GAP-GANs for each setting. Then a histogram over final Inception scores or likelihood estimates of the trained models would help to show whether GAP tended to produce better models. Overall the approach seems promising but there are too many open questions regarding the paper in its current form. * Section 2: *Remark that when...* => seems like a to-do. * Section A.1: The proposed metric is not described in adequate detail. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading.
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nGenerative Adversarial Parallelization\nGenerative Adversarial Networks (GAN) have become one of the most studied frameworks for unsupervised learning due to their intuitive formulation. They have also been shown to be capable of generating convincing examples in limited domains, such as low-resolution images. However, they still prove difficult to train in practice and tend to ignore modes of the data generating distribution. Quantitatively capturing effects such as mode coverage and more generally the quality of the generative model still remain elusive. We propose Generative Adversarial Parallelization (GAP), a framework in which many GANs or their variants are trained simultaneously, exchanging their discriminators. This eliminates the tight coupling between a generator and discriminator, leading to improved convergence and improved coverage of modes. We also propose an improved variant of the recently proposed Generative Adversarial Metric and show how it can score individual GANs or their collections under the GAP model.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes an extension of the GAN framework known as GAP whereby multiple generators and discriminators are trained in parallel. The generator/discriminator pairing is shuffled according to a periodic schedule. Pros: + The proposed approach is simple and easy to replicate. Cons: - The paper is confusing to read. - The results are suggestive but do not conclusively show a performance win for GAP. The main argument of the paper is that GAP leads to improved convergence and improved coverage of modes. The coverage visualizations are suggestive but there still is not enough evidence to conclude that GAP is in fact improving coverage. And for convergence it is difficult to assess the effect of GAP on the basis of learning curves. The proposed GAM-II metric is circular in that model performance depends on the collection of baselines the model is being compared with. Estimating likelihood via AIS seems to be a promising way to evaluate, as does using the Inception score. Perhaps a more systematic way to determine GAP*s effect would be to set up a grid search of hyperparameters and train an equal number of GANs and GAP-GANs for each setting. Then a histogram over final Inception scores or likelihood estimates of the trained models would help to show whether GAP tended to produce better models. Overall the approach seems promising but there are too many open questions regarding the paper in its current form. * Section 2: *Remark that when...* => seems like a to-do. * Section A.1: The proposed metric is not described in adequate detail.",
"role": "assistant"
}
]
| [
"This paper proposes an extension of the GAN framework known as GAP whereby multiple generators and discriminators are trained in parallel.",
"The generator/discriminator pairing is shuffled according to a periodic schedule.",
"Pros: + The proposed approach is simple and easy to replicate.",
"Cons: - The paper is confusing to read.",
"- The results are suggestive but do not conclusively show a performance win for GAP.",
"The main argument of the paper is that GAP leads to improved convergence and improved coverage of modes.",
"The coverage visualizations are suggestive but there still is not enough evidence to conclude that GAP is in fact improving coverage.",
"And for convergence it is difficult to assess the effect of GAP on the basis of learning curves.",
"The proposed GAM-II metric is circular in that model performance depends on the collection of baselines the model is being compared with.",
"Estimating likelihood via AIS seems to be a promising way to evaluate, as does using the Inception score.",
"Perhaps a more systematic way to determine GAP*s effect would be to set up a grid search of hyperparameters and train an equal number of GANs and GAP-GANs for each setting.",
"Then a histogram over final Inception scores or likelihood estimates of the trained models would help to show whether GAP tended to produce better models.",
"Overall the approach seems promising but there are too many open questions regarding the paper in its current form.",
"* Section 2: *Remark that when...* => seems like a to-do.",
"* Section A.1: The proposed metric is not described in adequate detail."
] | {
"criticism": 6,
"example": 1,
"importance_and_relevance": 2,
"materials_and_methods": 9,
"praise": 3,
"presentation_and_reporting": 3,
"results_and_discussion": 6,
"suggestion_and_solution": 3,
"total": 15
} | 0.4 | 0.066667 | 0.133333 | 0.6 | 0.2 | 0.2 | 0.4 | 0.2 | 15 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.2 | 2.13687 | 0.06313 |
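The review above suggests a concrete protocol: train matched pools of GANs and GAP-GANs over the same hyperparameter grid, then compare the distributions of their final scores rather than single runs. A minimal Python sketch of that loop follows; `train_gan` and `inception_score` are hypothetical placeholders standing in for a real training and evaluation pipeline, not anything from the paper under review.

```python
# Sketch of the grid-search evaluation protocol suggested in the review.
import itertools
import random

def train_gan(lr: float, batch_size: int, parallel: bool) -> object:
    """Placeholder: train a GAN (or a GAP-GAN when parallel=True)."""
    return ("gap" if parallel else "gan", lr, batch_size)

def inception_score(model: object) -> float:
    """Placeholder: Inception score of samples drawn from `model`."""
    return random.gauss(6.0, 0.5)  # stand-in for a real evaluation

grid = itertools.product([1e-4, 2e-4, 5e-4], [64, 128])
scores = {"gan": [], "gap": []}
for lr, bs in grid:
    scores["gan"].append(inception_score(train_gan(lr, bs, parallel=False)))
    scores["gap"].append(inception_score(train_gan(lr, bs, parallel=True)))

# A histogram (or summary statistics) over each pool shows whether GAP
# tends to produce better models, robust to hyperparameter luck.
for name, vals in scores.items():
    print(name, min(vals), sum(vals) / len(vals), max(vals))
```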
Higher Order Recurrent Neural Networks | In this paper, we study novel neural network structures to better model long term dependency in sequential data. We propose to use more memory units to keep track of more preceding states in recurrent neural networks (RNNs), which are all recurrently fed to the hidden layers as feedback through different weighted paths. By extending the popular recurrent structure in RNNs, we provide the models with better short-term memory mechanism to learn long term dependency in sequences. Analogous to digital filters in signal processing, we call these structures as higher order RNNs (HORNNs). Similar to RNNs, HORNNs can also be learned using the back-propagation through time method. HORNNs are generally applicable to a variety of sequence modelling tasks. In this work, we have examined HORNNs for the language modeling task using two popular data sets, namely the Penn Treebank (PTB) and English text8. Experimental results have shown that the proposed HORNNs yield the state-of-the-art performance on both data sets, significantly outperforming the regular RNNs as well as the popular LSTMs. | I think the backbone of the paper is interesting and could lead to something potentially quite useful. I like the idea of connecting signal processing with recurrent networks and then using tools from one setting in the other. However, while the work has nuggets of very interesting observations, I feel they can be put together in a better way. I think the writeup and everything can be improved and I urge the authors to strive for this if the paper doesn't go through. I think some of the ideas of how to connect to the past are interesting; it would be nice to have more experiments or to try to understand better why these connections help and how. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nHigher Order Recurrent Neural Networks\nIn this paper, we study novel neural network structures to better model long term dependency in sequential data. We propose to use more memory units to keep track of more preceding states in recurrent neural networks (RNNs), which are all recurrently fed to the hidden layers as feedback through different weighted paths. By extending the popular recurrent structure in RNNs, we provide the models with better short-term memory mechanism to learn long term dependency in sequences. Analogous to digital filters in signal processing, we call these structures as higher order RNNs (HORNNs). Similar to RNNs, HORNNs can also be learned using the back-propagation through time method. HORNNs are generally applicable to a variety of sequence modelling tasks. In this work, we have examined HORNNs for the language modeling task using two popular data sets, namely the Penn Treebank (PTB) and English text8. Experimental results have shown that the proposed HORNNs yield the state-of-the-art performance on both data sets, significantly outperforming the regular RNNs as well as the popular LSTMs.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nI think the backbone of the paper is interesting and could lead to something potentially quite useful. I like the idea of connecting signal processing with recurrent network and then using tools from one setting in the other. However, while the work has nuggets of very interesting observations, I feel they can be put together in a better way. I think the writeup and everything can be improved and I urge the authors to strive for this if the paper doesn*t go through. I think some of the ideas of how to connect to the past are interesting, it would be nice to have more experiments or to try to understand better why this connections help and how.",
"role": "assistant"
}
]
| [
"I think the backbone of the paper is interesting and could lead to something potentially quite useful.",
"I like the idea of connecting signal processing with recurrent network and then using tools from one setting in the other.",
"However, while the work has nuggets of very interesting observations, I feel they can be put together in a better way.",
"I think the writeup and everything can be improved and I urge the authors to strive for this if the paper doesn*t go through.",
"I think some of the ideas of how to connect to the past are interesting, it would be nice to have more experiments or to try to understand better why this connections help and how."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 2,
"praise": 3,
"presentation_and_reporting": 0,
"results_and_discussion": 0,
"suggestion_and_solution": 3,
"total": 5
} | 0 | 0 | 0.6 | 0.4 | 0.6 | 0 | 0 | 0.6 | 5 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 2.2 | 1.189924 | 1.010076 |
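The higher-order recurrence described in the abstract above — feeding the K preceding hidden states back through separate weighted paths, like the taps of a digital filter — amounts to a small change in the standard RNN update. A minimal numpy sketch, with illustrative sizes and random weights that are not the paper's settings:

```python
# h_t = tanh(W x_t + sum_k U_k h_{t-k}): an order-K recurrent update.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, K, T = 8, 16, 3, 10          # input dim, hidden dim, order, steps
W = rng.normal(scale=0.1, size=(d_h, d_in))
U = [rng.normal(scale=0.1, size=(d_h, d_h)) for _ in range(K)]
x = rng.normal(size=(T, d_in))

history = [np.zeros(d_h) for _ in range(K)]  # h_{t-1}, ..., h_{t-K}
for t in range(T):
    feedback = sum(U[k] @ history[k] for k in range(K))
    h = np.tanh(W @ x[t] + feedback)
    history = [h] + history[:-1]             # shift the short-term memory
print(history[0].shape)                      # current hidden state, (d_h,)
```

With K = 1 this reduces to a plain RNN; the extra weighted paths are what give the model its longer short-term memory.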
Introducing Active Learning for CNN under the light of Variational Inference | One main concern of the deep learning community is to increase the capacity of representation of deep networks by increasing their depth. This requires to scale up the size of the training database accordingly. Indeed a major intuition lies in the fact that the depth of the network and the size of the training set are strongly correlated. However recent works tend to show that deep learning may be handled with smaller dataset as long as the training samples are carefully selected (let us mention for instance curriculum learning). In this context we introduce a scalable and efficient active learning method that can be applied to most neural networks, especially Convolutional Neural Networks (CNN). To the best of our knowledge, this paper is the first of its kind to design an active learning selection scheme based on a variational inference for neural networks. We also deduced a formulation of the posterior and prior distributions of the weights using statistical knowledge on the Maximum Likelihood Estimator. We describe our strategy to come up with our active learning criterion. We assess its consistency by checking the accuracy obtained by successive active learning steps on two benchmark datasets MNIST and USPS. We also demonstrate its scalability towards increasing training set size. | The paper proposes to perform active learning using pool selection of deep learning mini-batches using an approximation of the bayesian posterior. Several terms are in turn approximated. The Maximum Likelihood Estimation (MLE) bayesian inference approach to active learning, the various approximations, and more generally the theoretical framework is very interesting but difficult to follow. The paper is written in poor English and is sometimes a bit painful to read. Alternative Active learning strategies and techniques do not need to be described with such detail. On the other hand, the proposed approach has a lot of complex approximations which would benefit from a more detailed/structured presentation. Another dataset would be a big plus (both datasets concern gray digits and USPS and are arguably somewhat similar). | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nIntroducing Active Learning for CNN under the light of Variational Inference\nOne main concern of the deep learning community is to increase the capacity of representation of deep networks by increasing their depth. This requires to scale up the size of the training database accordingly. Indeed a major intuition lies in the fact that the depth of the network and the size of the training set are strongly correlated. However recent works tend to show that deep learning may be handled with smaller dataset as long as the training samples are carefully selected (let us mention for instance curriculum learning). In this context we introduce a scalable and efficient active learning method that can be applied to most neural networks, especially Convolutional Neural Networks (CNN). To the best of our knowledge, this paper is the first of its kind to design an active learning selection scheme based on a variational inference for neural networks. We also deduced a formulation of the posterior and prior distributions of the weights using statistical knowledge on the Maximum Likelihood Estimator. We describe our strategy to come up with our active learning criterion. We assess its consistency by checking the accuracy obtained by successive active learning steps on two benchmark datasets MNIST and USPS. We also demonstrate its scalability towards increasing training set size.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes to perform active learning using pool selection of deep learning mini-batches using an approximation of the bayesian posterior. Several terms are in turn approximated. The Maximum Likelihood Estimation (MLE) bayesian inference approach to active learning, the various approximations, and more generally the theoretical framework is very interesting but difficult to follow. The paper is written in poor English and is sometimes a bit painful to read. Alternative Active learning strategies and techniques do not need to be described with such detail. On the other hand, the proposed approach has a lot of complex approximations which would benefit from a more detailed/structured presentation. Another dataset would be a big plus (both datasets concern gray digits and USPS and are arguably somewhat similar).",
"role": "assistant"
}
]
| [
"The paper proposes to perform active learning using pool selection of deep learning mini-batches using an approximation of the bayesian posterior.",
"Several terms are in turn approximated.",
"The Maximum Likelihood Estimation (MLE) bayesian inference approach to active learning, the various approximations, and more generally the theoretical framework is very interesting but difficult to follow.",
"The paper is written in poor English and is sometimes a bit painful to read.",
"Alternative Active learning strategies and techniques do not need to be described with such detail.",
"On the other hand, the proposed approach has a lot of complex approximations which would benefit from a more detailed/structured presentation.",
"Another dataset would be a big plus (both datasets concern gray digits and USPS and are arguably somewhat similar)."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 5,
"praise": 1,
"presentation_and_reporting": 5,
"results_and_discussion": 2,
"suggestion_and_solution": 3,
"total": 7
} | 0.285714 | 0 | 0.142857 | 0.714286 | 0.142857 | 0.714286 | 0.285714 | 0.428571 | 7 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.714286 | 2.146118 | 0.568167 |
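For context on where a selection criterion like the paper's plugs in, here is a generic pool-based active-learning step sketched in numpy: score every unlabeled sample by predictive entropy and query the most uncertain batch. `predict_proba` is a hypothetical stand-in for the current CNN, and the paper's variational criterion would replace the simple entropy score used here.

```python
# One pool-based active-learning step: query the most uncertain samples.
import numpy as np

def predict_proba(pool: np.ndarray) -> np.ndarray:
    """Placeholder for class probabilities from the current model."""
    rng = np.random.default_rng(0)
    p = rng.random((len(pool), 10))
    return p / p.sum(axis=1, keepdims=True)

pool = np.zeros((1000, 28 * 28))             # unlabeled MNIST-like pool
probs = predict_proba(pool)
entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
query = np.argsort(entropy)[-64:]            # indices to send for labeling
print(query[:5])
```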
An Information Retrieval Approach for Finding Dependent Subspaces of Multiple Views | Finding relationships between multiple views of data is essential both in exploratory analysis and as pre-processing for predictive tasks. A prominent approach is to apply variants of Canonical Correlation Analysis (CCA), a classical method seeking correlated components between views. The basic CCA is restricted to maximizing a simple dependency criterion, correlation, measured directly between data coordinates. We introduce a new method that finds dependent subspaces of views directly optimized for the data analysis task of neighbor retrieval between multiple views. We optimize mappings for each view such as linear transformations to maximize cross-view similarity between neighborhoods of data samples. The criterion arises directly from the well-defined retrieval task, detects nonlinear and local similarities, measures dependency of data relationships rather than only individual data coordinates, and is related to well understood measures of information retrieval quality. In experiments the proposed method outperforms alternatives in preserving cross-view neighborhood similarities, and yields insights into local dependencies between multiple views. | This paper proposes a multiview learning approach to finding dependent subspaces optimized for maximizing cross-view similarity between neighborhoods of data samples. The motivation comes from information retrieval tasks. Authors position their work as an alternative to CCA-based multiview learning; note, however, that CCA-based techniques have a very different purpose and are rather more broadly applicable than the setting considered here. Main points: - I am not sure what authors mean by time complexity. It would appear that they simply report the computational cost of evaluating the objective in equation (7). Is there a sense of how many iterations of the L-BFGS method? Since that is going to be difficult given the nature of the optimization problem, one would appreciate some sense of how hard or easy it is in practice to optimize the objective in (7) and how that varies with various problem dimensions. Authors argue that scalability is not their first concern, which is understandable, but if they are going to make some remarks about the computational cost, it better be clarified that the reported cost is for some small part of their overall approach rather than “time complexity”. - Since authors position their approach as an alternative to CCA, they should remark about how CCA, even though a nonconvex optimization problem, can be solved exactly with computational cost that is linear in the data size and only quadratic with dimensionality even with a naive implementation. The method proposed in the paper does not seem to be tractable, at least not immediately. - The empirical results with synthetic data are a bit confusing. First of all the data generation procedure is quite convoluted, I am not sure why we need to process each coordinate separately in different groups, and then permute and combine etc. A simple benchmark where we take different linear transformations of a shared representation and add independent noise would suffice to confirm that the proposed method does something reasonable. I am also baffled why CCA does not recover the true subspace - arguably it is the level of additive noise that would impact the recoverability - however the proposed method is nearly exact so the noise level is perhaps not so severe.
It is also not clear if authors are using regularization with CCA - without regularization CCA can behave in a funny manner. This needs to be clarified. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAn Information Retrieval Approach for Finding Dependent Subspaces of Multiple Views\nFinding relationships between multiple views of data is essential both in exploratory analysis and as pre-processing for predictive tasks. A prominent approach is to apply variants of Canonical Correlation Analysis (CCA), a classical method seeking correlated components between views. The basic CCA is restricted to maximizing a simple dependency criterion, correlation, measured directly between data coordinates. We introduce a new method that finds dependent subspaces of views directly optimized for the data analysis task of neighbor retrieval between multiple views. We optimize mappings for each view such as linear transformations to maximize cross-view similarity between neighborhoods of data samples. The criterion arises directly from the well-defined retrieval task, detects nonlinear and local similarities, measures dependency of data relationships rather than only individual data coordinates, and is related to well understood measures of information retrieval quality. In experiments the proposed method outperforms alternatives in preserving cross-view neighborhood similarities, and yields insights into local dependencies between multiple views.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a multiview learning approach to finding dependent subspaces optimized for maximizing cross-view similarity between neighborhoods of data samples. The motivation comes from information retrieval tasks. Authors position their work as an alternative to CCA-based multiview learning; note, however, that CCA based techniques have very different purpose and are rather broadly applicable than the setting considered here. Main points: - I am not sure what authors mean by time complexity. It would appear that they simply report the computational cost of evaluating the objective in equation (7). Is there a sense of how many iterations of the L-BFGS method? Since that is going to be difficult given the nature of the optimization problem, one would appreciate some sense of how hard or easy it is in practice to optimize the objective in (7) and how that varies with various problem dimensions. Authors argue that scalability is not their first concern, which is understandable, but if they are going to make some remarks about the computational cost, it better be clarified that the reported cost is for some small part of their overall approach rather than “time complexity”. - Since authors position their approach as an alternative to CCA, they should remark about how CCA, even though a nonconvex optimization problem, can be solved exactly with computational cost that is linear in the data size and only quadratic with dimensionality even with a naive implementation. The method proposed in the paper does not seem to be tractable, at least not immediately. - The empirical results with synthetic data are a it confusing. First of all the data generation procedure is quite convoluted, I am not sure why we need to process each coordinate separately in different groups, and then permute and combine etc. A simple benchmark where we take different linear transformations of a shared representation and add independent noise would suffice to confirm that the proposed method does something reasonable. 
I am also baffled why CCA does not recover the true subspace - arguably it is the level of additive noise that would impact the recoverability - however the proposed method is nearly exact so the noise level is perhaps not so severe. It is also not clear if authors are using regularization with CCA - without regularization CCA can be have in a funny manner. This needs to be clarified.",
"role": "assistant"
}
]
| [
"This paper proposes a multiview learning approach to finding dependent subspaces optimized for maximizing cross-view similarity between neighborhoods of data samples.",
"The motivation comes from information retrieval tasks.",
"Authors position their work as an alternative to CCA-based multiview learning; note, however, that CCA based techniques have very different purpose and are rather broadly applicable than the setting considered here.",
"Main points: - I am not sure what authors mean by time complexity.",
"It would appear that they simply report the computational cost of evaluating the objective in equation (7).",
"Is there a sense of how many iterations of the L-BFGS method?",
"Since that is going to be difficult given the nature of the optimization problem, one would appreciate some sense of how hard or easy it is in practice to optimize the objective in (7) and how that varies with various problem dimensions.",
"Authors argue that scalability is not their first concern, which is understandable, but if they are going to make some remarks about the computational cost, it better be clarified that the reported cost is for some small part of their overall approach rather than “time complexity”.",
"- Since authors position their approach as an alternative to CCA, they should remark about how CCA, even though a nonconvex optimization problem, can be solved exactly with computational cost that is linear in the data size and only quadratic with dimensionality even with a naive implementation.",
"The method proposed in the paper does not seem to be tractable, at least not immediately.",
"- The empirical results with synthetic data are a it confusing.",
"First of all the data generation procedure is quite convoluted, I am not sure why we need to process each coordinate separately in different groups, and then permute and combine etc.",
"A simple benchmark where we take different linear transformations of a shared representation and add independent noise would suffice to confirm that the proposed method does something reasonable.",
"I am also baffled why CCA does not recover the true subspace - arguably it is the level of additive noise that would impact the recoverability - however the proposed method is nearly exact so the noise level is perhaps not so severe.",
"It is also not clear if authors are using regularization with CCA - without regularization CCA can be have in a funny manner.",
"This needs to be clarified."
] | {
"criticism": 7,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 13,
"praise": 0,
"presentation_and_reporting": 4,
"results_and_discussion": 7,
"suggestion_and_solution": 6,
"total": 16
} | 0.4375 | 0 | 0.125 | 0.8125 | 0 | 0.25 | 0.4375 | 0.375 | 16 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 2.4375 | 2.295458 | 0.142042 |
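The review's point that CCA, despite being nonconvex, is solved exactly in closed form can be made concrete. Below is a minimal regularized numpy sketch: whiten each view's covariance with a Cholesky factor, then take the SVD of the whitened cross-covariance; the singular values are the canonical correlations. The `reg` term relates to the review's other remark — with `reg = 0` and ill-conditioned covariances, CCA can indeed behave badly.

```python
# Exact (regularized) CCA via Cholesky whitening + SVD.
import numpy as np

def cca(X: np.ndarray, Y: np.ndarray, k: int, reg: float = 1e-3):
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = len(X)
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))  # whitener: Wx Cxx Wx^T = I
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy.T)
    A = Wx.T @ U[:, :k]                          # projection for view X
    B = Wy.T @ Vt[:k].T                          # projection for view Y
    return A, B, s[:k]                           # s: canonical correlations

rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 2))                    # shared latent signal
X = Z @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(500, 5))
Y = Z @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(500, 6))
A, B, corr = cca(X, Y, k=2)
print(corr)                                      # near 1 by construction
```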
CAN AI GENERATE LOVE ADVICE?: TOWARD NEURAL ANSWER GENERATION FOR NON-FACTOID QUESTIONS | Deep learning methods that extract answers for non-factoid questions from QA sites are seen as critical since they can assist users in reaching their next decisions through conversations with AI systems. The current methods, however, have the following two problems: (1) They can not understand the ambiguous use of words in the questions as word usage can strongly depend on the context (e.g. the word “relationship” has quite different meanings in the categories of Love advice and other categories). As a result, the accuracies of their answer selections are not good enough. (2) The current methods can only select from among the answers held by QA sites and can not generate new ones. Thus, they can not answer the questions that are somewhat different with those stored in QA sites. Our solution, Neural Answer Construction Model, tackles these problems as it: (1) Incorporates the biases of semantics behind questions (e.g. categories assigned to questions) into word embeddings while also computing them regardless of the semantics. As a result, it can extract answers that suit the contexts of words used in the question as well as following the common usage of words across semantics. This improves the accuracy of answer selection. (2) Uses biLSTM to compute the embeddings of questions as well as those of the sentences often used to form answers (e.g. sentences representing conclusions or those supplementing the conclusions). It then simultaneously learns the optimum combination of those sentences as well as the closeness between the question and those sentences. As a result, our model can construct an answer that corresponds to the situation that underlies the question; it fills the gap between answer selection and generation and is the first model to move beyond the current simple answer selection model for non-factoid QAs. Evaluations using datasets created for love advice stored in the Japanese QA site, Oshiete goo, indicate that our model achieves 20 % higher accuracy in answer creation than the strong baselines. Our model is practical and has already been applied to the love advice service in Oshiete goo. | This paper proposes a neural architecture for answering non-factoid questions. The author's model improves over previous neural models for answer sentence selection. Experiments are conducted on a Japanese love advice corpus; the coolest part of the paper for me was that the model was actually rolled out to the public and its answers were rated twice as good as actual human contributors! It was hard for me to determine the novelty of the contribution. The authors mention that their model “fills the gap between answer selection and generation”; however, no generation is actually performed by the model! Instead, the model appears to be very similar to the QA-LSTM of Tan et al., 2015 except that there are additional terms in the objective to handle conclusion and supplementary sentences. The structure of the answer is fixed to a predefined template (e.g., conclusion --> supplementary), so the model is not really learning how to order the sentences. The other contribution is the “word embedding with semantics” portion described in sec 4.1, which is essentially just the paragraph vector model except with “titles” and “categories” instead of paragraphs.
While the result of the paper is a model that has actually demonstrated real-life usefulness, the technical contributions do not strike me as novel enough for publication at ICLR. Other comments: - One major issue with the reliance of the model on the template is that you can't evaluate on commonly-used non-factoid QA datasets such as InsuranceQA. If the template were not fixed beforehand (but possibly learned by the model), you could conceivably evaluate on different datasets. - The examples in Table 4 don't show a clear edge in answer quality to your model; QA-LSTM seems to choose good answers as well. - Doesn't the construction model have an advantage over the vanilla QA-LSTM in that it knows which sentences are conclusions and which are supplementary? Or does QA-LSTM also get this distinction? | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nCAN AI GENERATE LOVE ADVICE?: TOWARD NEURAL ANSWER GENERATION FOR NON-FACTOID QUESTIONS\nDeep learning methods that extract answers for non-factoid questions from QA sites are seen as critical since they can assist users in reaching their next decisions through conversations with AI systems. The current methods, however, have the following two problems: (1) They can not understand the ambiguous use of words in the questions as word usage can strongly depend on the context (e.g. the word “relationship” has quite different meanings in the categories of Love advice and other categories). As a result, the accuracies of their answer selections are not good enough. (2) The current methods can only select from among the answers held by QA sites and can not generate new ones. Thus, they can not answer the questions that are somewhat different with those stored in QA sites. Our solution, Neural Answer Construction Model, tackles these problems as it: (1) Incorporates the biases of semantics behind questions (e.g. categories assigned to questions) into word embeddings while also computing them regardless of the semantics. As a result, it can extract answers that suit the contexts of words used in the question as well as following the common usage of words across semantics. This improves the accuracy of answer selection. (2) Uses biLSTM to compute the embeddings of questions as well as those of the sentences often used to form answers (e.g. sentences representing conclusions or those supplementing the conclusions). It then simultaneously learns the optimum combination of those sentences as well as the closeness between the question and those sentences. As a result, our model can construct an answer that corresponds to the situation that underlies the question; it fills the gap between answer selection and generation and is the first model to move beyond the current simple answer selection model for non-factoid QAs. Evaluations using datasets created for love advice stored in the Japanese QA site, Oshiete goo, indicate that our model achieves 20 % higher accuracy in answer creation than the strong baselines. Our model is practical and has already been applied to the love advice service in Oshiete goo.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a neural architecture for answering non-factoid questions. The author*s model improves over previous neural models for answer sentence selection. Experiments are conducted on a Japanese love advice corpus; the coolest part of the paper for me was that the model was actually rolled out to the public and its answers were rated twice as good as actual human contributors! It was hard for me to determine the novelty of the contribution. The authors mention that their model *fills the gap between answer selection and generation*; however, no generation is actually performed by the model! Instead, the model appears to be very similar to the QA-LSTM of Tan et al., 2015 except that there are additional terms in the objective to handle conclusion and supplementary sentences. The structure of the answer is fixed to a predefined template (e.g., conclusion --> supplementary), so the model is not really learning how to order the sentences. The other contribution is the *word embedding with semantics* portion described in sec 4.1, which is essentially just the paragraph vector model except with *titles* and *categories* instead of paragraphs. While the result of the paper is a model that has actually demonstrated real-life usefulness, the technical contributions do not strike me as novel enough for publication at ICLR. Other comments: - One major issue with the reliance of the model on the template is that you can*t evaluate on commonly-used non-factoid QA datasets such as InsuranceQA. If the template were not fixed beforehand (but possibly learned by the model), you could conceivably evaluate on different datasets. - The examples in Table 4 don*t show a clear edge in answer quality to your model; QA-LSTM seems to choose good answers as well. - Doesn*t the construction model have an advantage over the vanilla QA-LSTM in that it knows which sentences are conclusions and which are supplementary? Or does QA-LSTM also get this distinction?",
"role": "assistant"
}
]
| [
"This paper proposes a neural architecture for answering non-factoid questions.",
"The author*s model improves over previous neural models for answer sentence selection.",
"Experiments are conducted on a Japanese love advice corpus; the coolest part of the paper for me was that the model was actually rolled out to the public and its answers were rated twice as good as actual human contributors!",
"It was hard for me to determine the novelty of the contribution.",
"The authors mention that their model *fills the gap between answer selection and generation*; however, no generation is actually performed by the model!",
"Instead, the model appears to be very similar to the QA-LSTM of Tan et al., 2015 except that there are additional terms in the objective to handle conclusion and supplementary sentences.",
"The structure of the answer is fixed to a predefined template (e.g., conclusion --> supplementary), so the model is not really learning how to order the sentences.",
"The other contribution is the *word embedding with semantics* portion described in sec 4.1, which is essentially just the paragraph vector model except with *titles* and *categories* instead of paragraphs.",
"While the result of the paper is a model that has actually demonstrated real-life usefulness, the technical contributions do not strike me as novel enough for publication at ICLR.",
"Other comments: - One major issue with the reliance of the model on the template is that you can*t evaluate on commonly-used non-factoid QA datasets such as InsuranceQA.",
"If the template were not fixed beforehand (but possibly learned by the model), you could conceivably evaluate on different datasets.",
"- The examples in Table 4 don*t show a clear edge in answer quality to your model; QA-LSTM seems to choose good answers as well.",
"- Doesn*t the construction model have an advantage over the vanilla QA-LSTM in that it knows which sentences are conclusions and which are supplementary?",
"Or does QA-LSTM also get this distinction?"
] | {
"criticism": 5,
"example": 2,
"importance_and_relevance": 1,
"materials_and_methods": 12,
"praise": 1,
"presentation_and_reporting": 5,
"results_and_discussion": 4,
"suggestion_and_solution": 1,
"total": 14
} | 0.357143 | 0.142857 | 0.071429 | 0.857143 | 0.071429 | 0.357143 | 0.285714 | 0.071429 | 14 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.214286 | 2.198503 | 0.015782 |
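The answer-selection step that both QA-LSTM and the proposed model share — embed the question and each candidate sentence, then rank candidates by similarity — looks roughly like the sketch below. Mean-pooled random word vectors stand in for the biLSTM encoder, and all names and data here are illustrative.

```python
# Rank candidate answer sentences by cosine similarity to the question.
import numpy as np

rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=32) for w in
         "how do i talk to someone i like be honest and ask directly".split()}

def embed(text: str) -> np.ndarray:
    """Stand-in for a biLSTM sentence encoder: mean of word vectors."""
    vecs = [vocab[w] for w in text.split() if w in vocab]
    return np.mean(vecs, axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

question = "how do i talk to someone i like"
candidates = ["be honest and ask directly", "i like to talk"]
ranked = sorted(candidates,
                key=lambda c: cosine(embed(question), embed(c)),
                reverse=True)
print(ranked)  # candidates ordered by similarity to the question
```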
Generating Long and Diverse Responses with Neural Conversation Models | Building general-purpose conversation agents is a very challenging task, but necessary on the road toward intelligent agents that can interact with humans in natural language. Neural conversation models -- purely data-driven systems trained end-to-end on dialogue corpora -- have shown great promise recently, yet they often produce short and generic responses. This work presents new training and decoding methods that improve the quality, coherence, and diversity of long responses generated using sequence-to-sequence models. Our approach adds self-attention to the decoder to maintain coherence in longer responses, and we propose a practical approach, called the glimpse-model, for scaling to large datasets. We introduce a stochastic beam-search algorithm with segment-by-segment reranking which lets us inject diversity earlier in the generation process. We trained on a combined data set of over 2.3B conversation messages mined from the web. In human evaluation studies, our method produces longer responses overall, with a higher proportion rated as acceptable and excellent as length increases, compared to baseline sequence-to-sequence models with explicit length-promotion. A back-off strategy produces better responses overall, in the full spectrum of lengths. | This paper considers the problem of generating long and diverse responses in dialog systems. Two techniques are proposed to the seq-to-seq framework: (1) glimpse model that trains on fixed-length segments of the target side at a time, and (2) a segment-based stochastic decoding technique which injects diversity earlier in the generated responses. The large scale experiments on 2.3B conversation messages are quite impressive. Experiments on human evaluation should also be encouraged. With all this said, I am still not 100% convinced that machine-generated long sequences are the right direction for dialog systems. As shown in Figure 3 (a), human evaluation shows that the proposed system is not significantly better than the baselines. I think more analysis and user preference mining should be done in the future to help us understand the nature of this problem. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical.
This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nGenerating Long and Diverse Responses with Neural Conversation Models\nBuilding general-purpose conversation agents is a very challenging task, but necessary on the road toward intelligent agents that can interact with humans in natural language. Neural conversation models -- purely data-driven systems trained end-to-end on dialogue corpora -- have shown great promise recently, yet they often produce short and generic responses. This work presents new training and decoding methods that improve the quality, coherence, and diversity of long responses generated using sequence-to-sequence models. Our approach adds self-attention to the decoder to maintain coherence in longer responses, and we propose a practical approach, called the glimpse-model, for scaling to large datasets. We introduce a stochastic beam-search algorithm with segment-by-segment reranking which lets us inject diversity earlier in the generation process. We trained on a combined data set of over 2.3B conversation messages mined from the web. In human evaluation studies, our method produces longer responses overall, with a higher proportion rated as acceptable and excellent as length increases, compared to baseline sequence-to-sequence models with explicit length-promotion. A back-off strategy produces better responses overall, in the full spectrum of lengths.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper considers the problem of generating long and diverse responses in dialog systems. Two techniques are proposed to the seq-to-seq framework: (1) glimpse model that trains on fixed-length segments of the target side at a time, and (2) a segment-based stochastic decoding technique which injects diversity earlier in the generated responses. The large scale experiments on 2.3B conversation messages are quite impressive. Experiments on human evaluation should also be encouraged. With all these said, I am still not 100% convinced that machine generated long sequence is the right direction for dialog systems. As shown in Figure 3 (a), human evaluation shows that the proposed system is not significantly better than the baselines. I think more analysis and user preference mining should be done in the future to help us understand the nature of this problem.",
"role": "assistant"
}
]
| [
"This paper considers the problem of generating long and diverse responses in dialog systems.",
"Two techniques are proposed to the seq-to-seq framework: (1) glimpse model that trains on fixed-length segments of the target side at a time, and (2) a segment-based stochastic decoding technique which injects diversity earlier in the generated responses.",
"The large scale experiments on 2.3B conversation messages are quite impressive.",
"Experiments on human evaluation should also be encouraged.",
"With all these said, I am still not 100% convinced that machine generated long sequence is the right direction for dialog systems.",
"As shown in Figure 3 (a), human evaluation shows that the proposed system is not significantly better than the baselines.",
"I think more analysis and user preference mining should be done in the future to help us understand the nature of this problem."
] | {
"criticism": 2,
"example": 1,
"importance_and_relevance": 2,
"materials_and_methods": 6,
"praise": 1,
"presentation_and_reporting": 0,
"results_and_discussion": 1,
"suggestion_and_solution": 2,
"total": 7
} | 0.285714 | 0.142857 | 0.285714 | 0.857143 | 0.142857 | 0 | 0.142857 | 0.285714 | 7 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 2.142857 | 1.57469 | 0.568167 |
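The segment-by-segment reranking idea from the abstract above can be caricatured in a few lines: at each step, sample several candidate segments and keep the one that scores well while overlapping little with what has been generated so far. The sampler and scorer below are random placeholders for a real sequence-to-sequence model — only the control flow is the point.

```python
# Toy segment-by-segment decoding with diversity-aware reranking.
import random

random.seed(0)
SEGMENTS = ["i think", "that sounds", "really great", "not sure", "about that"]

def sample_segment() -> str:
    return random.choice(SEGMENTS)        # placeholder stochastic sampler

def model_score(prefix: str, seg: str) -> float:
    return random.random()                # placeholder log-probability

def overlap(a: str, b: str) -> float:
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / max(len(wa | wb), 1)

response = ""
for _ in range(3):                        # build the reply 3 segments long
    cands = [sample_segment() for _ in range(5)]
    best = max(cands, key=lambda s: model_score(response, s)
                                    - 0.5 * overlap(response, s))
    response = (response + " " + best).strip()
print(response)
```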
Understanding intermediate layers using linear classifier probes | Neural network models have a reputation for being black boxes. We propose a new method to better understand the roles and dynamics of the intermediate layers. This has direct consequences on the design of such models and it enables the expert to be able to justify certain heuristics (such as adding auxiliary losses in middle layers). Our method uses linear classifiers, referred to as ``probes'', where a probe can only use the hidden units of a given intermediate layer as discriminating features. Moreover, these probes cannot affect the training phase of a model, and they are generally added after training. They allow the user to visualize the state of the model at multiple steps of training. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems. | This paper proposes a method that attempts to “understand” what is happening within a neural network by using linear classifier probes which are inserted at various levels of the network. I think the idea is nice overall because it allows network designers to better understand the representational power of each layer in the network, but at the same time, this work feels a bit rushed. In particular, the fact that the authors did not provide any results in “real” networks, which are used to win competitions, makes the results less strong, since researchers who want to create competitive network architectures don't have enough evidence from this work to decide whether they should use it or not. Ideally, I would encourage the authors to consider continuing this line of research and show how to use the information given by these linear classifiers to construct better network architectures. Unfortunately, as is, I don't think we have enough novelty to justify accepting this work in the conference. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nUnderstanding intermediate layers using linear classifier probes\nNeural network models have a reputation for being black boxes. We propose a new method to better understand the roles and dynamics of the intermediate layers. This has direct consequences on the design of such models and it enables the expert to be able to justify certain heuristics (such as adding auxiliary losses in middle layers). Our method uses linear classifiers, referred to as ``probes**, where a probe can only use the hidden units of a given intermediate layer as discriminating features. Moreover, these probes cannot affect the training phase of a model, and they are generally added after training. They allow the user to visualize the state of the model at multiple steps of training. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a method that attempts to *understand* what is happening within a neural network by using linear classifier probes which are inserted at various levels of the network. I think the idea is nice overall because it allows network designers to better understand the representational power of each layer in the network, but at the same time, this works feels a bit rushed. In particular, the fact that the authors did not provide any results in *real* networks, which are used to win competitions makes the results less strong, since researchers who want to created competitive network architectures don*t have enough evidence from this work to decides whether they should use it or not. Ideally, I would encourage the authors to consider continuing this line of research and show how to use the information given by these linear classifiers to construct better network architectures. Unfortunately, as is, I don*t think we have enough novelty to justify accepting this work in the conference.",
"role": "assistant"
}
]
| [
"This paper proposes a method that attempts to *understand* what is happening within a neural network by using linear classifier probes which are inserted at various levels of the network.",
"I think the idea is nice overall because it allows network designers to better understand the representational power of each layer in the network, but at the same time, this works feels a bit rushed.",
"In particular, the fact that the authors did not provide any results in *real* networks, which are used to win competitions makes the results less strong, since researchers who want to created competitive network architectures don*t have enough evidence from this work to decides whether they should use it or not.",
"Ideally, I would encourage the authors to consider continuing this line of research and show how to use the information given by these linear classifiers to construct better network architectures.",
"Unfortunately, as is, I don*t think we have enough novelty to justify accepting this work in the conference."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 3,
"praise": 1,
"presentation_and_reporting": 0,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 5
} | 0.6 | 0 | 0.4 | 0.6 | 0.2 | 0 | 0.2 | 0.2 | 5 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.2 | 1.189924 | 1.010076 |
Understanding intermediate layers using linear classifier probes | Neural network models have a reputation for being black boxes. We propose a new method to better understand the roles and dynamics of the intermediate layers. This has direct consequences on the design of such models and it enables the expert to be able to justify certain heuristics (such as adding auxiliary losses in middle layers). Our method uses linear classifiers, referred to as 'probes', where a probe can only use the hidden units of a given intermediate layer as discriminating features. Moreover, these probes cannot affect the training phase of a model, and they are generally added after training. They allow the user to visualize the state of the model at multiple steps of training. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems. | This paper proposes to use a linear classifier as the probe for the informativeness of the hidden activations from different neural network layers. The training of the linear classifier does not affect the training of the neural network. The paper is well motivated, investigating how much useful information each layer carries (or how good its representations are). The observations in this paper agree with existing insights, such as: 1) (Fig 5a) too many random layers are harmful. 2) (Fig 5b) training is helpful. 3) (Fig 7) lower layers converge faster than higher layers. 4) (Fig 8) an overly deep network is hard to train, and skip links can remedy this problem. However, this paper has the following problems: 1. It is not sufficiently justified why the linear classifier is a good probe. It is not crystal clear why good intermediate features need to show high linear classification accuracy. More theoretical analysis and/or intuition will be helpful. 2. This paper does not provide much insight on how to design better networks based on the observations. Designing a better network is also the best way to justify the usefulness of the analysis. Overall, this paper is tackling an interesting problem, but the technique (the linear classifier as the probe) is not novel and, more importantly, needs to be better justified. Moreover, it is important to show how to design better neural networks using the observations in this paper. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nUnderstanding intermediate layers using linear classifier probes\nNeural network models have a reputation for being black boxes. We propose a new method to better understand the roles and dynamics of the intermediate layers. This has direct consequences on the design of such models and it enables the expert to be able to justify certain heuristics (such as adding auxiliary losses in middle layers). Our method uses linear classifiers, referred to as ``probes**, where a probe can only use the hidden units of a given intermediate layer as discriminating features. Moreover, these probes cannot affect the training phase of a model, and they are generally added after training. They allow the user to visualize the state of the model at multiple steps of training. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes to use a linear classifier as the probe for the informativeness of the hidden activations from different neural network layers. The training of the linear classifier does not affect the training of the neural network. The paper is well motivated for investigating how much useful information (or how good the representations are) for each layer. The observations in this paper agrees with existing insights, such as, 1) (Fig 5a) too many random layers are harmful. 2) (Fig 5b) training is helpful. 3) (Fig 7) lower layers converge faster than higher layer. 4) (Fig 8) too deep network is hard to train, and skip link can remedy this problem. However, this paper has following problems: 1. It is not sufficiently justified why the linear classifier is a good probe. It is not crystal clear why good intermediate features need to show high linear classification accuracy. More theoretical analysis and/or intuition will be helpful. 2. This paper does not provide much insight on how to design better networks based on the observations. Designing a better network is also the best way to justify the usefulness of the analysis. Overall, this paper is tackling an interesting problem, but the technique (the linear classifier as the probe) is not novel and more importantly need to be better justified. Moreover, it is important to show how to design better neural networks using the observations in this paper.",
"role": "assistant"
}
]
| [
"This paper proposes to use a linear classifier as the probe for the informativeness of the hidden activations from different neural network layers.",
"The training of the linear classifier does not affect the training of the neural network.",
"The paper is well motivated for investigating how much useful information (or how good the representations are) for each layer.",
"The observations in this paper agrees with existing insights, such as, 1) (Fig 5a) too many random layers are harmful.",
"2) (Fig 5b) training is helpful.",
"3) (Fig 7) lower layers converge faster than higher layer.",
"4) (Fig 8) too deep network is hard to train, and skip link can remedy this problem.",
"However, this paper has following problems: 1.",
"It is not sufficiently justified why the linear classifier is a good probe.",
"It is not crystal clear why good intermediate features need to show high linear classification accuracy.",
"More theoretical analysis and/or intuition will be helpful.",
"2.",
"This paper does not provide much insight on how to design better networks based on the observations.",
"Designing a better network is also the best way to justify the usefulness of the analysis.",
"Overall, this paper is tackling an interesting problem, but the technique (the linear classifier as the probe) is not novel and more importantly need to be better justified.",
"Moreover, it is important to show how to design better neural networks using the observations in this paper."
] | {
"criticism": 6,
"example": 4,
"importance_and_relevance": 5,
"materials_and_methods": 10,
"praise": 3,
"presentation_and_reporting": 3,
"results_and_discussion": 2,
"suggestion_and_solution": 4,
"total": 16
} | 0.375 | 0.25 | 0.3125 | 0.625 | 0.1875 | 0.1875 | 0.125 | 0.25 | 16 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.3125 | 2.170458 | 0.142042 |
Understanding intermediate layers using linear classifier probes | Neural network models have a reputation for being black boxes. We propose a new method to better understand the roles and dynamics of the intermediate layers. This has direct consequences on the design of such models and it enables the expert to be able to justify certain heuristics (such as adding auxiliary losses in middle layers). Our method uses linear classifiers, referred to as 'probes', where a probe can only use the hidden units of a given intermediate layer as discriminating features. Moreover, these probes cannot affect the training phase of a model, and they are generally added after training. They allow the user to visualize the state of the model at multiple steps of training. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems. | The authors propose a method to investigate the predictiveness of intermediate layer activations. To do so, they propose training linear classifiers and evaluating the error on the test set. The paper is well motivated and aims to shed some light on the progress of model training and hopes to provide insights into deep learning architecture design. The two main reasons why the authors decided to use linear probes seem to be: (1) convexity, and (2) the last layer in the network is (usually) linear. In the second-to-last paragraph of page 4, the authors point out that it could happen that the intermediate features are useless for a linear classifier. This is correct and what I consider the main flaw of the paper. I am missing any motivation as to the usefulness of the suggested analysis to architecture design. In fact, the example with the skip connection (Figure 8) seems to suggest that skip connections shouldn't be used. Doesn't that contradict the recent successes of ResNet? While the results are interesting, they aren't particularly surprising, and I am failing to see direct applicability to understanding deep models as the authors suggest. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nUnderstanding intermediate layers using linear classifier probes\nNeural network models have a reputation for being black boxes. We propose a new method to better understand the roles and dynamics of the intermediate layers. This has direct consequences on the design of such models and it enables the expert to be able to justify certain heuristics (such as adding auxiliary losses in middle layers). Our method uses linear classifiers, referred to as ``probes**, where a probe can only use the hidden units of a given intermediate layer as discriminating features. Moreover, these probes cannot affect the training phase of a model, and they are generally added after training. They allow the user to visualize the state of the model at multiple steps of training. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors propose a method to investigate the predictiveness of intermediate layer activations. To do so, they propose training linear classifiers and evaluate the error on the test set. The paper is well motivated and aims to shed some light onto the progress of model training and hopes to provide insights into deep learning architecture design. The two main reasons for why the authors decided to use linear probes seem to be: - convexity - The last layer in the network is (usually) linear In the second to last paragraph of page 4 the authors point out that it could happen that the intermediate features are useless for a linear classifier. This is correct and what I consider the main flaw of the paper. I am missing any motivation as to the usefulness of the suggested analysis to architecture design. In fact, the example with the skip connection (Figure 8) seems to suggest that skip connections shouldn*t be used. Doesn*t that contradict the recent successes of ResNet? While the results are interesting, they aren*t particularly surprising and I am failing to see direct applicability to understanding deep models as the authors suggest.",
"role": "assistant"
}
]
| [
"The authors propose a method to investigate the predictiveness of intermediate layer activations.",
"To do so, they propose training linear classifiers and evaluate the error on the test set.",
"The paper is well motivated and aims to shed some light onto the progress of model training and hopes to provide insights into deep learning architecture design.",
"The two main reasons for why the authors decided to use linear probes seem to be: - convexity - The last layer in the network is (usually) linear In the second to last paragraph of page 4 the authors point out that it could happen that the intermediate features are useless for a linear classifier.",
"This is correct and what I consider the main flaw of the paper.",
"I am missing any motivation as to the usefulness of the suggested analysis to architecture design.",
"In fact, the example with the skip connection (Figure 8) seems to suggest that skip connections shouldn*t be used.",
"Doesn*t that contradict the recent successes of ResNet?",
"While the results are interesting, they aren*t particularly surprising and I am failing to see direct applicability to understanding deep models as the authors suggest."
] | {
"criticism": 4,
"example": 2,
"importance_and_relevance": 3,
"materials_and_methods": 6,
"praise": 3,
"presentation_and_reporting": 2,
"results_and_discussion": 2,
"suggestion_and_solution": 2,
"total": 9
} | 0.444444 | 0.222222 | 0.333333 | 0.666667 | 0.333333 | 0.222222 | 0.222222 | 0.222222 | 9 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.666667 | 2.414148 | 0.252519 |
Improving Invariance and Equivariance Properties of Convolutional Neural Networks | Convolutional Neural Networks (CNNs) learn highly discriminative representations from data, but how robust and structured are these representations? How does the data shape the internal network representation? We shed light on these questions by empirically measuring the invariance and equivariance properties of a large number of CNNs trained with various types of input transformations. We find that CNNs learn invariance wrt all 9 tested transformation types and that invariance extends to transformations outside the training range. We also measure the distance between CNN representations and show that similar input transformations lead to more similar internal representations. Transforms can be grouped by the way they affect the learned representation. Additionally, we also propose a loss function that aims to improve CNN equivariance. | This work presents an empirical study of the influence of different types of data augmentation on the performance of CNNs. It also proposes to incorporate additional loss functions to encourage approximate invariance or equivariance, and shows there are some benefits. The paper reads well and the objectives are clear. The study of invariances in CNNs is a very important topic, and advances in this area are greatly appreciated. The paper splits itself into two very different parts -- the empirical study of equivariances in existing CNNs, and the proposal of equivariance objectives. However, taken separately, each of these two parts could be better executed. On the empirical study, its breadth is relatively limited, and it's hard to draw any far-reaching conclusions from it: - Only one network is studied; at least one other architecture would have made for better generalization. - Only one layer (fc7) is studied; this presents issues as the top layer is the most invariant. At least one convolutional layer (possibly more) should have been considered. - The reliance on the scanned text dataset does not help; however, the ImageNet results are definitely very encouraging. It is nice to see how performance degrades with the degree of transformations, and the authors do interpret the results, but it would be better to see more analysis. There is only a limited set of conclusions that can be drawn from evaluating networks with jittered data. If the authors could propose some other interesting ways to assess the invariance and equivariance, they would potentially draw more insightful conclusions from it. On the proposed loss function, only a very quick treatment of it is given (Section 4, half a page). It does not differ too much from known invariance/equivariance objectives studied in the literature previously, e.g. Decoste and Scholkopf, 'Training Invariant Support Vector Machines', Machine Learning, 2002. I'm not sure that dividing the paper into these two different contributions is the best approach; they both feel a bit incomplete, and a full treatment of only one of them would make for an overall better paper. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nImproving Invariance and Equivariance Properties of Convolutional Neural Networks\nConvolutional Neural Networks (CNNs) learn highly discriminative representations from data, but how robust and structured are these representations? How does the data shape the internal network representation? We shed light on these questions by empirically measuring the invariance and equivariance properties of a large number of CNNs trained with various types of input transformations. We find that CNNs learn invariance wrt all 9 tested transformation types and that invariance extends to transformations outside the training range. We also measure the distance between CNN representations and show that similar input transformations lead to more similar internal representations. Transforms can be grouped by the way they affect the learned representation. Additionally, we also propose a loss function that aims to improve CNN equivariance.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis work presents an empirical study of the influence of different types of data augmentation on the performance of CNNs. It also proposes to incorporate additional loss functions to encourage approximate invariance or equivariance, and shows there are some benefits. The paper reads well and the objectives are clear. The study of invariances in CNNs is a very important topic, and advances in this area are greatly appreciated. The paper splits itself in two very different parts -- the empirical study of equivariances in existing CNNs, and the proposal of equivariance objectives. However, taken separately each of these two parts could be better executed. On the empirical study, its breath is relatively limited, and it*s hard to draw any far-reaching conclusions from it: - Only one network is studied; at least one other architecture would have made for better generalization. - Only one layer (fc7) is studied; this presents issues as the top layer is the most invariant. At least one convolutional layer (possibly more) should have been considered. - The reliance on the scanned text dataset does not help; however the ImageNet results are definitely very encouraging. It is nice to see how performance degrades with the degree of transformations, and the authors do interpret the results, but it would be better to see more analysis. There is only a limited set of conclusions that can be drawn from evaluating networks with jittered data. If the authors could propose some other interesting ways to assess the invariance and equivariance, they would potentially draw more insightful conclusions from it. On the proposed loss function, only a very quick treatment of it is given (Section 4, half a page). It does not differ too much from known invariance/equivariance objectives studied in the literature previously, e.g. Decoste and Scholkopf, *Training Invariant Support Vector Machines*, Machine Learning, 2002. I*m not sure that dividing the paper into these two different contributions is the best approach; they both feel a bit incomplete, and a full treatment of only one of them would make for an overall better paper.",
"role": "assistant"
}
]
| [
"This work presents an empirical study of the influence of different types of data augmentation on the performance of CNNs.",
"It also proposes to incorporate additional loss functions to encourage approximate invariance or equivariance, and shows there are some benefits.",
"The paper reads well and the objectives are clear.",
"The study of invariances in CNNs is a very important topic, and advances in this area are greatly appreciated.",
"The paper splits itself in two very different parts -- the empirical study of equivariances in existing CNNs, and the proposal of equivariance objectives.",
"However, taken separately each of these two parts could be better executed.",
"On the empirical study, its breath is relatively limited, and it*s hard to draw any far-reaching conclusions from it: - Only one network is studied; at least one other architecture would have made for better generalization.",
"- Only one layer (fc7) is studied; this presents issues as the top layer is the most invariant.",
"At least one convolutional layer (possibly more) should have been considered.",
"- The reliance on the scanned text dataset does not help; however the ImageNet results are definitely very encouraging.",
"It is nice to see how performance degrades with the degree of transformations, and the authors do interpret the results, but it would be better to see more analysis.",
"There is only a limited set of conclusions that can be drawn from evaluating networks with jittered data.",
"If the authors could propose some other interesting ways to assess the invariance and equivariance, they would potentially draw more insightful conclusions from it.",
"On the proposed loss function, only a very quick treatment of it is given (Section 4, half a page).",
"It does not differ too much from known invariance/equivariance objectives studied in the literature previously, e.g.",
"Decoste and Scholkopf, *Training Invariant Support Vector Machines*, Machine Learning, 2002.",
"I*m not sure that dividing the paper into these two different contributions is the best approach; they both feel a bit incomplete, and a full treatment of only one of them would make for an overall better paper."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 14,
"praise": 5,
"presentation_and_reporting": 3,
"results_and_discussion": 9,
"suggestion_and_solution": 6,
"total": 17
} | 0.176471 | 0 | 0.176471 | 0.823529 | 0.294118 | 0.176471 | 0.529412 | 0.352941 | 17 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.529412 | 2.276893 | 0.252519 |
Improving Invariance and Equivariance Properties of Convolutional Neural Networks | Convolutional Neural Networks (CNNs) learn highly discriminative representations from data, but how robust and structured are these representations? How does the data shape the internal network representation? We shed light on these questions by empirically measuring the invariance and equivariance properties of a large number of CNNs trained with various types of input transformations. We find that CNNs learn invariance wrt all 9 tested transformation types and that invariance extends to transformations outside the training range. We also measure the distance between CNN representations and show that similar input transformations lead to more similar internal representations. Transforms can be grouped by the way they affect the learned representation. Additionally, we also propose a loss function that aims to improve CNN equivariance. | This paper is an extension of the Lenc&Vedaldi15 paper, showing that CNN representations at the FC7 layer are to a certain extent equivariant to various classes of transformations and that training with a certain group of transformations makes the representations more equivalent. The authors performed a large number of experiments, training over 30 networks with different forms of jitter, which is quite impressive. However, it is rather difficult to find a main message in this work. Yes, the authors measured the properties on a different layer than Lenc&Vedaldi15; however, it is hard to find novel insights other than the known fact that jitter helps to achieve invariance. The evaluation seems to be mostly correct; however, the paper does not seem to be solving the task advertised in its title really well. Major issues are in the experiments with the representation distances: * The selection of only FC7 is a bit controversial - it is followed only by a single classification layer (FC8) to the common output - class likelihoods. Because FC8 is just a linear projection, what the equivalence map does is just re-project the FC8 weights of the attached network to the weights of the original network. Performing similar experiments on more layers would probably be more useful (as the networks are already trained). * The experiment with representation distances is missing the classification error on the testing dataset. This would answer whether the representations are actually compatible up to a linear transformation at all... * For the K-NN experiment, it is not clear whether this is measured per test set example. After training the equivalence map? It would be clearer to show that networks trained on a similar group of jitter transformations are more compatible on the target task. * The proposed method does not seem to improve equivariance consistently on all tasks. Especially with lambda_1 and lambda_2 having such small values, the loss is basically equal to simple data jitter, as it just adds up the losses of the original and transformed images. Maybe the issue is in the selection of the FC7 layer? In general, this paper shows some interesting results on FC7 equivariance, but it does not seem to draw many interesting new observations out of these experiments. Due to some issues with the equivalence experiments and the finetuning of equivariance, I would not recommend acceptance of this manuscript. However, refining the experiments on already trained networks and restructuring this manuscript into a more investigative work may lead to an interesting contribution to the field. There are also a few minor issues: * It is not experimentally verified that the new criterion for equivariance mapping helps to gain better results. * The angles on pages 1 and 5 are missing units (degrees?). * On page three, 'In practice, it is difficult...', it is not M_g which is maximised/minimised, but the loss over M_g. * Page 4, footnote 2 - if you are just halving the activations, it is hard to call it dropout, as this constant factor can be passed to the following/preceding weights. * Is the network for RVL-CDIP the same architecture as AlexNet? * On page 7, Figure 3a+3b - in my opinion, turning the diagonal elements to white is really misleading, and probably even incorrect, as the distance between the same representations should be zero (which is also a way to verify that the experiments are performed correctly). | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nImproving Invariance and Equivariance Properties of Convolutional Neural Networks\nConvolutional Neural Networks (CNNs) learn highly discriminative representations from data, but how robust and structured are these representations? How does the data shape the internal network representation? We shed light on these questions by empirically measuring the invariance and equivariance properties of a large number of CNNs trained with various types of input transformations. We find that CNNs learn invariance wrt all 9 tested transformation types and that invariance extends to transformations outside the training range. We also measure the distance between CNN representations and show that similar input transformations lead to more similar internal representations. Transforms can be grouped by the way they affect the learned representation. Additionally, we also propose a loss function that aims to improve CNN equivariance.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper is an extension of Lenc&Vedaldi15 paper, showing CNN representations at FC7 layer are to certain extent equivariant to various classes of transformations and that training with a certain group of transformation makes the representations more equivalent. Authors performed a large amount of experiments, training over 30 networks with different forms of jitter, which is quite impressive. However it is rather difficult to find a main message of this work. Yes, authors measured the properties on a different layer than the Lenc&Vedaldi15, however it is hard to find some novel insights other than the known fact that jitter helps to achieve invariance. The evaluation seems to be mostly correct, however the paper does not seem to be solving the task advertised in its title really well. Major issues are in the experiments with the representation distances: * The selection of only FC7 is a bit controversial - it is followed only by a single classification layer (FC8) to the common output - class likelyhoods. Because the FC8 is just a linear projections, what the equivalence map does is just to re-project the FC8 weights of the attached network to the weights of the original network. Probably performing similar experiments but on more layers may be more useful (as the networks are already trained). * The experiment with representation distance is missing what is the classification error on the testing dataset. This would answer whether the representations are actually compatible up to linear transformation at all... * It is not clear for the experiment with K-NN whether this is measured per each test set example? After training the equivalence map? More clear would be to show that networks trained on similar group of jitter transformations are more compatible on the target task. * The proposed method does not seem to improve equivariance consistently on all tasks. Especially with lambda_1 and lambda_2 having such small values, the loss is basically equal to simple data jitter as it just adds up the loss of the original and transformed image. Maybe the issue is in the selection of the FC7 layer? 
In general, this paper shows some interesting results on the FC7 equivariance, but it does not seem to be drawing many interesting new observations out of these experiments. Due to some issues with the equivalence experiments and the finetuning of equivariance, I would not recommend acceptance of this manuscript. However, refining the experiments on already trained networks and restructuring this manuscript into more investigative work may lead to interesting contribution to the field. There are also few minor issues: * It is not experimentally verified that the new criterion for equivariance mapping helps to gain better results. * The angles on page 1 and 5 are missing units (degrees?). * On page three, *In practice, it is difficult... *, it is not M_g which is maximised/minimised, but the loss over the M_g * Page 4, footnote 2 - if you are just halving the activations, it is hard to call it a dropout as this constant factor can be passed to the following/preceding weights * Is the network for RVL-CDIP the same architecture as Alexnet? * On page 7, Figure 3a+3b - in my opinion, turning the diagonal elements to white is really misleading, and probably even incorrect, as the distance between the same representations should be zero (which is also a way how to verify that the experiments are performed correctly).",
"role": "assistant"
}
]
| [
"This paper is an extension of Lenc&Vedaldi15 paper, showing CNN representations at FC7 layer are to certain extent equivariant to various classes of transformations and that training with a certain group of transformation makes the representations more equivalent.",
"Authors performed a large amount of experiments, training over 30 networks with different forms of jitter, which is quite impressive.",
"However it is rather difficult to find a main message of this work.",
"Yes, authors measured the properties on a different layer than the Lenc&Vedaldi15, however it is hard to find some novel insights other than the known fact that jitter helps to achieve invariance.",
"The evaluation seems to be mostly correct, however the paper does not seem to be solving the task advertised in its title really well.",
"Major issues are in the experiments with the representation distances: * The selection of only FC7 is a bit controversial - it is followed only by a single classification layer (FC8) to the common output - class likelyhoods.",
"Because the FC8 is just a linear projections, what the equivalence map does is just to re-project the FC8 weights of the attached network to the weights of the original network.",
"Probably performing similar experiments but on more layers may be more useful (as the networks are already trained).",
"* The experiment with representation distance is missing what is the classification error on the testing dataset.",
"This would answer whether the representations are actually compatible up to linear transformation at all... * It is not clear for the experiment with K-NN whether this is measured per each test set example?",
"After training the equivalence map?",
"More clear would be to show that networks trained on similar group of jitter transformations are more compatible on the target task.",
"* The proposed method does not seem to improve equivariance consistently on all tasks.",
"Especially with lambda_1 and lambda_2 having such small values, the loss is basically equal to simple data jitter as it just adds up the loss of the original and transformed image.",
"Maybe the issue is in the selection of the FC7 layer?",
"In general, this paper shows some interesting results on the FC7 equivariance, but it does not seem to be drawing many interesting new observations out of these experiments.",
"Due to some issues with the equivalence experiments and the finetuning of equivariance, I would not recommend acceptance of this manuscript.",
"However, refining the experiments on already trained networks and restructuring this manuscript into more investigative work may lead to interesting contribution to the field.",
"There are also few minor issues: * It is not experimentally verified that the new criterion for equivariance mapping helps to gain better results.",
"* The angles on page 1 and 5 are missing units (degrees?).",
"* On page three, *In practice, it is difficult... *, it is not M_g which is maximised/minimised, but the loss over the M_g * Page 4, footnote 2 - if you are just halving the activations, it is hard to call it a dropout as this constant factor can be passed to the following/preceding weights * Is the network for RVL-CDIP the same architecture as Alexnet?",
"* On page 7, Figure 3a+3b - in my opinion, turning the diagonal elements to white is really misleading, and probably even incorrect, as the distance between the same representations should be zero (which is also a way how to verify that the experiments are performed correctly)."
] | {
"criticism": 12,
"example": 3,
"importance_and_relevance": 4,
"materials_and_methods": 18,
"praise": 3,
"presentation_and_reporting": 3,
"results_and_discussion": 4,
"suggestion_and_solution": 3,
"total": 22
} | 0.545455 | 0.136364 | 0.181818 | 0.818182 | 0.136364 | 0.136364 | 0.181818 | 0.136364 | 22 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.272727 | 0.99435 | 1.278377 |
Intelligible Language Modeling with Input Switched Affine Networks | The computational mechanisms by which nonlinear recurrent neural networks (RNNs) achieve their goals remain an open question. There exist many problem domains where intelligibility of the network model is crucial for deployment. Here we introduce a recurrent architecture composed of input-switched affine transformations, in other words an RNN without any nonlinearity and with one set of weights per input. We show that this architecture achieves near identical performance to traditional architectures on language modeling of Wikipedia text, for the same number of model parameters. It can obtain this performance with the potential for computational speedup compared to existing methods, by precomputing the composed affine transformations corresponding to longer input sequences. As our architecture is affine, we are able to understand the mechanisms by which it functions using linear methods. For example, we show how the network linearly combines contributions from the past to make predictions at the current time step. We show how representations for words can be combined in order to understand how context is transferred across word boundaries. Finally, we demonstrate how the system can be executed and analyzed in arbitrary bases to aid understanding. | Summary: The authors propose an input switched affine network to do character-level language modeling, a kind of RNN without pointwise nonlinearity, but with switching the transition matrix & bias based on the input character. This is motivated by intelligibility, since it allows decomposition of output contribution into these kappa_s^t terms, and use of basic linear algebra to probe the network. Regarding myself as a reviewer, I am quite sure I understood the main ideas and arguments of this paper, but am not an expert on RNN language models or intelligibility/interpretability in ML. I did not read any papers with a similar premise - the closest related work I'm familiar with would be deconvnet for insight into vision-CNNs. PRO: I think this is original and novel work. This work is high quality, well written, and clearly is the result of a lot of work. I found section 4.5 about projecting into the readout subspace vs the 'computational' subspace most interesting and meaningful. CON: + The main hesitation I have is that the results on both parts (the ISAN model, and the analysis of it) are not entirely convincing: (1) ISAN is only trained on a small task (text8), and it is not clear whether it can be a strong char-LM on larger-scale tasks, (2) nor do the analysis sections provide all that much real insight into the learned network. (1b) Another caveat about the ISAN architecture: this model in its proposed form is really only fit for small-vocabulary (i.e. character-based) language modeling, not a general RNN with large-vocab discrete input nor continuous input. (2a) For the analysis: many cute plots and fun ideas of quantities to look at, but not much concrete insight. (2b) It is not very clear which analysis is specific to the ISAN model, and which ideas will generalize to general nonlinear RNNs. (2c) Re sec 4.2 - 4.3: It seems that the quantity kappa_s^t on which the analysis rests isn't all that meaningful. Elaborating a bit on what I wrote in the question: For example: Fig 2, for input letter 'u' in 'revenue', there's a red spot where the '_' character massively positively impacts the logit of 'e'. This seems quite meaningless; what would be the meaning of the influence of the '_' character? So it looks to me that the switching matrix W_u (and the prior W_n, W_e, etc.) is using the previous state in an interesting way to produce that following 'e'. So that metric kappa_s^t just doesn't seem very meaningful. This remark relates to the last paragraph of Sec 4.2. Even though the list of cons here is longer than the pros, I recommend acceptance, specifically because the originality of this work will in any case make it more vulnerable to critiques. This work is well-motivated, very well-executed, and can inspire many more interesting investigations along these lines. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such a sentence or how to make it clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nIntelligible Language Modeling with Input Switched Affine Networks\nThe computational mechanisms by which nonlinear recurrent neural networks (RNNs) achieve their goals remains an open question. There exist many problem domains where intelligibility of the network model is crucial for deployment. Here we introduce a recurrent architecture composed of input-switched affine transformations, in other words an RNN without any nonlinearity and with one set of weights per input. We show that this architecture achieves near identical performance to traditional architectures on language modeling of Wikipedia text, for the same number of model parameters. It can obtain this performance with the potential for computational speedup compared to existing methods, by precomputing the composed affine transformations corresponding to longer input sequences. As our architecture is affine, we are able to understand the mechanisms by which it functions using linear methods. For example, we show how the network linearly combines contributions from the past to make predictions at the current time step. We show how representations for words can be combined in order to understand how context is transferred across word boundaries. Finally, we demonstrate how the system can be executed and analyzed in arbitrary bases to aid understanding.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nSummary: The authors propose an input switched affine network to do character-level language modeling, a kind of RNN without pointwise nonlinearity, but with switching the transition matrix & bias based on the input character. This is motivated by intelligibility, since it allows decomposition of output contribution into these kappa_s^t terms, and use of basic linear algebra to probe the network. Regarding myself as a reviewer, I am quite sure I understood the main ideas and arguments of this paper, but am not an expert on RNN language models or intelligibility/interpretability in ML. I did not read any papers with a similar premise - closest related work I*m familiar with would be deconvnet for insight into vision-CNNs. PRO: I think this is original and novel work. This work is high quality, well written, and clearly is the result of a lot of work. I found section 4.5 about projecting into readout subspace vs *computational* subspace most interesting and meaningful. CON: + The main hesitation I have is that the results on both parts (ISAN model, and analysis of it) are not entirely convincing: (1) ISAN is only trained on small task (text8), not clear whether it can be a strong char-LM on larger scale tasks, (2) nor do the analysis sections provide all that much real insight in the learned network. (1b) Other caveat towards ISAN architecture: this model in its proposed form is really only fit for small-vocabulary (i.e. character-based) language modeling, not a general RNN with large-vocab discrete input nor continuous input. (2a) For analysis: many cute plots and fun ideas of quantities to look at, but not much concrete insights. (2b) Not very clear which analysis is specific to the ISAN model, and which ideas will generalize to general nonlinear RNNs. (2c) Re sec 4.2 - 4.3: It seems that the quantity kappa_s^t on which analysis rests, isn*t all that meaningful. Elaborating a bit on what I wrote in the question: For example: Fig 2, for input letter *u* in revenue, there*s a red spot where *_* character massively positively impacts the logit of *e*. 
This seems quite meaningless: what would be the meaning of the influence of the '_' character? So it looks to me that the switching matrix W_u (and the prior W_n, W_e, etc.) is using the previous state in an interesting way to produce that following 'e'. So that metric kappa_s^t just doesn't seem very meaningful. This remark relates to the last paragraph of Sec4.2. Even though the list of cons here is longer than pros, I recommend accept; specifically because the originality of this work will in any case make it more vulnerable to critiques. This work is well-motivated, very well-executed, and can inspire many more interesting investigations along these lines.",
"role": "assistant"
}
]
| [
"Summary: The authors propose an input switched affine network to do character-level language modeling, a kind of RNN without pointwise nonlinearity, but with switching the transition matrix & bias based on the input character.",
"This is motivated by intelligibility, since it allows decomposition of output contribution into these kappa_s^t terms, and use of basic linear algebra to probe the network.",
"Regarding myself as a reviewer, I am quite sure I understood the main ideas and arguments of this paper, but am not an expert on RNN language models or intelligibility/interpretability in ML.",
"I did not read any papers with a similar premise - closest related work I*m familiar with would be deconvnet for insight into vision-CNNs.",
"PRO: I think this is original and novel work.",
"This work is high quality, well written, and clearly is the result of a lot of work.",
"I found section 4.5 about projecting into readout subspace vs *computational* subspace most interesting and meaningful.",
"CON: + The main hesitation I have is that the results on both parts (ISAN model, and analysis of it) are not entirely convincing: (1) ISAN is only trained on small task (text8), not clear whether it can be a strong char-LM on larger scale tasks, (2) nor do the analysis sections provide all that much real insight in the learned network.",
"(1b) Other caveat towards ISAN architecture: this model in its proposed form is really only fit for small-vocabulary (i.e.",
"character-based) language modeling, not a general RNN with large-vocab discrete input nor continuous input.",
"(2a) For analysis: many cute plots and fun ideas of quantities to look at, but not much concrete insights.",
"(2b) Not very clear which analysis is specific to the ISAN model, and which ideas will generalize to general nonlinear RNNs.",
"(2c) Re sec 4.2 - 4.3: It seems that the quantity kappa_s^t on which analysis rests, isn*t all that meaningful.",
"Elaborating a bit on what I wrote in the question: For example: Fig 2, for input letter *u* in revenue, there*s a red spot where *_* character massively positively impacts the logit of *e*.",
"This seems quite meaningless, what would be the meaning of influence of *_* character?",
"So it looks ot me that the switching matrix W_u (and prior W_n W_e etc) are using previous state in an interesting way to produce that following e. So that metric kappa_s^t just doesn*t seem very meaningful.",
"This remark relates to the last paragraph of Sec4.2.",
"Even though the list of cons here is longer than pro*s, I recommend accept; specifically because the originality of this work will in any case make it more vulnerable to critiques.",
"This work is well-motivated, very well-executed, and can inspire many more interesting investigations along these lines."
] | {
"criticism": 9,
"example": 4,
"importance_and_relevance": 8,
"materials_and_methods": 8,
"praise": 5,
"presentation_and_reporting": 6,
"results_and_discussion": 4,
"suggestion_and_solution": 1,
"total": 19
} | 0.473684 | 0.210526 | 0.421053 | 0.421053 | 0.263158 | 0.315789 | 0.210526 | 0.052632 | 19 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.368421 | 1.800254 | 0.568167 |
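A working note on the trailing numeric columns: they appear to follow directly from each row's criteria_count field. Each per-category ratio is count/total, each has_* flag is 1 when the count is nonzero, and reward_value matches the sum of the eight category counts divided by total (45/19 = 2.368421 for the record above), with reward_value_length_adjusted = reward_value - length_penalty (2.368421 - 0.568167 = 1.800254). A minimal Python sketch of that reconstruction; the field names are the dataset's, the function is ours, and length_penalty is taken as given since its own formula is not recoverable from a single row.

```python
CATEGORIES = [
    "criticism", "example", "importance_and_relevance", "materials_and_methods",
    "praise", "presentation_and_reporting", "results_and_discussion",
    "suggestion_and_solution",
]

def derived_fields(criteria_count: dict, length_penalty: float) -> dict:
    """Recompute the ratio, has_* and reward columns from criteria_count."""
    total = criteria_count["total"]
    # per-category ratios: count / total
    out = {cat: criteria_count[cat] / total for cat in CATEGORIES}
    # binary indicators: 1 iff the category appears at least once
    out.update({f"has_{cat}": int(criteria_count[cat] > 0) for cat in CATEGORIES})
    out["total"] = total
    # reward: sum of the eight category counts, normalized by sentence count
    out["reward_value"] = sum(criteria_count[cat] for cat in CATEGORIES) / total
    out["reward_value_length_adjusted"] = out["reward_value"] - length_penalty
    return out
```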
Sequence generation with a physiologically plausible model of handwriting and Recurrent Mixture Density Networks | The purpose of this study is to explore the feasibility and potential benefits of using a physiologically plausible model of handwriting as a feature representation for sequence generation with recurrent mixture density networks. We build on recent results in handwriting prediction developed by Graves (2013), and we focus on generating sequences that possess the statistical and dynamic qualities of handwriting and calligraphic art forms. Rather than model raw sequence data, we first preprocess and reconstruct the input training data with a concise representation given by a motor plan (in the form of a coarse sequence of 'ballistic' targets) and corresponding dynamic parameters (which define the velocity and curvature of the pen-tip trajectory). This representation provides a number of advantages, such as enabling the system to learn from very few examples by introducing artificial variability in the training data, and mixing of visual and dynamic qualities learned from different datasets. | This paper has no machine learning algorithmic contribution: it just uses the same combination of LSTM and bivariate mixture density network as Graves, and the detailed explanation in the appendix even misses one key essential point: how are the Gaussian parameters obtained as a transformation of the output of the LSTM. There is also no numerical evaluation suggesting that the algorithm is some form of improvement over the state-of-the-art. So I do not think such a paper is appropriate for a conference like ICLR. The part describing the handwriting tasks and the data transformation is well written and interesting to read; it could be valuable work for a conference more focused on handwriting recognition, but I am no expert in the field. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSequence generation with a physiologically plausible model of handwriting and Recurrent Mixture Density Networks\nThe purpose of this study is to explore the feasibility and potential benefits of using a physiological plausible model of handwriting as a feature representation for sequence generation with recurrent mixture density networks. We build on recent results in handwriting prediction developed by Graves (2013), and we focus on generating sequences that possess the statistical and dynamic qualities of handwriting and calligraphic art forms. Rather than model raw sequence data, we first preprocess and reconstruct the input training data with a concise representation given by a motor plan (in the form of a coarse sequence of `ballistic* targets) and corresponding dynamic parameters (which define the velocity and curvature of the pen-tip trajectory). This representation provides a number of advantages, such as enabling the system to learn from very few examples by introducing artificial variability in the training data, and mixing of visual and dynamic qualities learned from different datasets.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper has no machine learning algorithmic contribution: it just uses the the same combination of LSTM and bivariate mixture density network as Graves, and the detailed explanation in the appendix even misses one key essential point: how are the Gaussian parameters obtained as a transformation of the output of the LSTM. There are also no numerical evaluation suggesting that the algorithm is some form of improvement over the state-of-the-art. So I do not think such a paper is appropriate for a conference like ICLR. The part describing the handwriting tasks and the data transformation is well written and interesting to read, it could be valuable work for a conference more focused on handwriting recognition, but I am no expert in the field.",
"role": "assistant"
}
]
| [
"This paper has no machine learning algorithmic contribution: it just uses the the same combination of LSTM and bivariate mixture density network as Graves, and the detailed explanation in the appendix even misses one key essential point: how are the Gaussian parameters obtained as a transformation of the output of the LSTM.",
"There are also no numerical evaluation suggesting that the algorithm is some form of improvement over the state-of-the-art.",
"So I do not think such a paper is appropriate for a conference like ICLR.",
"The part describing the handwriting tasks and the data transformation is well written and interesting to read, it could be valuable work for a conference more focused on handwriting recognition, but I am no expert in the field."
] | {
"criticism": 4,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 3,
"praise": 1,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 4
} | 1 | 0 | 0.25 | 0.75 | 0.25 | 0.25 | 0.25 | 0.25 | 4 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 3 | 1.721623 | 1.278377 |
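The review above faults the appendix for omitting how the Gaussian parameters are obtained from the LSTM output. The standard construction in Graves (2013) is straightforward to state; here is a minimal NumPy sketch of it (the function name and output layout are our assumptions, not the paper's): mixture weights go through a softmax, standard deviations through exp, correlations through tanh, means are used directly, and the end-of-stroke probability follows Graves's e = 1/(1 + exp(ê)).

```python
import numpy as np

def mdn_params(y_t: np.ndarray, M: int):
    """Map a raw network output y_t of length 1 + 6*M to bivariate mixture
    density parameters for M components, following Graves (2013)."""
    e_hat, rest = y_t[0], y_t[1:]
    pi_hat, mu1, mu2, s1_hat, s2_hat, rho_hat = np.split(rest, 6)
    e = 1.0 / (1.0 + np.exp(e_hat))                  # end-of-stroke probability
    pi = np.exp(pi_hat - pi_hat.max())
    pi = pi / pi.sum()                               # softmax -> mixture weights
    sigma1, sigma2 = np.exp(s1_hat), np.exp(s2_hat)  # stds constrained positive
    rho = np.tanh(rho_hat)                           # correlation in (-1, 1)
    return e, pi, (mu1, mu2), (sigma1, sigma2), rho
```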
Out-of-class novelty generation: an experimental foundation | Recent advances in machine learning have brought the field closer to computational creativity research. From a creativity research point of view, this offers the potential to study creativity in relationship with knowledge acquisition. From a machine learning perspective, however, several aspects of creativity need to be better defined to allow the machine learning community to develop and test hypotheses in a systematic way. We propose an actionable definition of creativity as the generation of out-of-distribution novelty. We assess several metrics designed for evaluating the quality of generative models on this new task. We also propose a new experimental setup. Inspired by the usual held-out validation, we hold out entire classes for evaluating the generative potential of models. The goal of the novelty generator is then to use training classes to build a model that can generate objects from future (hold-out) classes, unknown at training time - and thus, are novel with respect to the knowledge the model incorporates. Through extensive experiments on various types of generative models, we are able to find architectures and hyperparameter combinations which lead to out-of-distribution novelty. | First, the bad: This paper is frustratingly written. The grammar is fine, but: - The first four pages are completely theoretical and difficult to follow without any concrete examples. These sections would benefit greatly from a common example woven through the different aspects of the theoretical discussion. - The ordering of the exposition is also frustrating. I found myself constantly having to refer ahead to figures and back to details that were important but seemingly presented out of order. Perhaps a reordering of some details could fix this. Recommendation: give the most naturally ordered oral presentation of the work and then order the paper similarly. Finally, the description of the experiments is cursory, and I found myself wondering whether the details omitted were important or not. Including experimental details in a supplementary section could help assuage these fears. The good: What the paper does well is to gather together past work on novelty generation and propose a unified framework in which to evaluate past and future models. This is done by repurposing existing generative model evaluation metrics for the task of evaluating novelty. The experiments are basic, but even the basic experiments go beyond previous work in this area (to this reviewer’s knowledge). Overall I recommend the paper be accepted, but I strongly recommend rewriting some components to make it more digestible. As with other novelty papers, it would be read thoroughly by the interested few, but it is likely to fight an uphill battle against the majority of readers outside the sub-sub-field of novelty generation; for this reason the theory should be made even more intuitive and clear and the experiments and results even more accessible. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. 
Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOut-of-class novelty generation: an experimental foundation\nRecent advances in machine learning have brought the field closer to computational creativity research. From a creativity research point of view, this offers the potential to study creativity in relationship with knowledge acquisition. From a machine learning perspective, however, several aspects of creativity need to be better defined to allow the machine learning community to develop and test hypotheses in a systematic way. We propose an actionable definition of creativity as the generation of out-of-distribution novelty. We assess several metrics designed for evaluating the quality of generative models on this new task. We also propose a new experimental setup. Inspired by the usual held-out validation, we hold out entire classes for evaluating the generative potential of models. The goal of the novelty generator is then to use training classes to build a model that can generate objects from future (hold-out) classes, unknown at training time - and thus, are novel with respect to the knowledge the model incorporates. Through extensive experiments on various types of generative models, we are able to find architectures and hyperparameter combinations which lead to out-of-distribution novelty.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nFirst, the bad: This paper is frustratingly written. The grammar is fine, but: - The first four pages are completely theoretical and difficult to follow without any concrete examples. These sections would benefit greatly from a common example woven through the different aspects of the theoretical discussion. - The ordering of the exposition is also frustrating. I found myself constantly having to refer ahead to figures and back to details that were important but seemingly presented out of order. Perhaps a reordering of some details could fix this. Recommendation: give the most naturally ordered oral presentation of the work and then order the paper similarly. Finally, the description of the experiments is cursory, and I found myself wondering whether the details omitted were important or not. Including experimental details in a supplementary section could help assuage these fears. The good: What the paper does well is to gather together past work on novelty generation and propose a unified framework in which to evaluate past and future models. This is done by repurposing existing generative model evaluation metrics for the task of evaluating novelty. The experiments are basic, but even the basic experiments go beyond previous work in this area (to this reviewer’s knowledge). Overall I recommend the paper be accepted, but I strongly recommend rewriting some components to make it more digestible. As with other novelty papers, it would be read thoroughly by the interested few, but it is likely to fight an uphill battle against the majority of readers outside the sub-sub-field of novelty generation; for this reason the theory should be made even more intuitive and clear and the experiments and results even more accessible.",
"role": "assistant"
}
]
| [
"First, the bad: This paper is frustratingly written.",
"The grammar is fine, but: - The first four pages are completely theoretical and difficult to follow without any concrete examples.",
"These sections would benefit greatly from a common example woven through the different aspects of the theoretical discussion.",
"- The ordering of the exposition is also frustrating.",
"I found myself constantly having to refer ahead to figures and back to details that were important but seemingly presented out of order.",
"Perhaps a reordering of some details could fix this.",
"Recommendation: give the most naturally ordered oral presentation of the work and then order the paper similarly.",
"Finally, the description of the experiments is cursory, and I found myself wondering whether the details omitted were important or not.",
"Including experimental details in a supplementary section could help assuage these fears.",
"The good: What the paper does well is to gather together past work on novelty generation and propose a unified framework in which to evaluate past and future models.",
"This is done by repurposing existing generative model evaluation metrics for the task of evaluating novelty.",
"The experiments are basic, but even the basic experiments go beyond previous work in this area (to this reviewer’s knowledge).",
"Overall I recommend the paper be accepted, but I strongly recommend rewriting some components to make it more digestible.",
"As with other novelty papers, it would be read thoroughly by the interested few, but it is likely to fight an uphill battle against the majority of readers outside the sub-sub-field of novelty generation; for this reason the theory should be made even more intuitive and clear and the experiments and results even more accessible."
] | {
"criticism": 5,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 6,
"praise": 5,
"presentation_and_reporting": 8,
"results_and_discussion": 2,
"suggestion_and_solution": 5,
"total": 14
} | 0.357143 | 0 | 0.214286 | 0.428571 | 0.357143 | 0.571429 | 0.142857 | 0.357143 | 14 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.428571 | 2.412789 | 0.015782 |
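The held-out-class protocol in the abstract above is easy to pin down in code. A minimal sketch under an assumed array layout (the function name and signature are ours): the novelty generator trains only on the kept classes, and the held-out classes serve as the reference set when scoring out-of-class novelty.

```python
import numpy as np

def holdout_class_split(X, y, holdout_classes):
    """Hold out entire classes rather than random samples.

    (X_train, y_train) is all the novelty generator may see; (X_holdout,
    y_holdout) defines the unseen classes used to judge whether generated
    objects count as out-of-class novelty.
    """
    mask = np.isin(y, list(holdout_classes))
    return (X[~mask], y[~mask]), (X[mask], y[mask])

# e.g. train on digit classes 0-7 and hold out {8, 9} as the novelty reference
```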
Representation Stability as a Regularizer for Improved Text Analytics Transfer Learning | Although neural networks are well suited for sequential transfer learning tasks, the catastrophic forgetting problem hinders proper integration of prior knowledge. In this work, we propose a solution to this problem by using a multi-task objective based on the idea of distillation and a mechanism that directly penalizes forgetting at the shared representation layer during the knowledge integration phase of training. We demonstrate our approach on a Twitter domain sentiment analysis task with sequential knowledge transfer from four related tasks. We show that our technique outperforms networks fine-tuned to the target task. Additionally, we show both through empirical evidence and examples that it does not forget useful knowledge from the source task that is forgotten during standard fine-tuning. Surprisingly, we find that first distilling a human made rule based sentiment engine into a recurrent neural network and then integrating the knowledge with the target task data leads to a substantial gain in generalization performance. Our experiments demonstrate the power of multi-source transfer techniques in practical text analytics problems when paired with distillation. In particular, for the SemEval 2016 Task 4 Subtask A (Nakov et al., 2016) dataset we surpass the state of the art established during the competition with a comparatively simple model architecture that is not even competitive when trained on only the labeled task specific data. | This paper proposes a regularization technique for neural network training that relies on having multiple related tasks or datasets in a transfer learning setting. The proposed technique is straightforward to describe and can also leverage external labeling systems perhaps based on logical rules. The paper is clearly written and the experiments seem relatively thorough. Overall this is a nice paper but does not fully address how robust the proposed technique is. For each experiment there seems to be a slightly different application of the proposed technique, or a lot of ensembling and cross validation. I can’t figure out if this is because the proposed technique does not work well in general and thus required a lot of fiddling to get right in experiments, or if this is simply an artifact of ad-hoc experiments to try and get the best performance overall. If more datasets or addressing this issue directly in discussion were able to show the strengths and limitations of the proposed technique more clearly, this could be a great paper. Overall the proposed method seems nice and possibly useful for other problems. However in the details of logical rule distillation and various experiment settings it seems like there is a lot of running the model many times or selecting a particular way of reusing the models and data that makes me wonder how robust the technique is or whether it requires a lot of trying various approaches, ensembling, or picking the best model from cross validation to show real gains. The authors could help by discussing this explicitly for all experiments in one place rather than listing the various choices / approaches in each experiment. 
As an example, these sorts of phrases make me very unsure how reliable the method is in practice versus how much the authors had to engineer this regularizer to perform well: “We noticed that equation 8 is actually prone to overfitting away from a good solution on the test set although it often finds a pretty good one early in training.” The introduction section should first review the definitions of transfer learning vs multi-task learning to make the discussion more clear. It also needs justification for why “catastrophic forgetting” is actually a problem. If the final target task is the only thing of interest then forgetting the source task is not an issue and the authors should motivate why forgetting matters in their setting. This paper explores sequential transfer so it’s not obvious why forgetting the source task matters. Section 7 introduces the logical rules engine in a fairly specific context. Rather, it would be good to state more generally what this system entails to help people figure out how this method would apply to other problems. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRepresentation Stability as a Regularizer for Improved Text Analytics Transfer Learning\nAlthough neural networks are well suited for sequential transfer learning tasks, the catastrophic forgetting problem hinders proper integration of prior knowledge. In this work, we propose a solution to this problem by using a multi-task objective based on the idea of distillation and a mechanism that directly penalizes forgetting at the shared representation layer during the knowledge integration phase of training. We demonstrate our approach on a Twitter domain sentiment analysis task with sequential knowledge transfer from four related tasks. We show that our technique outperforms networks fine-tuned to the target task. Additionally, we show both through empirical evidence and examples that it does not forget useful knowledge from the source task that is forgotten during standard fine-tuning. Surprisingly, we find that first distilling a human made rule based sentiment engine into a recurrent neural network and then integrating the knowledge with the target task data leads to a substantial gain in generalization performance. Our experiments demonstrate the power of multi-source transfer techniques in practical text analytics problems when paired with distillation. In particular, for the SemEval 2016 Task 4 Subtask A (Nakov et al., 2016) dataset we surpass the state of the art established during the competition with a comparatively simple model architecture that is not even competitive when trained on only the labeled task specific data.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a regularization technique for neural network training that relies on having multiple related tasks or datasets in a transfer learning setting. The proposed technique is straightforward to describe and can also leverage external labeling systems perhaps based on logical rules. The paper is clearly written and the experiments seem relatively thorough. Overall this is a nice paper but does not fully address how robust the proposed technique is. For each experiment there seems to be a slightly different application of the proposed technique, or a lot of ensembling and cross validation. I can’t figure out if this is because the proposed technique does not work well in general and thus required a lot of fiddling to get right in experiments, or if this is simply an artifact of ad-hoc experiments to try and get the best performance overall. If more datasets or addressing this issue directly in discussion was able to show this the strengths and limitations of the proposed technique more clearly, this could be a great paper. Overall the proposed method seems nice and possibly useful for other problems. However in the details of logical rule distillation and various experiment settings it seems like there is a lot of running the model many times or selecting a particular way of reusing the models and data that makes me wonder how robust the technique is or whether it requires a lot of trying various approaches, ensembling, or picking the best model from cross validation to show real gains. The authors could help by discussing this explicitly for all experiments in one place rather than listing the various choices / approaches in each experiment. As an example, these sorts of phrases make me very unsure how reliable the method is in practice versus how much the authors had to engineer this regularizer to perform well: “We noticed that equation 8 is actually prone to overfitting away from a good solution on the test set although it often finds a pretty good one early in training. “ The introduction section should first review the definitions of transfer learning vs multi-task learning to make the discussion more clear. 
It also needs justification for why “catastrophic forgetting” is actually a problem. If the final target task is the only thing of interest then forgetting the source task is not an issue and the authors should motivate why forgetting matters in their setting. This paper explores sequential transfer so it’s not obvious why forgetting the source task matters. Section 7 introduces the logical rules engine in a fairly specific context. Rather, it would be good to state more generally what this system entails to help people figure out how this method would apply to other problems.",
"role": "assistant"
}
]
| [
"This paper proposes a regularization technique for neural network training that relies on having multiple related tasks or datasets in a transfer learning setting.",
"The proposed technique is straightforward to describe and can also leverage external labeling systems perhaps based on logical rules.",
"The paper is clearly written and the experiments seem relatively thorough.",
"Overall this is a nice paper but does not fully address how robust the proposed technique is.",
"For each experiment there seems to be a slightly different application of the proposed technique, or a lot of ensembling and cross validation.",
"I can’t figure out if this is because the proposed technique does not work well in general and thus required a lot of fiddling to get right in experiments, or if this is simply an artifact of ad-hoc experiments to try and get the best performance overall.",
"If more datasets or addressing this issue directly in discussion was able to show this the strengths and limitations of the proposed technique more clearly, this could be a great paper.",
"Overall the proposed method seems nice and possibly useful for other problems.",
"However in the details of logical rule distillation and various experiment settings it seems like there is a lot of running the model many times or selecting a particular way of reusing the models and data that makes me wonder how robust the technique is or whether it requires a lot of trying various approaches, ensembling, or picking the best model from cross validation to show real gains.",
"The authors could help by discussing this explicitly for all experiments in one place rather than listing the various choices / approaches in each experiment.",
"As an example, these sorts of phrases make me very unsure how reliable the method is in practice versus how much the authors had to engineer this regularizer to perform well: “We noticed that equation 8 is actually prone to overfitting away from a good solution on the test set although it often finds a pretty good one early in training.",
"“ The introduction section should first review the definitions of transfer learning vs multi-task learning to make the discussion more clear.",
"It also deems justification why “catastrophic forgetting” is actually a problem.",
"If the final target task is the only thing of interest then forgetting the source task is not an issue and the authors should motivate why forgetting matters in their setting.",
"This paper explores sequential transfer so it’s not obvious why forgetting the source task matters.",
"Section 7 introduces the logical rules engine in a fairly specific context.",
"Rather it would be good state more generally what this system entails to help people figure out how this method would apply to other problems."
] | {
"criticism": 4,
"example": 1,
"importance_and_relevance": 3,
"materials_and_methods": 13,
"praise": 4,
"presentation_and_reporting": 4,
"results_and_discussion": 5,
"suggestion_and_solution": 5,
"total": 17
} | 0.235294 | 0.058824 | 0.176471 | 0.764706 | 0.235294 | 0.235294 | 0.294118 | 0.294118 | 17 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.294118 | 2.041599 | 0.252519 |
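The abstract names the three ingredients of the objective (target-task loss, distillation, and a penalty on forgetting at the shared representation layer) without giving a formula here; below is a hedged PyTorch sketch of one way those pieces could combine. The weights lam and beta, the temperature tau, and all function names are our assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def transfer_loss(encoder, frozen_encoder, head, x, y,
                  teacher_logits=None, lam=1.0, beta=1.0, tau=2.0):
    """Target-task loss + representation-stability penalty + distillation."""
    h = encoder(x)
    logits = head(h)
    loss = F.cross_entropy(logits, y)            # target task
    with torch.no_grad():
        h_src = frozen_encoder(x)                # snapshot taken before fine-tuning
    loss = loss + lam * F.mse_loss(h, h_src)     # penalize drift at the shared
                                                 # representation layer
    if teacher_logits is not None:               # distill the source model
        loss = loss + beta * (tau ** 2) * F.kl_div(
            F.log_softmax(logits / tau, dim=-1),
            F.softmax(teacher_logits / tau, dim=-1),
            reduction="batchmean",
        )
    return loss
```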
Extensions and Limitations of the Neural GPU | The Neural GPU is a recent model that can learn algorithms such as multi-digit binary addition and binary multiplication in a way that generalizes to inputs of arbitrary length. We show that there are two simple ways of improving the performance of the Neural GPU: by carefully designing a curriculum, and by increasing model size. The latter requires a memory efficient implementation, as a naive implementation of the Neural GPU is memory intensive. We find that these techniques increase the set of algorithmic problems that can be solved by the Neural GPU: we have been able to learn to perform all the arithmetic operations (and generalize to arbitrarily long numbers) when the arguments are given in the decimal representation (which, surprisingly, has not been possible before). We have also been able to train the Neural GPU to evaluate long arithmetic expressions with multiple operands that require respecting the precedence order of the operands, although these have succeeded only in their binary representation, and not with perfect accuracy. In addition, we gain insight into the Neural GPU by investigating its failure modes. We find that Neural GPUs that correctly generalize to arbitrarily long numbers still fail to compute the correct answer on highly-symmetric, atypical inputs: for example, a Neural GPU that achieves near-perfect generalization on decimal multiplication of up to 100-digit long numbers can fail on while succeeding at . These failure modes are reminiscent of adversarial examples. | Overall the paper has the feel of a status update by some of the best researchers in the field. The paper is very clear, the observations are interesting, but the remarks are scattered and don't add up to a quantum of progress in the study of what can be done with the Neural GPU model. Minor remark on the use of the term RNN in Table 1: I found Table 1 confusing because several of the columns are for models that are technically RNNs, and use of RNNs for e.g. translation and word2vec highlights that RNNs can be characterized in terms of the length of their input sequence, the length of their output sequence, and the sizes (per step) of their input, output, and working memories. Basic model question: How are inputs presented (each character 1-hot?) and outputs retrieved when there are e.g. 512 “filters” in the model? If inputs and outputs are 1-hot encoded, and treated with the same filters as intermediate layers, then the intermediate activation functions should be interpretable as digits, and we should be able to interpret the filters as implementing a reliable e.g. multiplication-with-carry algorithm. Looking at the intermediate values may shed some light on why the usually-working models fail on e.g. the pathological cases identified in Table 3. The preliminary experiment on input alignment is interesting in two ways: the seeds for effective use of an attentional mechanism are there, but also, it suggests that the model is not presently dealing with general expression evaluation the way a correct algorithm should. The remarks in the abstract about improving the memory efficiency of Neural GPU seem overblown -- the paragraph at the top of page 6 describes the improvements as using tf.while_loop instead of unrolling the graph, and using swap_memory to use host memory when GPU memory runs short. 
These both seem like good practice, but not a remarkable improvement to the efficiency of the model; in fact, it would likely slow down training and inference when memory does in fact fit in the GPU. The point about trying many random seeds to get convergence makes me wonder if the Neural GPU is worth its computational cost at all, when evaluated as a means of learning algorithms that are already well understood (e.g. parsing and evaluating S-exprs). Consider spending all of the computational cycles that go into training one of these models (with the multiple seeds) on a traditional search through program space (e.g. sampling lisp programs or something). The notes on the curriculum strategies employed to get the presented results were interesting to read, as an indication of the lengths to which someone might have to go to train this sort of model, but it does leave this reviewer with the impression that despite the stated extensions of the Neural GPU model it remains unclear how useful it might be to practical problems. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nExtensions and Limitations of the Neural GPU\nThe Neural GPU is a recent model that can learn algorithms such as multi-digit binary addition and binary multiplication in a way that generalizes to inputs of arbitrary length. We show that there are two simple ways of improving the performance of the Neural GPU: by carefully designing a curriculum, and by increasing model size. The latter requires a memory efficient implementation, as a naive implementation of the Neural GPU is memory intensive. We find that these techniques increase the set of algorithmic problems that can be solved by the Neural GPU: we have been able to learn to perform all the arithmetic operations (and generalize to arbitrarily long numbers) when the arguments are given in the decimal representation (which, surprisingly, has not been possible before). We have also been able to train the Neural GPU to evaluate long arithmetic expressions with multiple operands that require respecting the precedence order of the operands, although these have succeeded only in their binary representation, and not with perfect accuracy. In addition, we gain insight into the Neural GPU by investigating its failure modes. We find that Neural GPUs that correctly generalize to arbitrarily long numbers still fail to compute the correct answer on highly-symmetric, atypical inputs: for example, a Neural GPU that achieves near-perfect generalization on decimal multiplication of up to 100-digit long numbers can fail on while succeeding at . These failure modes are reminiscent of adversarial examples.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nOverall the paper has the feel of a status update by some of the best researchers in the field. The paper is very clear, the observations are interesting, but the remarks are scattered and don*t add up to a quantum of progress in the study of what can be done with the Neural GPU model. Minor remark on the use of the term RNN in Table 1: I found Table 1 confusing because several of the columns are for models that are technically RNNs, and use of RNNs for e.g. translation and word2vec highlight that RNNs can be characterized in terms of the length of their input sequence, the length of their input, and the sizes (per step) of their input, output, and working memories. Basic model question: How are inputs presented (each character 1-hot?) and outputs retrieved when there are e.g. 512 “filters” in the model ? If inputs and outputs are 1-hot encoded, and treated with the same filters as intermediate layers, then the intermediate activation functions should be interpretable as digits, and we should be able to interpret the filters as implementing a reliable e.g. multiplication-with-carry algorithm. Looking at the intermediate values may shed some light on why the usually-working models fail on e.g. the pathological cases identified in Table 3. The preliminary experiment on input alignment is interesting in two ways: the seeds for effective use of an attentional mechanism are there, but also, it suggests that the model is not presently dealing with general expression evaluation the way a correct algorithm should. The remarks in the abstract about improving the memory efficiency of Neural GPU seem overblown -- the paragraph at the top of page 6 describes the improvements as using tf.while_loop instead of unrolling the graph, and using swap_memory to use host memory when GPU memory runs short. These both seem like good practice, but not a remarkable improvement to the efficiency of the model, in fact it would likely slow down training and inference when memory does in fact fit in the GPU. 
The point about trying many random seeds to get convergence makes me wonder if the Neural GPU is worth its computational cost at all, when evaluated as a means of learning algorithms that are already well understood (e.g. parsing and evaluating S-exprs). Consider spending all of the computational cycles that go into training one of these models (with the multiple seeds) on a traditional search through program space (e.g. sampling lisp programs or something). The notes on the curriculum strategies employed to get the presented results were interesting to read, as an indication of the lengths to which someone might have to go to train this sort of model, but it does leave this reviewer with the impression that despite the stated extensions of the Neural GPU model it remains unclear how useful it might be to practical problems.",
"role": "assistant"
}
]
| [
"Overall the paper has the feel of a status update by some of the best researchers in the field.",
"The paper is very clear, the observations are interesting, but the remarks are scattered and don*t add up to a quantum of progress in the study of what can be done with the Neural GPU model.",
"Minor remark on the use of the term RNN in Table 1: I found Table 1 confusing because several of the columns are for models that are technically RNNs, and use of RNNs for e.g.",
"translation and word2vec highlight that RNNs can be characterized in terms of the length of their input sequence, the length of their input, and the sizes (per step) of their input, output, and working memories.",
"Basic model question: How are inputs presented (each character 1-hot?)",
"and outputs retrieved when there are e.g.",
"512 “filters” in the model ?",
"If inputs and outputs are 1-hot encoded, and treated with the same filters as intermediate layers, then the intermediate activation functions should be interpretable as digits, and we should be able to interpret the filters as implementing a reliable e.g.",
"multiplication-with-carry algorithm.",
"Looking at the intermediate values may shed some light on why the usually-working models fail on e.g.",
"the pathological cases identified in Table 3.",
"The preliminary experiment on input alignment is interesting in two ways: the seeds for effective use of an attentional mechanism are there, but also, it suggests that the model is not presently dealing with general expression evaluation the way a correct algorithm should.",
"The remarks in the abstract about improving the memory efficiency of Neural GPU seem overblown -- the paragraph at the top of page 6 describes the improvements as using tf.while_loop instead of unrolling the graph, and using swap_memory to use host memory when GPU memory runs short.",
"These both seem like good practice, but not a remarkable improvement to the efficiency of the model, in fact it would likely slow down training and inference when memory does in fact fit in the GPU.",
"The point about trying many of random seeds to get convergence makes me wonder if the Neural GPU is worth its computational cost at all, when evaluated as means of learning algorithms that are already well understood (e.g.",
"parsing and evaluating S-exprs).",
"Consider spending all of the computational cycles that go into training one of these models (with the multiple seeds) on a traditional search through program space (e.g.",
"sampling lisp programs or something).",
"The notes on the curriculum strategies employed to get the presented results were interesting to read, as an indication of the lengths to which someone might have to go to train this sort of model, but it does leave this reviewer with the impression that despite the stated extensions of the Neural GPU model it remains unclear how useful it might be to practical problems."
] | {
"criticism": 5,
"example": 2,
"importance_and_relevance": 3,
"materials_and_methods": 15,
"praise": 5,
"presentation_and_reporting": 5,
"results_and_discussion": 3,
"suggestion_and_solution": 3,
"total": 19
} | 0.263158 | 0.105263 | 0.157895 | 0.789474 | 0.263158 | 0.263158 | 0.157895 | 0.157895 | 19 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.157895 | 1.589727 | 0.568167 |
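The memory trick this last review summarizes is concrete enough to sketch. Below is a minimal TensorFlow illustration of running a recurrent body under tf.while_loop with swap_memory=True instead of as an unrolled graph; step is our placeholder for one application of the model's update, not the Neural GPU's actual architecture.

```python
import tensorflow as tf

def run_recurrent(state, num_steps, step):
    """Apply `step` num_steps times without unrolling the graph."""
    def cond(t, s):
        return t < num_steps

    def body(t, s):
        return t + 1, step(s)

    _, final_state = tf.while_loop(
        cond, body, [tf.constant(0), state],
        parallel_iterations=1,  # the steps are sequential anyway
        swap_memory=True,       # spill activations to host RAM when GPU memory fills
    )
    return final_state
```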
Semi-Supervised Detection of Extreme Weather Events in Large Climate Datasets | The detection and identification of extreme weather events in large scale climate simulations is an important problem for risk management, informing governmental policy decisions and advancing our basic understanding of the climate system. Recent work has shown that fully supervised convolutional neural networks (CNNs) can yield acceptable accuracy for classifying well-known types of extreme weather events when large amounts of labeled data are available. However, there are many different types of spatially localized climate patterns of interest (including hurricanes, extra-tropical cyclones, weather fronts, blocking events, etc.) found in simulation data for which labeled data is not available at large scale for all simulations of interest. We present a multichannel spatiotemporal encoder-decoder CNN architecture for semi-supervised bounding box prediction and exploratory data analysis. This architecture is designed to fully model multi-channel simulation data, temporal dynamics and unlabelled data within a reconstruction and prediction framework so as to improve the detection of a wide range of extreme weather events. Our architecture can be viewed as a 3D convolutional autoencoder with an additional modified one-pass bounding box regression loss. We demonstrate that our approach is able to leverage temporal information and unlabelled data to improve localization of extreme weather events. Further, we explore the representations learned by our model in order to better understand this important data, and facilitate further work in understanding and mitigating the effects of climate change. | This work presents a novel 3D CNN architecture for climate event detection that combines an unsupervised auto-encoder reconstruction loss with YOLO like bounding box prediction. The approach is trained and evaluated on a large-scale, simulated climate dataset labeled by a costly heuristic approach called TECA. For the most part, the paper is nicely written (minor comments below) and addresses an important and well motivated problem. The authors provide sufficient model details to allow reproduction (although public code would be preferred). I find the experiments a bit unconvincing (see below) but appreciate the attention to model capacity (via number of parameter) when comparing the 2D and 3D model variants. I am concerned that the evaluation may be insufficient to assess the effectiveness of this method. An IoU threshold of 0.1 allows for many rather poor detections to count as true positives. If the goal of this work is to count the number of such instances this is defensible, but for localization this seems overly loose.Furthermore, the 3D CNN architecture (which is one of the the core novelties of this work compared to past efforts) does not seem capable of producing variable sized boxes (as noted in the last paragraph of page 7), which I imagine results in poorer performance at higher IoU thresholds (as many of the weather events appear small). The experiments also feel inconclusive about the effect of temporal modeling and semi-supervision. The temporal component does not seem to matter in the supervised settings (2D 51.45 mAP - 3D 51.00 mAP) but improves somewhat in the semi-supervised case (2D 51.11 mAP - 3D 52.92 mAP). Whereas the additional unlabeled data seems to hurt in the 2D case but improve results for the 3D model. Could the authors provide confidence intervals for these numbers? 
I would like to see further discussion of these trends especially with respect to the effect of the loss weights (alpha, beta, and gamma). I also note that it is not clear if both the 2D and 3D models were trained for equivalent time periods (seems like no from last paragraph of page 7). Could a plot of training and validation accuracy for each model be presented for comparison? Finally, is there any baseline approach the authors could report or compare too? Without one, it is difficult to evaluate the performance of the approach with respect to the difficulty of the problem. Preliminary Rating: I think this is an interesting paper that is well motivated but feel the experiments as presented do not seem adequate to support any conclusive trends. I would like to see the mAP trends across a wider range of IoU values and further discussion of training procedure, loss weight settings, and reasons for lack of bounding box variability in the 3D model (as stated above). Clarification: In the paper you say *While climate models are run on a 3D grid, with the vertical dimension corresponding to 30 levels; we only consider surface quantities (i.e. 2D data) in this study.* Could you elaborate on what the surface quantities correspond to? Is it the highest cloud level? Minor notes: Please provide years for Prabhat et al. references rather than a and b. Footnote in 4.2 could be inline text with similar space. 4.3 second paragraph the word table is not capitalized like elsewhere. 4.3 4th paragraph the word section is not capitalized like elsewhere. Edit: I appreciate the authors responding to my questions but still feel the relatively poor localization performance at stricter IoU thresholds fails to justify the complexity of the approach. I encourage the authors to continue pursuing this line of research. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSemi-Supervised Detection of Extreme Weather Events in Large Climate Datasets\nThe detection and identification of extreme weather events in large scale climate simulations is an important problem for risk management, informing governmental policy decisions and advancing our basic understanding of the climate system. Recent work has shown that fully supervised convolutional neural networks (CNNs) can yield acceptable accuracy for classifying well-known types of extreme weather events when large amounts of labeled data are available. However, there are many different types of spatially localized climate patterns of interest (including hurricanes, extra-tropical cyclones, weather fronts, blocking events, etc.) found in simulation data for which labeled data is not available at large scale for all simulations of interest. We present a multichannel spatiotemporal encoder-decoder CNN architecture for semi-supervised bounding box prediction and exploratory data analysis. This architecture is designed to fully model multi-channel simulation data, temporal dynamics and unlabelled data within a reconstruction and prediction framework so as to improve the detection of a wide range of extreme weather events. Our architecture can be viewed as a 3D convolutional autoencoder with an additional modified one-pass bounding box regression loss. We demonstrate that our approach is able to leverage temporal information and unlabelled data to improve localization of extreme weather events. Further, we explore the representations learned by our model in order to better understand this important data, and facilitate further work in understanding and mitigating the effects of climate change.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis work presents a novel 3D CNN architecture for climate event detection that combines an unsupervised auto-encoder reconstruction loss with YOLO like bounding box prediction. The approach is trained and evaluated on a large-scale, simulated climate dataset labeled by a costly heuristic approach called TECA. For the most part, the paper is nicely written (minor comments below) and addresses an important and well motivated problem. The authors provide sufficient model details to allow reproduction (although public code would be preferred). I find the experiments a bit unconvincing (see below) but appreciate the attention to model capacity (via number of parameter) when comparing the 2D and 3D model variants. I am concerned that the evaluation may be insufficient to assess the effectiveness of this method. An IoU threshold of 0.1 allows for many rather poor detections to count as true positives. If the goal of this work is to count the number of such instances this is defensible, but for localization this seems overly loose.Furthermore, the 3D CNN architecture (which is one of the the core novelties of this work compared to past efforts) does not seem capable of producing variable sized boxes (as noted in the last paragraph of page 7), which I imagine results in poorer performance at higher IoU thresholds (as many of the weather events appear small). The experiments also feel inconclusive about the effect of temporal modeling and semi-supervision. The temporal component does not seem to matter in the supervised settings (2D 51.45 mAP - 3D 51.00 mAP) but improves somewhat in the semi-supervised case (2D 51.11 mAP - 3D 52.92 mAP). Whereas the additional unlabeled data seems to hurt in the 2D case but improve results for the 3D model. Could the authors provide confidence intervals for these numbers? I would like to see further discussion of these trends especially with respect to the effect of the loss weights (alpha, beta, and gamma). I also note that it is not clear if both the 2D and 3D models were trained for equivalent time periods (seems like no from last paragraph of page 7). 
Could a plot of training and validation accuracy for each model be presented for comparison? Finally, is there any baseline approach the authors could report or compare too? Without one, it is difficult to evaluate the performance of the approach with respect to the difficulty of the problem. Preliminary Rating: I think this is an interesting paper that is well motivated but feel the experiments as presented do not seem adequate to support any conclusive trends. I would like to see the mAP trends across a wider range of IoU values and further discussion of training procedure, loss weight settings, and reasons for lack of bounding box variability in the 3D model (as stated above). Clarification: In the paper you say *While climate models are run on a 3D grid, with the vertical dimension corresponding to 30 levels; we only consider surface quantities (i.e. 2D data) in this study.* Could you elaborate on what the surface quantities correspond to? Is it the highest cloud level? Minor notes: Please provide years for Prabhat et al. references rather than a and b. Footnote in 4.2 could be inline text with similar space. 4.3 second paragraph the word table is not capitalized like elsewhere. 4.3 4th paragraph the word section is not capitalized like elsewhere. Edit: I appreciate the authors responding to my questions but still feel the relatively poor localization performance at stricter IoU thresholds fails to justify the complexity of the approach. I encourage the authors to continue pursuing this line of research.",
"role": "assistant"
}
]
| [
"This work presents a novel 3D CNN architecture for climate event detection that combines an unsupervised auto-encoder reconstruction loss with YOLO like bounding box prediction.",
"The approach is trained and evaluated on a large-scale, simulated climate dataset labeled by a costly heuristic approach called TECA.",
"For the most part, the paper is nicely written (minor comments below) and addresses an important and well motivated problem.",
"The authors provide sufficient model details to allow reproduction (although public code would be preferred).",
"I find the experiments a bit unconvincing (see below) but appreciate the attention to model capacity (via number of parameter) when comparing the 2D and 3D model variants.",
"I am concerned that the evaluation may be insufficient to assess the effectiveness of this method.",
"An IoU threshold of 0.1 allows for many rather poor detections to count as true positives.",
"If the goal of this work is to count the number of such instances this is defensible, but for localization this seems overly loose.Furthermore, the 3D CNN architecture (which is one of the the core novelties of this work compared to past efforts) does not seem capable of producing variable sized boxes (as noted in the last paragraph of page 7), which I imagine results in poorer performance at higher IoU thresholds (as many of the weather events appear small).",
"The experiments also feel inconclusive about the effect of temporal modeling and semi-supervision.",
"The temporal component does not seem to matter in the supervised settings (2D 51.45 mAP - 3D 51.00 mAP) but improves somewhat in the semi-supervised case (2D 51.11 mAP - 3D 52.92 mAP).",
"Whereas the additional unlabeled data seems to hurt in the 2D case but improve results for the 3D model.",
"Could the authors provide confidence intervals for these numbers?",
"I would like to see further discussion of these trends especially with respect to the effect of the loss weights (alpha, beta, and gamma).",
"I also note that it is not clear if both the 2D and 3D models were trained for equivalent time periods (seems like no from last paragraph of page 7).",
"Could a plot of training and validation accuracy for each model be presented for comparison?",
"Finally, is there any baseline approach the authors could report or compare too?",
"Without one, it is difficult to evaluate the performance of the approach with respect to the difficulty of the problem.",
"Preliminary Rating: I think this is an interesting paper that is well motivated but feel the experiments as presented do not seem adequate to support any conclusive trends.",
"I would like to see the mAP trends across a wider range of IoU values and further discussion of training procedure, loss weight settings, and reasons for lack of bounding box variability in the 3D model (as stated above).",
"Clarification: In the paper you say *While climate models are run on a 3D grid, with the vertical dimension corresponding to 30 levels; we only consider surface quantities (i.e.",
"2D data) in this study.",
"* Could you elaborate on what the surface quantities correspond to?",
"Is it the highest cloud level?",
"Minor notes: Please provide years for Prabhat et al.",
"references rather than a and b. Footnote in 4.2 could be inline text with similar space.",
"4.3 second paragraph the word table is not capitalized like elsewhere.",
"4.3 4th paragraph the word section is not capitalized like elsewhere.",
"Edit: I appreciate the authors responding to my questions but still feel the relatively poor localization performance at stricter IoU thresholds fails to justify the complexity of the approach.",
"I encourage the authors to continue pursuing this line of research."
] | {
"criticism": 8,
"example": 4,
"importance_and_relevance": 5,
"materials_and_methods": 22,
"praise": 4,
"presentation_and_reporting": 5,
"results_and_discussion": 8,
"suggestion_and_solution": 8,
"total": 29
} | 0.275862 | 0.137931 | 0.172414 | 0.758621 | 0.137931 | 0.172414 | 0.275862 | 0.275862 | 29 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.206897 | -1.833405 | 4.040302 |
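The IoU concern in the record above is easy to make concrete. Below is a minimal sketch (the box coordinates are hypothetical, not taken from the paper) showing that a detection with little overlap still clears a 0.1 threshold while failing the stricter 0.5 commonly used in object detection.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

ground_truth = (0, 0, 10, 10)
shifted = (5, 5, 15, 15)  # detection displaced by half the event's size
print(iou(ground_truth, shifted))  # ~0.143: a true positive at IoU 0.1, a miss at 0.5
```

A detection whose corner is off by half the event's width therefore still counts as correct under the paper's evaluation, which is the sense in which the reviewer calls the threshold loose.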
Sample Importance in Training Deep Neural Networks | The contribution of each sample during model training varies across training iterations and the model*s parameters. We define the concept of sample importance as the change in parameters induced by a sample. In this paper, we explored the sample importance in training deep neural networks using stochastic gradient descent. We found that *easy* samples -- samples that are correctly and confidently classified at the end of the training -- shape parameters closer to the output, while the *hard* samples impact parameters closer to the input to the network. Further, *easy* samples are relevant in the early training stages, and *hard* in the late training stage. Further, we show that constructing batches which contain samples of comparable difficulties tends to be a poor strategy compared to maintaining a mix of both hard and easy samples in all of the batches. Interestingly, this contradicts some of the results on curriculum learning which suggest that ordering training examples in terms of difficulty can lead to better performance. | The paper proposes a new criterion (sample importance) to study the impact of samples during the training of deep neural networks. This criterion is not clearly defined (the term phi^t_{i,j} is never defined, only phi^t_i is defined). Despite the unclear definition, it is understood that sample importance is the squared l2 norm of the gradient for a sample i and at time t strangely scaled by the squared learning rate (the learning rate should have nothing to do with the importance of a sample in this context). The paper presents experiments on the well known MNIST and CIFAR datasets with correspondingly appropriate network architectures and choice of hyper-parameters and initialisations. The size of the hidden layers is a bit small for Mnist and very small for CIFAR (this could explain the very poor performance in figure 6: 50% error on CIFAR). The study of the evolution of sample importance during training depending on layers seems to lead to trivial conclusions - “the overall sample importance is different under different epochs” => yes the norm of the gradient is expected to vary - “Output layer always has the largest average sample importance per parameter, and its contribution reaches the maximum in the early training stage and then drops” => 1. yes since the gradient flows backwards, the gradient is expected to be stronger for the output layer and it is expected to become more diffuse as it propagates to lower layers which are not stable. As learning progresses one would expect the output layer to have progressively smaller gradients. 2. the norm of the gradient depends on the scaling of the variables. The question of Figure 4 is absurd “Is Sample Importance the same as Negative log-likelihood of a sample?”. Of course not. The results are very bad on CIFAR which discredits the applicability of those results. On Mnist performance is not readable (figure 7): Error rate should only be presented between 0 and 10 or 20%. Despite these important issues (there are others), the paper manages to raise some interesting things: the so-called easy samples and hard samples do seem to correspond (although the study is very preliminary in this regard) to what would intuitively be considered easy (the most representative/canonical samples) and hard (edge cases) samples. Also the experiments are very well presented. 
| iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSample Importance in Training Deep Neural Networks\nThe contribution of each sample during model training varies across training iterations and the model*s parameters. We define the concept of sample importance as the change in parameters induced by a sample. In this paper, we explored the sample importance in training deep neural networks using stochastic gradient descent. We found that *easy* samples -- samples that are correctly and confidently classified at the end of the training -- shape parameters closer to the output, while the *hard* samples impact parameters closer to the input to the network. Further, *easy* samples are relevant in the early training stages, and *hard* in the late training stage. Further, we show that constructing batches which contain samples of comparable difficulties tends to be a poor strategy compared to maintaining a mix of both hard and easy samples in all of the batches. Interestingly, this contradicts some of the results on curriculum learning which suggest that ordering training examples in terms of difficulty can lead to better performance.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper proposes a new criterion (sample importance) to study the impact of samples during the training of deep neural networks. This criterion is not clearly defined (the term phi^t_{i,j} is never defined, only phi^t_i is defined; Despite the unclear definition, it is understood that sample importance is the squared l2 norm of the gradient for a sample i and at time t strangely scaled by the squared learning rate (the learning rate should have nothing to do with the importance of a sample in this context). The paper presents experiments on the well known MNIST and CIFAR datasets with correspondingly appropriate network architectures and choice of hyper-parameters and initialisations. The size of the hidden layers is a bit small for Mnist and very small for CIFAR (this could explain the very poor performance in figure 6: 50% error on CIFAR) The study of the evolution of sample importance during training depending on layers seems to lead to trivial conclusions - “the overall sample importance is different under different epochs” => yes the norm of the gradient is expected to vary - “Output layer always has the largest average sample importance per parameter, and its contribution reaches the maximum in the early training stage and then drops” => 1. yes since the gradient flows backwards, the gradient is expected to be stronger for the output layer and it is expected to become more diffuse as it propagates to lower layers which are not stable. As learning progresses one would expect the output layer to have progressively smaller gradients. 2. the norm of the gradient depends on the scaling of the variables The question of Figure 4 is absurd “Is Sample Importance the same as Negative log-likelihood of a sample?”. Of course not. The results are very bad on CIFAR which discredits the applicability of those results. 
On Mnist performance is not readable (figure 7): Error rate should only be presented between 0 and 10 or 20% Despite these important issues (there are others), the paper manages to raises some interesting things: the so-called easy samples and hard samples do seem to correspond (although the study is very preliminary in this regard) to what would intuitively be considered easy (the most representative/canonical samples) and hard (edge cases) samples. Also the experiments are very well presented.",
"role": "assistant"
}
]
| [
"The paper proposes a new criterion (sample importance) to study the impact of samples during the training of deep neural networks.",
"This criterion is not clearly defined (the term phi^t_{i,j} is never defined, only phi^t_i is defined; Despite the unclear definition, it is understood that sample importance is the squared l2 norm of the gradient for a sample i and at time t strangely scaled by the squared learning rate (the learning rate should have nothing to do with the importance of a sample in this context).",
"The paper presents experiments on the well known MNIST and CIFAR datasets with correspondingly appropriate network architectures and choice of hyper-parameters and initialisations.",
"The size of the hidden layers is a bit small for Mnist and very small for CIFAR (this could explain the very poor performance in figure 6: 50% error on CIFAR) The study of the evolution of sample importance during training depending on layers seems to lead to trivial conclusions - “the overall sample importance is different under different epochs” => yes the norm of the gradient is expected to vary - “Output layer always has the largest average sample importance per parameter, and its contribution reaches the maximum in the early training stage and then drops” => 1. yes since the gradient flows backwards, the gradient is expected to be stronger for the output layer and it is expected to become more diffuse as it propagates to lower layers which are not stable.",
"As learning progresses one would expect the output layer to have progressively smaller gradients.",
"2. the norm of the gradient depends on the scaling of the variables The question of Figure 4 is absurd “Is Sample Importance the same as Negative log-likelihood of a sample?”.",
"Of course not.",
"The results are very bad on CIFAR which discredits the applicability of those results.",
"On Mnist performance is not readable (figure 7): Error rate should only be presented between 0 and 10 or 20% Despite these important issues (there are others), the paper manages to raises some interesting things: the so-called easy samples and hard samples do seem to correspond (although the study is very preliminary in this regard) to what would intuitively be considered easy (the most representative/canonical samples) and hard (edge cases) samples.",
"Also the experiments are very well presented."
] | {
"criticism": 3,
"example": 3,
"importance_and_relevance": 2,
"materials_and_methods": 8,
"praise": 3,
"presentation_and_reporting": 3,
"results_and_discussion": 5,
"suggestion_and_solution": 1,
"total": 10
} | 0.3 | 0.3 | 0.2 | 0.8 | 0.3 | 0.3 | 0.5 | 0.1 | 10 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.8 | 2.657958 | 0.142042 |
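The sample-importance quantity as this review reads it, the squared l2 norm of a sample's gradient scaled by the squared learning rate, can be written down in a few lines. The sketch below uses logistic regression on toy data as a stand-in for the deep network; the data, learning rate, and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))        # 8 samples, 3 features
y = rng.integers(0, 2, size=8)     # binary labels
w = np.zeros(3)                    # current parameters
lr = 0.1                           # learning rate eta

def per_sample_grad(w, x, t):
    """Gradient of the per-sample negative log-likelihood."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - t) * x

# phi_i^t = eta^2 * ||grad_i||^2, evaluated at the current parameters.
phi = np.array([lr**2 * np.sum(per_sample_grad(w, x, t) ** 2) for x, t in zip(X, y)])
print(phi)  # larger values = samples that move the parameters more
```

Written this way, the reviewer's objection is visible in the code: the `lr**2` factor rescales every sample identically, so it changes the absolute values of `phi` but not the ranking of samples by importance.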
Sample Importance in Training Deep Neural Networks | The contribution of each sample during model training varies across training iterations and the model*s parameters. We define the concept of sample importance as the change in parameters induced by a sample. In this paper, we explored the sample importance in training deep neural networks using stochastic gradient descent. We found that *easy* samples -- samples that are correctly and confidently classified at the end of the training -- shape parameters closer to the output, while the *hard* samples impact parameters closer to the input to the network. Further, *easy* samples are relevant in the early training stages, and *hard* in the late training stage. Further, we show that constructing batches which contain samples of comparable difficulties tends to be a poor strategy compared to maintaining a mix of both hard and easy samples in all of the batches. Interestingly, this contradicts some of the results on curriculum learning which suggest that ordering training examples in terms of difficulty can lead to better performance. | This paper examines the so-called *Sample Importance* of each sample of a training data set, and its effect on the overall learning process. The paper presents empirical results showing that different training cases induce bigger gradients at different stages of learning and different layers. The paper shows some interesting results contrary to the common curriculum learning ideas of using easy training samples first. However, it is unclear how one should define *Easy* training cases. In addition, I find the experiments demonstrating that ordering by either NLL or SI is worse than mixed or random batch construction to be insightful. Possible Improvements: It would be nice to factor out the magnitudes of the gradients from the contribution of *sample importance*. Higher gradient (as a function of a particular weight vector) can be affected by weight/initialization, thereby introducing noise to the model. It would also be interesting if improvements based on *Sample importance* could be made to the batch selection algorithm to beat the baseline of random batch selection. Overall this paper is a good paper with various experiments examining how various samples in SGD influence the various aspects of training. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSample Importance in Training Deep Neural Networks\nThe contribution of each sample during model training varies across training iterations and the model*s parameters. We define the concept of sample importance as the change in parameters induced by a sample. In this paper, we explored the sample importance in training deep neural networks using stochastic gradient descent. We found that *easy* samples -- samples that are correctly and confidently classified at the end of the training -- shape parameters closer to the output, while the *hard* samples impact parameters closer to the input to the network. Further, *easy* samples are relevant in the early training stages, and *hard* in the late training stage. Further, we show that constructing batches which contain samples of comparable difficulties tends to be a poor strategy compared to maintaining a mix of both hard and easy samples in all of the batches. Interestingly, this contradicts some of the results on curriculum learning which suggest that ordering training examples in terms of difficulty can lead to better performance.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper examines the so called *Sample Importance* of each sample of a training data set, and its effect to the overall learning process. The paper shows empirical results that shows different training cases induces bigger gradients at different stages of learning and different layers. The paper shows some interesting results contrary to the common curriculum learning ideas of using easy training samples first. However, it is unclear how one should define *Easy* training cases. In addition, the experiments demonstrating ordering either NLL or SI is worse than mixed or random batch construction to be insightful. Possible Improvements: It would be nice to factor out the magnitudes of the gradients to the contribution of *sample importance*. Higher gradient (as a function of a particular weight vector) can be affected weight/initialization, thereby introducing noise to the model. It would also be interesting if improvements based on *Sample importance* could be made to the batch selection algorithm to beat the baseline of random batch selection. Overall this paper is a good paper with various experiments examining how various samples in SGD influences the various aspect of training.",
"role": "assistant"
}
]
| [
"This paper examines the so called *Sample Importance* of each sample of a training data set, and its effect to the overall learning process.",
"The paper shows empirical results that shows different training cases induces bigger gradients at different stages of learning and different layers.",
"The paper shows some interesting results contrary to the common curriculum learning ideas of using easy training samples first.",
"However, it is unclear how one should define *Easy* training cases.",
"In addition, the experiments demonstrating ordering either NLL or SI is worse than mixed or random batch construction to be insightful.",
"Possible Improvements: It would be nice to factor out the magnitudes of the gradients to the contribution of *sample importance*.",
"Higher gradient (as a function of a particular weight vector) can be affected weight/initialization, thereby introducing noise to the model.",
"It would also be interesting if improvements based on *Sample importance* could be made to the batch selection algorithm to beat the baseline of random batch selection.",
"Overall this paper is a good paper with various experiments examining how various samples in SGD influences the various aspect of training."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 8,
"praise": 2,
"presentation_and_reporting": 2,
"results_and_discussion": 4,
"suggestion_and_solution": 2,
"total": 9
} | 0.222222 | 0 | 0.111111 | 0.888889 | 0.222222 | 0.222222 | 0.444444 | 0.222222 | 9 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.333333 | 2.080814 | 0.252519 |
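The batch-construction contrast at the heart of both reviews of this paper can also be sketched in a few lines. Below, a per-sample difficulty score (a stand-in for NLL or sample importance; the values are hypothetical) is used to build either homogeneous or mixed batches.

```python
import numpy as np

rng = np.random.default_rng(1)
difficulty = rng.exponential(size=12)   # stand-in for per-sample NLL
batch_size = 4

# Homogeneous batches: sort by difficulty, then slice (all-easy first).
order = np.argsort(difficulty)
homogeneous = [order[i:i + batch_size] for i in range(0, len(order), batch_size)]

# Mixed batches: a random permutation spreads difficulties across batches.
perm = rng.permutation(len(difficulty))
mixed = [perm[i:i + batch_size] for i in range(0, len(perm), batch_size)]

for name, batches in (("homogeneous", homogeneous), ("mixed", mixed)):
    spread = [round(float(difficulty[b].std()), 2) for b in batches]
    print(name, spread)   # mixed batches show a wider spread of difficulties
```

The paper's finding, as both reviews summarize it, is that the second strategy trains better, contrary to the curriculum-learning intuition behind the first.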
Learning Disentangled Representations in Deep Generative Models | Deep generative models provide a powerful and flexible means to learn complex distributions over data by incorporating neural networks into latent-variable models. Variational approaches to training such models introduce a probabilistic encoder that casts data, typically unsupervised, into an entangled and unstructured representation space. While unsupervised learning is often desirable, sometimes even necessary, when we lack prior knowledge about what to represent, being able to incorporate domain knowledge in characterising certain aspects of variation in the data can often help learn better disentangled representations. Here, we introduce a new formulation of semi-supervised learning in variational autoencoders that allows precisely this. It permits flexible specification of probabilistic encoders as directed graphical models via a stochastic computation graph, containing both continuous and discrete latent variables, with conditional distributions parametrised by neural networks. We demonstrate how the provision of structure, along with a few labelled examples indicating plausible values for some components of the latent space, can help quickly learn disentangled representations. We then evaluate its ability to do so, both qualitatively by exploring its generative capacity, and quantitatively by using the disentangled representation to perform classification, on a variety of models and datasets. | This paper investigates deep generative models with multiple stochastic nodes and gives them meaning by semi-supervision. From a methodological point of view, there is nothing fundamentally novel (it is very similar to the semi-supervised work of Kingma et al; although this work has sometimes more than two latent nodes, it is not a complex extension). There is a fairly classical auxiliary variable trick used to make sure the inference network for y is trained over all data points (by supposing y is in fact a latent variable with an observation \tilde y; the observation is y if y is observed, or uninformative for unobserved y). Alternatively, one can separate the inference used to learn the generative model (which throws out inference over y if it is observed), from an inference used to *exercise* the model (approximate the complex p(y|x) in the model by a simpler q(y|x) - effectively inferring the target p(y|x) for the data where only x is collected). Results are strong, although on simple datasets. Overall this is a well written, interesting paper, but lacking in terms of methodological advances. Minor: - I feel the title is a bit too general for the content of the paper. I personally don*t agree with the strong contrast made between deep generative models and graphical models (deep generative models are graphical models, but they are more typically learned and un-interpretable than classical graphical models; and having multiple stochastic variables is not exclusive to graphical models, see DRAW, Deep Kalman Filter, Recurrent VAE, etc.). The word *structure* is a bit problematic; here, the paper seems more concerned with disentangling and semanticizing the latent representation of a generative model by supervision. It is debatable whether the models themselves have structure. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. 
I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning Disentangled Representations in Deep Generative Models\nDeep generative models provide a powerful and flexible means to learn complex distributions over data by incorporating neural networks into latent-variable models. Variational approaches to training such models introduce a probabilistic encoder that casts data, typically unsupervised, into an entangled and unstructured representation space. While unsupervised learning is often desirable, sometimes even necessary, when we lack prior knowledge about what to represent, being able to incorporate domain knowledge in characterising certain aspects of variation in the data can often help learn better disentangled representations. Here, we introduce a new formulation of semi-supervised learning in variational autoencoders that allows precisely this. It permits flexible specification of probabilistic encoders as directed graphical models via a stochastic computation graph, containing both continuous and discrete latent variables, with conditional distributions parametrised by neural networks. We demonstrate how the provision of structure, along with a few labelled examples indicating plausible values for some components of the latent space, can help quickly learn disentangled representations. We then evaluate its ability to do so, both qualitatively by exploring its generative capacity, and quantitatively by using the disentangled representation to perform classification, on a variety of models and datasets.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper investigates deep generative models with multiple stochastic nodes and gives them meaning by semi-supervision. From a methodological point of view, there is nothing fundamentally novel (it is very similar to the semi-supervised work of Kingma et al; although this work has sometimes more than two latent nodes, it is not a complex extension). There is a fairly classical auxiliary variable trick used to make sure the inference network for y is trained over all data points (by supposing y is in fact is a latent variable with an observation \tilde y; the observation is y if y is observed, or uninformative for unobserved y). Alternatively, one can separate the inference used to learn the generative model (which throws out inference over y if it is observed), from an inference used to *exercise* the model (approximate the complex p(y|x) in the model by a simpler q(y|x) - effectively inferring the target p(y|x) for the data where only x is collected). Results are strong, although on simple datasets. Overall this is a well written, interesting paper, but lacking in terms of methodological advances. Minor: - I feel the title is a bit too general for the content of the paper. I personally don*t agree with the strong contrast made between deep generative models and graphical models (deep generative models are graphical models, but they are more typically learned and un-interpretable than classical graphical models; and having multiple stochastic variables is not exclusive to graphical models, see DRAW, Deep Kalman Filter, Recurrent VAE, etc.). The word *structure* is a bit problematic; here, the paper seems more concerned with disentangling and semanticizing the latent representation of a generative model by supervision. It is debatable whether the models themselves have structure.",
"role": "assistant"
}
]
| [
"This paper investigates deep generative models with multiple stochastic nodes and gives them meaning by semi-supervision.",
"From a methodological point of view, there is nothing fundamentally novel (it is very similar to the semi-supervised work of Kingma et al; although this work has sometimes more than two latent nodes, it is not a complex extension).",
"There is a fairly classical auxiliary variable trick used to make sure the inference network for y is trained over all data points (by supposing y is in fact is a latent variable with an observation \tilde y; the observation is y if y is observed, or uninformative for unobserved y).",
"Alternatively, one can separate the inference used to learn the generative model (which throws out inference over y if it is observed), from an inference used to *exercise* the model (approximate the complex p(y|x) in the model by a simpler q(y|x) - effectively inferring the target p(y|x) for the data where only x is collected).",
"Results are strong, although on simple datasets.",
"Overall this is a well written, interesting paper, but lacking in terms of methodological advances.",
"Minor: - I feel the title is a bit too general for the content of the paper.",
"I personally don*t agree with the strong contrast made between deep generative models and graphical models (deep generative models are graphical models, but they are more typically learned and un-interpretable than classical graphical models; and having multiple stochastic variables is not exclusive to graphical models, see DRAW, Deep Kalman Filter, Recurrent VAE, etc.).",
"The word *structure* is a bit problematic; here, the paper seems more concerned with disentangling and semanticizing the latent representation of a generative model by supervision.",
"It is debatable whether the models themselves have structure."
] | {
"criticism": 5,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 9,
"praise": 2,
"presentation_and_reporting": 3,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 10
} | 0.5 | 0 | 0.2 | 0.9 | 0.2 | 0.3 | 0.1 | 0.1 | 10 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.3 | 2.157958 | 0.142042 |
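The auxiliary-variable trick summarized in this review, treating the label y as always latent with an observation that is informative only when a label exists, can be illustrated with plain arrays. In the sketch below the choice of likelihood p(y_tilde | y) and the class probabilities are hypothetical, not the paper's exact construction.

```python
import numpy as np

K = 3                                  # number of classes

def obs_likelihood(y_tilde):
    """p(y_tilde | y) for each of the K values of y."""
    if y_tilde is None:                # unlabelled case: uninformative
        return np.full(K, 1.0 / K)
    lik = np.full(K, 1e-6)             # labelled case: (near) delta at y_tilde
    lik[y_tilde] = 1.0
    return lik

q_y = np.array([0.2, 0.5, 0.3])        # inference network's q(y | x)

for y_tilde in (1, None):
    post = q_y * obs_likelihood(y_tilde)
    post /= post.sum()
    print(y_tilde, np.round(post, 3))
# With y_tilde = 1 the posterior collapses onto class 1; with no label it
# reduces to q(y | x), so the same inference network is trained on all data.
```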
Low-rank passthrough neural networks | Deep learning consists in training neural networks to perform computations that sequentially unfold in many steps over a time dimension or an intrinsic depth dimension. For large depths, this is usually accomplished by specialized network architectures that are designed to mitigate the vanishing gradient problem, e.g. LSTMs, GRUs, Highway Networks and Deep Residual Networks, which are based on a single structural principle: the state passthrough. We observe that these *Passthrough Networks* architectures enable the decoupling of the network state size from the number of parameters of the network, a possibility that is exploited in some recent works but not thoroughly explored. In this work we propose simple, yet effective, low-rank and low-rank plus diagonal matrix parametrizations for Passthrough Networks which exploit this decoupling property, reducing the data complexity and memory requirements of the network while preserving its memory capacity. We present competitive experimental results on several tasks, including a near state of the art result on sequential randomly-permuted MNIST classification, a hard task on natural data. | The authors study the use of low-rank approximation to the matrix-multiply in RNNs. This reduces the number of parameters by a large factor, and with a diagonal addition (called low-rank plus diagonal) it is shown to work as well as a fully-parametrized network on a number of tasks. The paper is solid, the only weakness being some claims about conceptual unification (e.g., the first line of the conclusion -- *We presented a framework that unifies the description various types of recurrent and feed-forward neural networks as passthrough neural networks.* -- claiming this framework as a contribution of this paper is untrue, the general framework is well known in the community and RNNs have been presented in this way before.) Aside from the above small point, the true contribution is in making low-rank RNNs work, the results are generally as good as fully-parametrized networks. They are hardly better though, which makes it unclear why low-rank networks should be used. The contribution is thus not very strong in terms of results, but even achieving the same results with fewer parameters is not easy and the studies were well-executed and explained. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLow-rank passthrough neural networks\nDeep learning consists in training neural networks to perform computations that sequentially unfold in many steps over a time dimension or an intrinsic depth dimension. For large depths, this is usually accomplished by specialized network architectures that are designed to mitigate the vanishing gradient problem, e.g. LSTMs, GRUs, Highway Networks and Deep Residual Networks, which are based on a single structural principle: the state passthrough. We observe that these *Passthrough Networks* architectures enable the decoupling of the network state size from the number of parameters of the network, a possibility that is exploited in some recent works but not thoroughly explored. In this work we propose simple, yet effective, low-rank and low-rank plus diagonal matrix parametrizations for Passthrough Networks which exploit this decoupling property, reducing the data complexity and memory requirements of the network while preserving its memory capacity. We present competitive experimental results on several tasks, including a near state of the art result on sequential randomly-permuted MNIST classification, a hard task on natural data.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors study the use of low-rank approximation to the matrix-multiply in RNNs. This reduces the number of parameters by a large factor, and with a diagonal addition (called low-rank plus diagonal) it is shown to work as well as a fully-parametrized network on a number of tasks. The paper is solid, the only weakness being some claims about conceptual unification (e.g., the first line of the conclusion -- *We presented a framework that unifies the description various types of recurrent and feed-forward neural networks as passthrough neural networks.* -- claiming this framework as a contribution of this paper is untrue, the general framework is well known in the community and RNNs have been presented in this way before.) Aside from the above small point, the true contribution is in making low-rank RNNs work, the results are generally as good as fully-parametrized networks. They are hardly better though, which makes it unclear why low-rank networks should be used. The contribution is thus not very strong in terms of results, but even achieving the same results with fewer parameters is not easy and the studies were well-executed and explained.",
"role": "assistant"
}
]
| [
"The authors study the use of low-rank approximation to the matrix-multiply in RNNs.",
"This reduces the number of parameters by a large factor, and with a diagonal addition (called low-rank plus diagonal) it is shown to work as well as a fully-parametrized network on a number of tasks.",
"The paper is solid, the only weakness being some claims about conceptual unification (e.g., the first line of the conclusion -- *We presented a framework that unifies the description various types of recurrent and feed-forward neural networks as passthrough neural networks.",
"* -- claiming this framework as a contribution of this paper is untrue, the general framework is well known in the community and RNNs have been presented in this way before.)",
"Aside from the above small point, the true contribution is in making low-rank RNNs work, the results are generally as good as fully-parametrized networks.",
"They are hardly better though, which makes it unclear why low-rank networks should be used.",
"The contribution is thus not very strong in terms of results, but even achieving the same results with fewer parameters is not easy and the studies were well-executed and explained."
] | {
"criticism": 4,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 7,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 3,
"suggestion_and_solution": 0,
"total": 7
} | 0.571429 | 0 | 0.285714 | 1 | 0.285714 | 0.142857 | 0.428571 | 0 | 7 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 2.714286 | 2.146118 | 0.568167 |
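The low-rank-plus-diagonal parametrization this review credits as the paper's real contribution is straightforward to write down. The sketch below (sizes are hypothetical) replaces a full d x d state-transition matrix with U V + diag(c) and compares parameter counts.

```python
import numpy as np

d, r = 512, 32                          # state size and rank (hypothetical)
rng = np.random.default_rng(2)

U = rng.normal(scale=0.02, size=(d, r))
V = rng.normal(scale=0.02, size=(r, d))
c = np.ones(d)                          # the diagonal term

def lrpd_matvec(h):
    """(U V + diag(c)) h, without materializing the d x d matrix."""
    return U @ (V @ h) + c * h

h = rng.normal(size=d)
out = lrpd_matvec(h)

full, lrpd = d * d, 2 * d * r + d
print(full, lrpd, f"{lrpd / full:.1%}")  # 262144 vs 33280 parameters (~12.7%)
```

The decoupling highlighted in the abstract shows up directly here: the state size d and the parameter count 2dr + d can be scaled independently through the rank r.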
Inference and Introspection in Deep Generative Models of Sparse Data | Deep generative models such as deep latent Gaussian models (DLGMs) are powerful and popular density estimators. However, they have been applied almost exclusively to dense data such as images; DLGMs are rarely applied to sparse, high-dimensional integer data such as word counts or product ratings. One reason is that the standard training procedures find poor local optima when applied to such data. We propose two techniques that alleviate this problem, significantly improving our ability to fit DLGMs to sparse, high-dimensional data. Having fit these models, we are faced with another challenge: how to use and interpret the representation that we have learned? To that end, we propose a method that extracts distributed representations of features via a simple linearization of the model. | The paper claims improved inference for density estimation of sparse data (here text documents) using deep generative Gaussian models (variational auto-encoders), and a method for deriving word embeddings from the model*s generative parameters that allows for a degree of interpretability similar to that of Bayesian generative topic models. To discuss the contributions I will quickly review the generative story in the paper: first a K-dimensional latent representation is sampled from a multivariate Gaussian, then an MLP (with parameters \theta) predicts unnormalised potentials over a vocabulary of V words, the potentials are exponentiated and normalised to make the parameters of a multinomial from where word observations are repeatedly sampled to make a document. Here intractable inference is replaced by the VAE formulation where an inference network (with parameters phi) independently predicts for each document the mean and variance of a normal distribution (amenable to reparameterised gradient computation). The first, and rather trivial, contribution is to use tf-idf features to inject first order statistics (a global information) into local observations. The authors claim that this is particularly helpful in the case of sparse data such as text. The second contribution is more interesting. In optimising generative parameters (\theta) and variational parameters (phi), the authors turn to a treatment which is reminiscent of the original SVI procedure. That is, they see the variational parameters phi as *global* variational parameters, and the predicted mean mu(x) and covariance Sigma(x) of each observation x are treated as *local* variational parameters. In the original VAE, local parameters are not directly optimised; instead they are indirectly optimised via optimisation of the global parameters utilised in their prediction (shared MLP parameters). Here, local parameters are optimised holding generative parameters fixed (line 3 of Algorithm 1). The optimised local parameters are then used in the gradient step of the generative parameters (line 4 of Algorithm 1). Finally, global variational parameters are also updated (line 5). Whereas indeed other authors have proposed to optimise local parameters, I think that deriving this procedure from the more familiar SVI makes the contribution less of a trick and easier to relate to. Some things aren*t entirely clear to me. I think it would have been nice if the authors had shown the functional form of the gradient used in step 3 of Algorithm 1. 
The gradient step for global variational parameters (line 5 of Algorithm 1) uses the very first prediction of local parameters (thus ignoring the optimisation in step 3), this is unclear to me. Perhaps I am missing a fundamental reason why that has to be the case (either way, please clarify). The authors argue that this optimisation turns out helpful to modelling sparse data because there is evidence that the generative model p_\theta(x|z) suffers from poor initialisation. Please, discuss why you expect the initialisation problem to be worse in the case of sparse data. The final contribution is a neat procedure to derive word embeddings from the generative model parameters. These embeddings are then used to interpret what the model has learnt. Interestingly, these word embeddings are context-sensitive once the latent variable models an entire document. About Figures 2a and 2b: the caption says that solid lines indicate validation perplexity for M=1 (no optimisation of local parameters) and dashed lines indicate M=100 (100 iterations of optimisation of local parameters), but the legends of the figures suggest a different reading. If I interpret the figures based on the caption, then it seems that indeed deeper networks exposed to more data benefit from optimisation of local parameters. Are the authors pretty sure that in Figure 2b models with M=1 have reached a plateau (so that longer training would not allow them to catch up with M=100 curves)? As the authors explain in the caption, x-axis is not comparable on running time, thus the question. The analysis of singular values seems like an interesting way to investigate how the model is using its capacity. However, I can barely interpret Figures 2c and 2d, I think the authors could have walked readers through them. As for the word embedding I am missing an evaluation on a predictive task. Also, while illustrative, Table 2b is barely reproducible. The text reads 'we create a document comprising a subset of words in the context’s Wikipedia page.' which is rather vague. I wonder whether this construct needs to be carefully designed in order to get Table 2b. In sum, I have a feeling that the inference technique and the embedding technique are both useful, but perhaps they should have been presented separately so that each could have been explored in greater depth. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nInference and Introspection in Deep Generative Models of Sparse Data\nDeep generative models such as deep latent Gaussian models (DLGMs) are powerful and popular density estimators. However, they have been applied almost exclusively to dense data such as images; DLGMs are rarely applied to sparse, high-dimensional integer data such as word counts or product ratings. One reason is that the standard training procedures find poor local optima when applied to such data. We propose two techniques that alleviate this problem, significantly improving our ability to fit DLGMs to sparse, high-dimensional data. Having fit these models, we are faced with another challenge: how to use and interpret the representation that we have learned? To that end, we propose a method that extracts distributed representations of features via a simple linearization of the model.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper claims improved inference for density estimation of sparse data (here text documents) using deep generative Gaussian models (variational auto-encoders), and a method for deriving word embeddings from the model*s generative parameters that allows for a degree of interpretability similar to that of Bayesian generative topic models. To discuss the contributions I will quickly review the generative story in the paper: first a K-dimensional latent representation is sampled from a multivariate Gaussian, then an MLP (with parameters \theta) predicts unnormalised potentials over a vocabulary of V words, the potentials are exponentiated and normalised to make the parameters of a multinomial from where word observations are repeatedly sampled to make a document. Here intractable inference is replaced by the VAE formulation where an inference network (with parameters phi) independently predicts for each document the mean and variance of a normal distribution (amenable to reparameterised gradient computation). The first, and rather trivial, contribution is to use tf-idf features to inject first order statistics (a global information) into local observations. The authors claim that this is particularly helpful in the case of sparse data such as text. The second contribution is more interesting. In optimising generative parameters (\theta) and variational parameters (phi), the authors turn to a treatment which is reminiscent of the original SVI procedure. That is, they see the variational parameters phi as *global* variational parameters, and the predicted mean mu(x) and covariance Sigma(x) of each observation x are treated as *local* variational parameters. In the original VAE, local parameters are not directly optimised, instead they are indirectly optimised via optimisation of the global parameters utilised in their prediction (shared MLP parameters). Here, local parameters are optimised holding generative parameters fixed (line 3 of Algorithm 1). The optimised local parameters are then used in the gradient step of the generative parameters (line 4 of Algorithm 1). 
Finally, global variational parameters are also updated (line 5). Whereas indeed other authors have proposed to optimise local parameters, I think that deriving this procedure from the more familiar SVI makes the contribution less of a trick and easier to relate to. Some things aren't entirely clear to me. I think it would have been nice if the authors had shown the functional form of the gradient used in step 3 of Algorithm 1. The gradient step for global variational parameters (line 5 of Algorithm 1) uses the very first prediction of local parameters (thus ignoring the optimisation in step 3), this is unclear to me. Perhaps I am missing a fundamental reason why that has to be the case (either way, please clarify). The authors argue that this optimisation turns out helpful to modelling sparse data because there is evidence that the generative model p_\theta(x|z) suffers from poor initialisation. Please, discuss why you expect the initialisation problem to be worse in the case of sparse data. The final contribution is a neat procedure to derive word embeddings from the generative model parameters. These embeddings are then used to interpret what the model has learnt. Interestingly, these word embeddings are context-sensitive once the latent variable models an entire document. About Figures 2a and 2b: the caption says that solid lines indicate validation perplexity for M=1 (no optimisation of local parameters) and dashed lines indicate M=100 (100 iterations of optimisation of local parameters), but the legends of the figures suggest a different reading. If I interpret the figures based on the caption, then it seems that indeed deeper networks exposed to more data benefit from optimisation of local parameters. Are the authors pretty sure that in Figure 2b models with M=1 have reached a plateau (so that longer training would not allow them to catch up with M=100 curves)? As the authors explain in the caption, x-axis is not comparable on running time, thus the question. The analysis of singular values seems like an interesting way to investigate how the model is using its capacity. However, I can barely interpret Figures 2c and 2d, I think the authors could have walked readers through them. As for the word embedding I am missing an evaluation on a predictive task. Also, while illustrative, Table 2b is barely reproducible. The text reads 'we create a document comprising a subset of words in the context’s Wikipedia page.' which is rather vague. I wonder whether this construct needs to be carefully designed in order to get Table 2b. In sum, I have a feeling that the inference technique and the embedding technique are both useful, but perhaps they should have been presented separately so that each could have been explored in greater depth.",
"role": "assistant"
}
]
| [
"The paper claims improved inference for density estimation of sparse data (here text documents) using deep generative Gaussian models (variational auto-encoders), and a method for deriving word embeddings from the model*s generative parameters that allows for a degree of interpretability similar to that of Bayesian generative topic models.",
"To discuss the contributions I will quickly review the generative story in the paper: first a K-dimensional latent representation is sampled from a multivariate Gaussian, then an MLP (with parameters \theta) predicts unnormalised potentials over a vocabulary of V words, the potentials are exponentiated and normalised to make the parameters of a multinomial from where word observations are repeatedly sampled to make a document.",
"Here intractable inference is replaced by the VAE formulation where an inference network (with parameters phi) independently predicts for each document the mean and variance of a normal distribution (amenable to reparameterised gradient computation).",
"The first, and rather trivial, contribution is to use tf-idf features to inject first order statistics (a global information) into local observations.",
"The authors claim that this is particularly helpful in the case of sparse data such as text.",
"The second contribution is more interesting.",
"In optimising generative parameters (\theta) and variational parameters (phi), the authors turn to a treatment which is reminiscent of the original SVI procedure.",
"That is, they see the variational parameters phi as *global* variational parameters, and the predicted mean mu(x) and covariance Sigma(x) of each observation x are treated as *local* variational parameters.",
"In the original VAE, local parameters are not directly optimised, instead they are indirectly optimised via optimisation of the global parameters utilised in their prediction (shared MLP parameters).",
"Here, local parameters are optimised holding generative parameters fixed (line 3 of Algorithm 1).",
"The optimised local parameters are then used in the gradient step of the generative parameters (line 4 of Algorithm 1).",
"Finally, global variational parameters are also updated (line 5).",
"Whereas indeed other authors have proposed to optimise local parameters, I think that deriving this procedure from the more familiar SVI makes the contribution less of a trick and easier to relate to.",
"Some things aren*t entirely clear to me.",
"I think it would have been nice if the authors had shown the functional form of the gradient used in step 3 of Algorithm 1.",
"The gradient step for global variational parameters (line 5 of Algorithm 1) uses the very first prediction of local parameters (thus ignoring the optimisation in step 3), this is unclear to me.",
"Perhaps I am missing a fundamental reason why that has to be the case (either way, please clarify).",
"The authors argue that this optimisation turns out helpful to modelling sparse data because there is evidence that the generative model p_\theta(x|z) suffers from poor initialisation.",
"Please, discuss why you expect the initialisation problem to be worse in the case of sparse data.",
"The final contribution is a neat procedure to derive word embeddings from the generative model parameters.",
"These embeddings are then used to interpret what the model has learnt.",
"Interestingly, these word embeddings are context-sensitive once that the latent variable models an entire document.",
"About Figures 2a and 2b: the caption says that solid lines indicate validation perplexity for M=1 (no optimisation of local parameters) and dashed lines indicate M=100 (100 iterations of optimisation of local parameters), but the legends of the figures suggest a different reading.",
"If I interpret the figures based on the caption, then it seems that indeed deeper networks exposed to more data benefit from optimisation of local parameters.",
"Are the authors pretty sure that in Figure 2b models with M=1 have reached a plateau (so that longer training would not allow them to catch up with M=100 curves)?",
"As the authors explain in the caption, x-axis is not comparable on running time, thus the question.",
"The analysis of singular values seems like an interesting way to investigate how the model is using its capacity.",
"However, I can barely interpret Figures 2c and 2d, I think the authors could have walked readers through them.",
"As for the word embedding I am missing an evaluation on a predictive task.",
"Also, while illustrative, Table 2b is barely reproducible.",
"The text reads *we create a document comprising a subset of words in the the context’s Wikipedia page.",
"* which is rather vague.",
"I wonder whether this construct needs to be carefully designed in order to get Table 2b.",
"In sum, I have a feeling that the inference technique and the embedding technique are both useful, but perhaps they should have been presented separately so that each could have been explored in greater depth."
] | {
"criticism": 9,
"example": 7,
"importance_and_relevance": 3,
"materials_and_methods": 26,
"praise": 6,
"presentation_and_reporting": 14,
"results_and_discussion": 5,
"suggestion_and_solution": 7,
"total": 34
} | 0.264706 | 0.205882 | 0.088235 | 0.764706 | 0.176471 | 0.411765 | 0.147059 | 0.205882 | 34 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.264706 | -4.695346 | 6.960052 |
Inference and Introspection in Deep Generative Models of Sparse Data | Deep generative models such as deep latent Gaussian models (DLGMs) are powerful and popular density estimators. However, they have been applied almost exclusively to dense data such as images; DLGMs are rarely applied to sparse, high-dimensional integer data such as word counts or product ratings. One reason is that the standard training procedures find poor local optima when applied to such data. We propose two techniques that alleviate this problem, significantly improving our ability to fit DLGMs to sparse, high-dimensional data. Having fit these models, we are faced with another challenge: how to use and interpret the representation that we have learned? To that end, we propose a method that extracts distributed representations of features via a simple linearization of the model. | This paper introduces three tricks for training deep latent variable models on sparse discrete data: 1) tf-idf weighting 2) Iteratively optimizing variational parameters after initializing them with an inference network 3) A technique for improving the interpretability of the deep model The first idea is sensible but rather trivial as a contribution. The second idea is also sensible, but is conceptually not novel. What is new is the finding that it works well for the dataset used in this paper. The third idea is interesting, and seems to give qualitatively reasonable results. The quantitative semantic similarity results don’t seem that convincing, but I am not very familiar with the relevant literature and therefore cannot make a confident judgement on this issue. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nInference and Introspection in Deep Generative Models of Sparse Data\nDeep generative models such as deep latent Gaussian models (DLGMs) are powerful and popular density estimators. However, they have been applied almost exclusively to dense data such as images; DLGMs are rarely applied to sparse, high-dimensional integer data such as word counts or product ratings. One reason is that the standard training procedures find poor local optima when applied to such data. We propose two techniques that alleviate this problem, significantly improving our ability to fit DLGMs to sparse, high-dimensional data. Having fit these models, we are faced with another challenge: how to use and interpret the representation that we have learned? To that end, we propose a method that extracts distributed representations of features via a simple linearization of the model.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper introduces three tricks for training deep latent variable models on sparse discrete data: 1) tf-idf weighting 2) Iteratively optimizing variational parameters after initializing them with an inference network 3) A technique for improving the interpretability of the deep model The first idea is sensible but rather trivial as a contribution. The second idea is also sensible, but is conceptually not novel. What is new is the finding that it works well for the dataset used in this paper. The third idea is interesting, and seems to give qualitatively reasonable results. The quantitative semantic similarity results don’t seem that convincing, but I am not very familiar with the relevant literature and therefore cannot make a confident judgement on this issue.",
"role": "assistant"
}
]
| [
"This paper introduces three tricks for training deep latent variable models on sparse discrete data: 1) tf-idf weighting 2) Iteratively optimizing variational parameters after initializing them with an inference network 3) A technique for improving the interpretability of the deep model The first idea is sensible but rather trivial as a contribution.",
"The second idea is also sensible, but is conceptually not novel.",
"What is new is the finding that it works well for the dataset used in this paper.",
"The third idea is interesting, and seems to give qualitatively reasonable results.",
"The quantitative semantic similarity results don’t seem that convincing, but I am not very familiar with the relevant literature and therefore cannot make a confident judgement on this issue."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 2,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 3,
"suggestion_and_solution": 0,
"total": 5
} | 0.4 | 0 | 0.4 | 0.4 | 0.4 | 0 | 0.6 | 0 | 5 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 2.2 | 1.189924 | 1.010076 |
Inference and Introspection in Deep Generative Models of Sparse Data | Deep generative models such as deep latent Gaussian models (DLGMs) are powerful and popular density estimators. However, they have been applied almost exclusively to dense data such as images; DLGMs are rarely applied to sparse, high-dimensional integer data such as word counts or product ratings. One reason is that the standard training procedures find poor local optima when applied to such data. We propose two techniques that alleviate this problem, significantly improving our ability to fit DLGMs to sparse, high-dimensional data. Having fit these models, we are faced with another challenge: how to use and interpret the representation that we have learned? To that end, we propose a method that extracts distributed representations of features via a simple linearization of the model. | This paper presents a small trick to improve the model quality of variational autoencoders (further optimizing the ELBO while initializing it from the predictions of the q network, instead of just using those directly) and the idea of using Jacobian vectors to replace simple embeddings when interpreting variational autoencoders. The idea of the Jacobian as a natural replacement for embeddings is interesting, as it does seem to cleanly generalize the notion of embeddings from linear models. It'd be interesting to see comparisons with other work seeking to provide context-specific embeddings, either by clustering or by smarter techniques (like Neelakantan et al, Efficient non-parametric estimation of multiple embeddings per word in vector space, or Chen et al A Unified Model for Word Sense Representation and Disambiguation). With the evidence provided in the experimental section of the paper it's hard to be convinced that the Jacobian of VAE-generated embeddings is substantially better at being context-sensitive than prior work. Similarly, the idea of further optimizing the ELBO is interesting but not fully explored. It's unclear, for example, what is the tradeoff between the complexity of the q network and steps further optimizing the ELBO, in terms of compute versus accuracy. Overall the ideas in this paper are good but I'd like to see them a little more fleshed out. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. 
Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nInference and Introspection in Deep Generative Models of Sparse Data\nDeep generative models such as deep latent Gaussian models (DLGMs) are powerful and popular density estimators. However, they have been applied almost exclusively to dense data such as images; DLGMs are rarely applied to sparse, high-dimensional integer data such as word counts or product ratings. One reason is that the standard training procedures find poor local optima when applied to such data. We propose two techniques that alleviate this problem, significantly improving our ability to fit DLGMs to sparse, high-dimensional data. Having fit these models, we are faced with another challenge: how to use and interpret the representation that we have learned? To that end, we propose a method that extracts distributed representations of features via a simple linearization of the model.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents a small trick to improve the model quality of variational autoencoders (further optimizing the ELBO while initializing it from the predictions of the q network, instead of just using those directly) and the idea of using Jacobian vectors to replace simple embeddings when interpreting variational autoencoders. The idea of the Jacobian as a natural replacement for embeddings is interesting, as it does seem to cleanly generalize the notion of embeddings from linear models. It*d be interesting to see comparisons with other work seeking to provide context-specific embeddings, either by clustering or by smarter techniques (like Neelakantan et al, Efficient non-parametric estimation of multiple embeddings per word in vector space, or Chen et al A Unified Model for Word Sense Representation and Disambiguation). With the evidence provided in the experimental section of the paper it*s hard to be convinced that the Jacobian of VAE-generated embeddings is substantially better at being context-sensitive than prior work. Similarly, the idea of further optimizing the ELBO is interesting but not fully explored. It*s unclear, for example, what is the tradeoff between the complexity of the q network and steps further optimizing the ELBO, in terms of compute versus accuracy. Overall the ideas in this paper are good but I*d like to see them a little more fleshed out.",
"role": "assistant"
}
]
| [
"This paper presents a small trick to improve the model quality of variational autoencoders (further optimizing the ELBO while initializing it from the predictions of the q network, instead of just using those directly) and the idea of using Jacobian vectors to replace simple embeddings when interpreting variational autoencoders.",
"The idea of the Jacobian as a natural replacement for embeddings is interesting, as it does seem to cleanly generalize the notion of embeddings from linear models.",
"It*d be interesting to see comparisons with other work seeking to provide context-specific embeddings, either by clustering or by smarter techniques (like Neelakantan et al, Efficient non-parametric estimation of multiple embeddings per word in vector space, or Chen et al A Unified Model for Word Sense Representation and Disambiguation).",
"With the evidence provided in the experimental section of the paper it*s hard to be convinced that the Jacobian of VAE-generated embeddings is substantially better at being context-sensitive than prior work.",
"Similarly, the idea of further optimizing the ELBO is interesting but not fully explored.",
"It*s unclear, for example, what is the tradeoff between the complexity of the q network and steps further optimizing the ELBO, in terms of compute versus accuracy.",
"Overall the ideas in this paper are good but I*d like to see them a little more fleshed out."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 5,
"praise": 3,
"presentation_and_reporting": 0,
"results_and_discussion": 2,
"suggestion_and_solution": 2,
"total": 7
} | 0.428571 | 0 | 0.428571 | 0.714286 | 0.428571 | 0 | 0.285714 | 0.285714 | 7 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.571429 | 2.003261 | 0.568167 |
Neural Combinatorial Optimization with Reinforcement Learning | This paper presents a framework to tackle combinatorial optimization problems using neural networks and reinforcement learning. We focus on the traveling salesman problem (TSP) and train a recurrent neural network that, given a set of city coordinates, predicts a distribution over different city permutations. Using negative tour length as the reward signal, we optimize the parameters of the recurrent neural network using a policy gradient method. We compare learning the network parameters on a set of training graphs against learning them on individual test graphs. Without much engineering and heuristic designing, Neural Combinatorial Optimization achieves close to optimal results on 2D Euclidean graphs with up to 100 nodes. Applied to the KnapSack, another NP-hard problem, the same method obtains optimal solutions for instances with up to 200 items. These results, albeit still far from state-of-the-art, give insights into how neural networks can be used as a general tool for tackling combinatorial optimization problems. | This paper proposes to use RNN and reinforcement learning for solving combinatorial optimization problems. The use of pointer network is interesting as it enables generalization to arbitrary input size. The proposed method also 'finetunes' on test examples with active search to achieve better performance. The proposed method is theoretically interesting as it shows that RNN and RL can be combined to solve combinatorial optimization problems and achieve comparable performance to traditional heuristic based algorithms. However, the lack of complexity comparison against baselines makes it impossible to tell whether the proposed method has any practical value. The matter is further complicated by the fact that the proposed method runs on GPU while baselines run on CPU: it is hard to even come up with a meaningful unit of complexity. Money spent on hardware and electricity per instance may be a viable option. Furthermore, the performance comparisons should be taken with a grain of salt as traditional heuristic based algorithms can often give better performance if allowed more computation, which is not controlled across algorithms. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. 
Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNeural Combinatorial Optimization with Reinforcement Learning\nThis paper presents a framework to tackle combinatorial optimization problems using neural networks and reinforcement learning. We focus on the traveling salesman problem (TSP) and train a recurrent neural network that, given a set of city coordinates, predicts a distribution over different city permutations. Using negative tour length as the reward signal, we optimize the parameters of the recurrent neural network using a policy gradient method. We compare learning the network parameters on a set of training graphs against learning them on individual test graphs. Without much engineering and heuristic designing, Neural Combinatorial Optimization achieves close to optimal results on 2D Euclidean graphs with up to 100 nodes. Applied to the KnapSack, another NP-hard problem, the same method obtains optimal solutions for instances with up to 200 items. These results, albeit still far from state-of-the-art, give insights into how neural networks can be used as a general tool for tackling combinatorial optimization problems.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes to use RNN and reinforcement learning for solving combinatorial optimization problems. The use of pointer network is interesting as it enables generalization to arbitrary input size. The proposed method also *fintunes* on test examples with active search to achieve better performance. The proposed method is theoretically interesting as it shows that RNN and RL can be combined to solve combinatorial optimization problems and achieve comparable performance to traditional heuristic based algorithms. However, the lack of complexity comparison against baselines make it impossible to tell whether the proposed method has any practical value. The matter is further complicated by the fact that the proposed method runs on GPU while baselines run on CPU: it is hard to even come up with a meaningful unit of complexity. Money spent on hardware and electricity per instance may be a viable option. Further more, the performance comparisons should be taken with a grain of salt as traditional heuristic based algorithms can often give better performance if allowed more computation, which is not controlled across algorithms.",
"role": "assistant"
}
]
| [
"This paper proposes to use RNN and reinforcement learning for solving combinatorial optimization problems.",
"The use of pointer network is interesting as it enables generalization to arbitrary input size.",
"The proposed method also *fintunes* on test examples with active search to achieve better performance.",
"The proposed method is theoretically interesting as it shows that RNN and RL can be combined to solve combinatorial optimization problems and achieve comparable performance to traditional heuristic based algorithms.",
"However, the lack of complexity comparison against baselines make it impossible to tell whether the proposed method has any practical value.",
"The matter is further complicated by the fact that the proposed method runs on GPU while baselines run on CPU: it is hard to even come up with a meaningful unit of complexity.",
"Money spent on hardware and electricity per instance may be a viable option.",
"Further more, the performance comparisons should be taken with a grain of salt as traditional heuristic based algorithms can often give better performance if allowed more computation, which is not controlled across algorithms."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 7,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 2,
"suggestion_and_solution": 1,
"total": 8
} | 0.25 | 0 | 0.375 | 0.875 | 0.25 | 0 | 0.25 | 0.125 | 8 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.125 | 1.730439 | 0.394561 |
Here's My Point: Argumentation Mining with Pointer Networks | One of the major goals in automated argumentation mining is to uncover the argument structure present in argumentative text. In order to determine this structure, one must understand how different individual components of the overall argument are linked. General consensus in this field dictates that the argument components form a hierarchy of persuasion, which manifests itself in a tree structure. This work provides the first neural network-based approach to argumentation mining, focusing on extracting links between argument components, with a secondary focus on classifying types of argument components. In order to solve this problem, we propose to use a modification of a Pointer Network architecture. A Pointer Network is appealing for this task for the following reasons: 1) It takes into account the sequential nature of argument components; 2) By construction, it enforces certain properties of the tree structure present in argument relations; 3) The hidden representations can be applied to auxiliary tasks. In order to extend the contribution of the original Pointer Network model, we construct a joint model that simultaneously attempts to learn the type of argument component, as well as continuing to predict links between argument components. The proposed model achieves state-of-the-art results on two separate evaluation corpora. Furthermore, our results show that optimizing for both tasks, as well as adding a fully-connected layer prior to recurrent neural network input, is crucial for high performance. | This paper proposes a model for the task of argumentation mining (labeling the set of relationships between statements expressed as sentence-sized spans in a short text). The model combines a pointer network component that identifies links between statements and a classifier that predicts the roles of these statements. The resulting model works well: It outperforms strong baselines, even on datasets with fewer than 100 training examples. I don't see any major technical issues with this paper, and the results are strong. I am concerned, though, that the paper doesn't make a substantial novel contribution to representation learning. It focuses on ways to adapt reasonably mature techniques to a novel NLP problem. I think that one of the ACL conferences would be a better fit for this work. The choice of a pointer network for this problem seems reasonable, though (as noted by other commenters) the paper does not make any substantial comparison with other possible ways of producing trees. The paper does a solid job at breaking down the results quantitatively, but I would appreciate some examples of model output and some qualitative error analysis. Detail notes: - Figure 2 appears to have an error. You report that the decoder produces a distribution over input indices only, but you show an example of the network pointing to an output index in one case. - I don't think 'Wei12' is a name. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? 
Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nHere*s My Point: Argumentation Mining with Pointer Networks\nOne of the major goals in automated argumentation mining is to uncover the argument structure present in argumentative text. In order to determine this structure, one must understand how different individual components of the overall argument are linked. General consensus in this field dictates that the argument components form a hierarchy of persuasion, which manifests itself in a tree structure. This work provides the first neural network-based approach to argumentation mining, focusing on extracting links between argument components, with a secondary focus on classifying types of argument components. In order to solve this problem, we propose to use a modification of a Pointer Network architecture. A Pointer Network is appealing for this task for the following reasons: 1) It takes into account the sequential nature of argument components; 2) By construction, it enforces certain properties of the tree structure present in argument relations; 3) The hidden representations can be applied to auxiliary tasks. In order to extend the contribution of the original Pointer Network model, we construct a joint model that simultaneously attempts to learn the type of argument component, as well as continuing to predict links between argument components. The proposed model achieves state-of-the-art results on two separate evaluation corpora. Furthermore, our results show that optimizing for both tasks, as well as adding a fully-connected layer prior to recurrent neural network input, is crucial for high performance.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a model for the task of argumentation mining (labeling the set of relationships between statements expressed as sentence-sized spans in a short text). The model combines a pointer network component that identifies links between statements and a classifier that predicts the roles of these statements. The resulting model works well: It outperforms strong baselines, even on datasets with fewer than 100 training examples. I don*t see any major technical issues with this paper, and the results are strong. I am concerned, though, that the paper doesn*t make a substantial novel contribution to representation learning. It focuses on ways to adapt reasonably mature techniques to a novel NLP problem. I think that one of the ACL conferences would be a better fit for this work. The choice of a pointer network for this problem seems reasonable, though (as noted by other commenters) the paper does not make any substantial comparison with other possible ways of producing trees. The paper does a solid job at breaking down the results quantitatively, but I would appreciate some examples of model output and some qualitative error analysis. Detail notes: - Figure 2 appears to have an error. You report that the decoder produces a distribution over input indices only, but you show an example of the network pointing to an output index in one case. - I don*t think *Wei12* is a name.",
"role": "assistant"
}
]
| [
"This paper proposes a model for the task of argumentation mining (labeling the set of relationships between statements expressed as sentence-sized spans in a short text).",
"The model combines a pointer network component that identifies links between statements and a classifier that predicts the roles of these statements.",
"The resulting model works well: It outperforms strong baselines, even on datasets with fewer than 100 training examples.",
"I don*t see any major technical issues with this paper, and the results are strong.",
"I am concerned, though, that the paper doesn*t make a substantial novel contribution to representation learning.",
"It focuses on ways to adapt reasonably mature techniques to a novel NLP problem.",
"I think that one of the ACL conferences would be a better fit for this work.",
"The choice of a pointer network for this problem seems reasonable, though (as noted by other commenters) the paper does not make any substantial comparison with other possible ways of producing trees.",
"The paper does a solid job at breaking down the results quantitatively, but I would appreciate some examples of model output and some qualitative error analysis.",
"Detail notes: - Figure 2 appears to have an error.",
"You report that the decoder produces a distribution over input indices only, but you show an example of the network pointing to an output index in one case.",
"- I don*t think *Wei12* is a name."
] | {
"criticism": 4,
"example": 2,
"importance_and_relevance": 2,
"materials_and_methods": 7,
"praise": 4,
"presentation_and_reporting": 4,
"results_and_discussion": 2,
"suggestion_and_solution": 2,
"total": 12
} | 0.333333 | 0.166667 | 0.166667 | 0.583333 | 0.333333 | 0.333333 | 0.166667 | 0.166667 | 12 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.25 | 2.234218 | 0.015782 |
Tree-Structured Variational Autoencoder | Many kinds of variable-sized data we would like to model contain an internal hierarchical structure in the form of a tree, including source code, formal logical statements, and natural language sentences with parse trees. For such data it is natural to consider a model with matching computational structure. In this work, we introduce a variational autoencoder-based generative model for tree-structured data. We evaluate our model on a synthetic dataset, and a dataset with applications to automated theorem proving. By learning a latent representation over trees, our model can achieve similar test log likelihood to a standard autoregressive decoder, but with the number of sequentially dependent computations proportional to the depth of the tree instead of the number of nodes in the tree. | The method overall seems to be a very interesting structural approach to variational autoencoders, however it seems to lack motivation as well as the application areas sufficient to prove its effectiveness. I see the attractiveness of using structural information in this context and I find it more intuitive than using a flat sequence representation, especially when there is a clear structure in the data. However experimental results seem to fail to be convincing in that regard. One issue is the lack of a variety of applications in general, the experiments seem to be very limited in that regard, considering that the paper itself speaks about natural language applications. It would be interesting to use the latent representations learned with the model for some other end task and see how much it impacts the success of that end task compared to various baselines. In my opinion, the paper has a potentially strong idea however it needs stronger results (and possibly in a wider variety of applications) as a proof of concept. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTree-Structured Variational Autoencoder\nMany kinds of variable-sized data we would like to model contain an internal hierarchical structure in the form of a tree, including source code, formal logical statements, and natural language sentences with parse trees. For such data it is natural to consider a model with matching computational structure. In this work, we introduce a variational autoencoder-based generative model for tree-structured data. We evaluate our model on a synthetic dataset, and a dataset with applications to automated theorem proving. By learning a latent representation over trees, our model can achieve similar test log likelihood to a standard autoregressive decoder, but with the number of sequentially dependent computations proportional to the depth of the tree instead of the number of nodes in the tree.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe method overall seems to be a very interesting structural approach to variational autoencoders, however it seems to lack motivation as well as the application areas sufficient to prove its effectiveness. I see the attractiveness of using structural information in this context and I find it more intuitive than using a flat sequence representation, especially when there is a clear structure in the data. However experimental results seem to fail to be convincing in that regard. One issue is the lack of a variety of applications in general, the experiments seem to be very limited in that regard, considering that the paper itself speaks about natural language applications. It would be interesting to use the latent representations learned with the model for some other end task and see how much it impacts the success of that end task compared to various baselines. In my opinion, the paper has a potentially strong idea however in needs stronger results (and possibly in a wider variety of applications) as a proof of concept.",
"role": "assistant"
}
]
| [
"The method overall seems to be a very interesting structural approach to variational autoencoders, however it seems to lack motivation as well as the application areas sufficient to prove its effectiveness.",
"I see the attractiveness of using structural information in this context and I find it more intuitive than using a flat sequence representation, especially when there is a clear structure in the data.",
"However experimental results seem to fail to be convincing in that regard.",
"One issue is the lack of a variety of applications in general, the experiments seem to be very limited in that regard, considering that the paper itself speaks about natural language applications.",
"It would be interesting to use the latent representations learned with the model for some other end task and see how much it impacts the success of that end task compared to various baselines.",
"In my opinion, the paper has a potentially strong idea however in needs stronger results (and possibly in a wider variety of applications) as a proof of concept."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 4,
"praise": 3,
"presentation_and_reporting": 2,
"results_and_discussion": 2,
"suggestion_and_solution": 1,
"total": 6
} | 0.5 | 0 | 0.5 | 0.666667 | 0.5 | 0.333333 | 0.333333 | 0.166667 | 6 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 3 | 2.226661 | 0.773339 |
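
Schema note: the trailing numeric columns of each record are derived from criteria_count and length_penalty — each per-category fraction is count/total, each has_* flag marks a nonzero count, reward_value is the sum of the eight fractions, and reward_value_length_adjusted = reward_value - length_penalty. A minimal Python sketch of the inferred derivation (the formula is reconstructed from the records themselves, not documented in the export):

# Sketch: recompute the derived columns for one record (derivation inferred
# from the export; verified against the record directly above).
CATEGORIES = [
    "criticism", "example", "importance_and_relevance", "materials_and_methods",
    "praise", "presentation_and_reporting", "results_and_discussion",
    "suggestion_and_solution",
]

def derive_columns(criteria_count, length_penalty):
    total = criteria_count["total"]
    fractions = {c: criteria_count[c] / total for c in CATEGORIES}
    flags = {"has_" + c: int(criteria_count[c] > 0) for c in CATEGORIES}
    reward = sum(fractions.values())
    return fractions, flags, reward, reward - length_penalty

# Checked against the Tree-Structured VAE record above:
cc = {"criticism": 3, "example": 0, "importance_and_relevance": 3,
      "materials_and_methods": 4, "praise": 3, "presentation_and_reporting": 2,
      "results_and_discussion": 2, "suggestion_and_solution": 1, "total": 6}
fr, fl, reward, adjusted = derive_columns(cc, 0.773339)
assert abs(reward - 3.0) < 1e-6 and abs(adjusted - 2.226661) < 1e-6
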
Tree-Structured Variational Autoencoder | Many kinds of variable-sized data we would like to model contain an internal hierarchical structure in the form of a tree, including source code, formal logical statements, and natural language sentences with parse trees. For such data it is natural to consider a model with matching computational structure. In this work, we introduce a variational autoencoder-based generative model for tree-structured data. We evaluate our model on a synthetic dataset, and a dataset with applications to automated theorem proving. By learning a latent representation over trees, our model can achieve similar test log likelihood to a standard autoregressive decoder, but with the number of sequentially dependent computations proportional to the depth of the tree instead of the number of nodes in the tree. | This paper introduces a novel extension of the variational autoencoder to arbitrary tree-structured outputs. Experiments are conducted on a synthetic arithmetic expression dataset and a first-order logic proof clause dataset in order to evaluate its density modeling performance. Pros: + The paper is clear and well-written. + The tree-structure definition is sufficiently complete to capture a wide variety of tree types found in real-world situations. + The tree generation and encoding procedure is elegant and well-articulated. + The experiments, though limited in scope, are relatively thorough. The use of IWAE to obtain a better estimate of log likelihoods is a particularly nice touch. Cons: - The performance gain over a baseline sequential model is marginal. - The experiments are limited in scope, both in the datasets considered and in the evaluation metrics used to compare the model with other approaches. Specifically: (a) there is only one set of results on a real-world dataset and in that case the proposed model performs worse than the baseline, and (b) there is no evaluation of the learned latent representation with respect to other tasks such as classification. - The ability of the model to generate trees in time proportional to the depth of the tree is proposed as a benefit of the approach, though this is not empirically validated in the experiments. The procedures to generate and encode trees are clever in their repeated use of common operations. The weight sharing and gating operations seem important for this model to perform well but it is difficult to assess their utility without an ablation (in Table 1 and 2 these modifications are not evaluated side-by-side). Experiments in another domain (such as modeling source code, or parse trees conditioned on a sentence) would help in demonstrating the utility of this model. Overall the model seems promising and applicable to a variety of data but the lack of breadth in the experiments is a concern. * Section 3.1: 'We distinguish three types' => two * Section 3.6: The exposition of the variable-sized latent state is slightly confusing because the issue of how many z's to generate is not discussed. * Section 4.2-4.3: When generating the datasets, did you verify that the test set is disjoint from the training set? * Table 1: Is there a particular reason why the variable latent results are missing for the depth 11 trees? | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. 
Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTree-Structured Variational Autoencoder\nMany kinds of variable-sized data we would like to model contain an internal hierarchical structure in the form of a tree, including source code, formal logical statements, and natural language sentences with parse trees. For such data it is natural to consider a model with matching computational structure. In this work, we introduce a variational autoencoder-based generative model for tree-structured data. We evaluate our model on a synthetic dataset, and a dataset with applications to automated theorem proving. By learning a latent representation over trees, our model can achieve similar test log likelihood to a standard autoregressive decoder, but with the number of sequentially dependent computations proportional to the depth of the tree instead of the number of nodes in the tree.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper introduces a novel extension of the variational autoencoder to arbitrary tree-structured outputs. Experiments are conducted on a synthetic arithmetic expression dataset and a first-order logic proof clause dataset in order to evaluate its density modeling performance. Pros: + The paper is clear and well-written. + The tree-structure definition is sufficiently complete to capture a wide variety of tree types found in real-world situations. + The tree generation and encoding procedure is elegant and well-articulated. + The experiments, though limited in scope, are relatively thorough. The use of IWAE to obtain a better estimate of log likelihoods is a particularly nice touch. Cons: - The performance gain over a baseline sequential model is marginal. - The experiments are limited in scope, both in the datasets considered and in the evaluation metrics used to compare the model with other approaches. Specifically: (a) there is only one set of results on a real-world dataset and in that case the proposed model performs worse than the baseline, and (b) there is no evaluation of the learned latent representation with respect to other tasks such as classification. - The ability of the model to generate trees in time proportional to the depth of the tree is proposed as a benefit of the approach, though this is not empirically validated in the experiments. The procedures to generate and encode trees are clever in their repeated use of common operations. The weight sharing and gating operations seem important for this model to perform well but it is difficult to assess their utility without an ablation (in Table 1 and 2 these modifications are not evaluated side-by-side). Experiments in another domain (such as modeling source code, or parse trees conditioned on a sentence) would help in demonstrating the utility of this model. Overall the model seems promising and applicable to a variety of data but the lack of breadth in the experiments is a concern. 
* Section 3.1: *We distinguish three types* => two * Section 3.6: The exposition of the variable-sized latent state is slightly confusing because the issue of how many z*s to generate is not discussed. * Section 4.2-4.3: When generating the datasets, did you verify that the test set is disjoint from the training set? * Table 1: Is there a particular reason why the variable latent results are missing for the depth 11 trees?",
"role": "assistant"
}
]
| [
"This paper introduces a novel extension of the variational autoencoder to arbitrary tree-structured outputs.",
"Experiments are conducted on a synthetic arithmetic expression dataset and a first-order logic proof clause dataset in order to evaluate its density modeling performance.",
"Pros: + The paper is clear and well-written.",
"+ The tree-structure definition is sufficiently complete to capture a wide variety of tree types found in real-world situations.",
"+ The tree generation and encoding procedure is elegant and well-articulated.",
"+ The experiments, though limited in scope, are relatively thorough.",
"The use of IWAE to obtain a better estimate of log likelihoods is a particularly nice touch.",
"Cons: - The performance gain over a baseline sequential model is marginal.",
"- The experiments are limited in scope, both in the datasets considered and in the evaluation metrics used to compare the model with other approaches.",
"Specifically: (a) there is only one set of results on a real-world dataset and in that case the proposed model performs worse than the baseline, and (b) there is no evaluation of the learned latent representation with respect to other tasks such as classification.",
"- The ability of the model to generate trees in time proportional to the depth of the tree is proposed as a benefit of the approach, though this is not empirically validated in the experiments.",
"The procedures to generate and encode trees are clever in their repeated use of common operations.",
"The weight sharing and gating operations seem important for this model to perform well but it is difficult to assess their utility without an ablation (in Table 1 and 2 these modifications are not evaluated side-by-side).",
"Experiments in another domain (such as modeling source code, or parse trees conditioned on a sentence) would help in demonstrating the utility of this model.",
"Overall the model seems promising and applicable to a variety of data but the lack of breadth in the experiments is a concern.",
"* Section 3.1: *We distinguish three types* => two * Section 3.6: The exposition of the variable-sized latent state is slightly confusing because the issue of how many z*s to generate is not discussed.",
"* Section 4.2-4.3: When generating the datasets, did you verify that the test set is disjoint from the training set?",
"* Table 1: Is there a particular reason why the variable latent results are missing for the depth 11 trees?"
] | {
"criticism": 6,
"example": 3,
"importance_and_relevance": 1,
"materials_and_methods": 13,
"praise": 7,
"presentation_and_reporting": 4,
"results_and_discussion": 5,
"suggestion_and_solution": 1,
"total": 18
} | 0.333333 | 0.166667 | 0.055556 | 0.722222 | 0.388889 | 0.222222 | 0.277778 | 0.055556 | 18 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.222222 | 1.827661 | 0.394561 |
Learning Approximate Distribution-Sensitive Data Structures | We present a computational model of mental representations as data-structures which are distribution sensitive, i.e., which exploit non-uniformity in their usage patterns to reduce time or space complexity. Abstract data types equipped with axiomatic specifications specify classes of concrete data structures with equivalent logical behavior. We extend this formalism to distribution-sensitive data structures with the concept of a probabilistic axiomatic specification, which is implemented by a concrete data structure only with some probability. We employ a number of approximations to synthesize several distribution-sensitive data structures from probabilistic specification as deep neural networks, such as a stack, queue, natural number, set, and binary tree. | A method for training neural networks to mimic abstract data structures is presented. The idea of training a network to satisfy an abstract interface is very interesting and promising, but empirical support is currently too weak. The paper would be significantly strengthened if the method could be shown to be useful in a realistic application, or be shown to work better than standard RNN approaches on algorithmic learning tasks. The claims about mental representations are not well supported. I would remove the references to mind and brain, as well as the more philosophical points, or write a paper that really emphasizes one of these aspects and supports the claims. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning Approximate Distribution-Sensitive Data Structures\nWe present a computational model of mental representations as data-structures which are distribution sensitive, i.e., which exploit non-uniformity in their usage patterns to reduce time or space complexity. Abstract data types equipped with axiomatic specifications specify classes of concrete data structures with equivalent logical behavior. We extend this formalism to distribution-sensitive data structures with the concept of a probabilistic axiomatic specification, which is implemented by a concrete data structure only with some probability. We employ a number of approximations to synthesize several distribution-sensitive data structures from probabilistic specification as deep neural networks, such as a stack, queue, natural number, set, and binary tree.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nA method for training neural networks to mimic abstract data structures is presented. The idea of training a network to satisfy an abstract interface is very interesting and promising, but empirical support is currently too weak. The paper would be significantly strengthened if the method could be shown to be useful in a realistic application, or be shown to work better than standard RNN approaches on algorithmic learning tasks. The claims about mental representations are not well supported. I would remove the references to mind and brain, as well as the more philosophical points, or write a paper that really emphasizes one of these aspects and supports the claims.",
"role": "assistant"
}
]
| [
"A method for training neural networks to mimic abstract data structures is presented.",
"The idea of training a network to satisfy an abstract interface is very interesting and promising, but empirical support is currently too weak.",
"The paper would be significantly strengthened if the method could be shown to be useful in a realistic application, or be shown to work better than standard RNN approaches on algorithmic learning tasks.",
"The claims about mental representations are not well supported.",
"I would remove the references to mind and brain, as well as the more philosophical points, or write a paper that really emphasizes one of these aspects and supports the claims."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 3,
"praise": 1,
"presentation_and_reporting": 1,
"results_and_discussion": 0,
"suggestion_and_solution": 2,
"total": 5
} | 0.4 | 0 | 0.4 | 0.6 | 0.2 | 0.2 | 0 | 0.4 | 5 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 2.2 | 1.189924 | 1.010076 |
Taming the waves: sine as activation function in deep neural networks | Most deep neural networks use non-periodic and monotonic—or at least quasiconvex— activation functions. While sinusoidal activation functions have been successfully used for specific applications, they remain largely ignored and regarded as difficult to train. In this paper we formally characterize why these networks can indeed often be difficult to train even in very simple scenarios, and describe how the presence of infinitely many and shallow local minima emerges from the architecture. We also provide an explanation to the good performance achieved on a typical classification task, by showing that for several network architectures the presence of the periodic cycles is largely ignored when the learning is successful. Finally, we show that there are non-trivial tasks—such as learning algorithms—where networks using sinusoidal activations can learn faster than more established monotonic functions. | An interesting study of using Sine as activation function showing successful training of models using Sine. However the scope of tasks this is applied to is a bit too limited to be convincing. Maybe showing good results on more important tasks in addition to current toy tasks would make a stronger case? | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTaming the waves: sine as activation function in deep neural networks\nMost deep neural networks use non-periodic and monotonic—or at least quasiconvex— activation functions. While sinusoidal activation functions have been successfully used for specific applications, they remain largely ignored and regarded as difficult to train. In this paper we formally characterize why these networks can indeed often be difficult to train even in very simple scenarios, and describe how the presence of infinitely many and shallow local minima emerges from the architecture. We also provide an explanation to the good performance achieved on a typical classification task, by showing that for several network architectures the presence of the periodic cycles is largely ignored when the learning is successful. Finally, we show that there are non-trivial tasks—such as learning algorithms—where networks using sinusoidal activations can learn faster than more established monotonic functions.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nAn interesting study of using Sine as activation function showing successful training of models using Sine. However the scope of tasks this is applied to is a bit too limited to be convincing. Maybe showing good results on more important tasks in addition to current toy tasks would make a stronger case?",
"role": "assistant"
}
]
| [
"An interesting study of using Sine as activation function showing successful training of models using Sine.",
"However the scope of tasks this is applied to is a bit too limited to be convincing.",
"Maybe showing good results on more important tasks in addition to current toy tasks would make a stronger case?"
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 2,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 3
} | 0.333333 | 0 | 0.333333 | 0.666667 | 0.666667 | 0 | 0.333333 | 0.333333 | 3 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.666667 | 1.088424 | 1.578243 |
Collaborative Deep Embedding via Dual Networks | Despite the long history of research on recommender systems, current approaches still face a number of challenges in practice, e.g. the difficulties in handling new items, the high diversity of user interests, and the noisiness and sparsity of observations. Many of such difficulties stem from the lack of expressive power to capture the complex relations between items and users. This paper presents a new method to tackle this problem, called Collaborative Deep Embedding. In this method, a pair of dual networks, one for encoding items and the other for users, are jointly trained in a collaborative fashion. Particularly, both networks produce embeddings at multiple aligned levels, which, when combined together, can accurately predict the matching between items and users. Compared to existing methods, the proposed one not only provides greater expressive power to capture complex matching relations, but also generalizes better to unseen items or users. On multiple real-world datasets, this method outperforms the state of the art. | The responses to the pre-review questions are not strong; especially w.r.t. the question about dataset density and why the dataset had to be subsampled, the authors responded that subsampling is common in recommender systems work, including the papers cited. This isn't a particularly strong justification of why subsampling is a good idea, and in particular doesn't answer the question of 'how would the results look without subsampling,' which I think is a question that could easily have been answered directly. Especially given that the goal of dealing with the cold-start issue is so heavily emphasized in the paper, it seems odd to sample the data to reduce sparsity. Other than that, the pre-review questions seem to have been answered satisfactorily. The contribution of the paper is to propose user and item embedding methods, as a means of learning complex non-linear interactions between users and items. This is fairly similar to recent work on deep RS, though the network formulation has some differences. Overall this is a reasonably put-together paper that makes a contribution in an important area, though there are still some shortcomings that should be addressed, namely: 1) The evaluation is unusual. Recall@M is the only result reported, though this is not usually an evaluation seen in recommender systems research. At the very least other performance measures (RMSE or AUC) should be reported for completeness, even if the results are not strong 2) Given that the contribution is fairly simple (i.e., the 'standard' recommender systems task, but with a new model) it's a shame that unusual data samples have to be taken. This should be a case where it's possible to report results against competing methods using 'exactly' the same data they used, and exactly the same error measure, for the fairest comparison possible. Without the above it's hard to tell how much the performance improvements are really due to the method being better, versus the choice of datasets and the choice of loss functions. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. 
Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nCollaborative Deep Embedding via Dual Networks\nDespite the long history of research on recommender systems, current approaches still face a number of challenges in practice, e.g. the difficulties in handling new items, the high diversity of user interests, and the noisiness and sparsity of observations. Many of such difficulties stem from the lack of expressive power to capture the complex relations between items and users. This paper presents a new method to tackle this problem, called Collaborative Deep Embedding. In this method, a pair of dual networks, one for encoding items and the other for users, are jointly trained in a collaborative fashion. Particularly, both networks produce embeddings at multiple aligned levels, which, when combined together, can accurately predict the matching between items and users. Compared to existing methods, the proposed one not only provides greater expressive power to capture complex matching relations, but also generalizes better to unseen items or users. On multiple real-world datasets, this method outperforms the state of the art.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe responses to the pre-review questions are not strong; especially w.r.t. the question about dataset density and why the dataset had to be subsampled, the authors responded that subsampling is common in recommender systems work, including the papers cited. This isn*t a particularly strong justification of why subsampling is a good idea, and in particular doesn*t answer the question of *how would the results look without subsampling,* which I think is a question that could easily have been answered directly. Especially given that the goal of dealing with the cold-start issue is so heavily emphasized in the paper, in seems odd to sample the data to reduce sparsity. Other than that, the pre-review questions seem to have been answered satisfactorily. The contribution of the paper is to propose user and item embedding methods, as a means of learning complex non-linear interactions between users and items. This is fairly similar to recent work on deep RS, though the network formulation has some differences. Overall this is an reasonably put together paper that makes a contribution in an important area, though there are still some shortcomings that should be addressed, namely: 1) The evaluation is unusual. Recall@M is the only result reported, though this is not usually an evaluation seen in recommender systems research. At the very least other performance measures (rmse or AUC) should be reported for completeness, even if the results are not strong 2) Given that the contribution is fairly simple (i.e., the *standard* recommender systems task, but with a new model) it*s a shame that unusual data samples have to be taken. This should be a case where it*s possible to report results against competing methods using *exactly* the same data they used, and exactly the same error measure, for the fairest comparison possible. Without the above it*s hard to tell how much the performance improvements are really due to the method being better, versus the choice of datasets and the choice of loss functions.",
"role": "assistant"
}
]
| [
"The responses to the pre-review questions are not strong; especially w.r.t.",
"the question about dataset density and why the dataset had to be subsampled, the authors responded that subsampling is common in recommender systems work, including the papers cited.",
"This isn*t a particularly strong justification of why subsampling is a good idea, and in particular doesn*t answer the question of *how would the results look without subsampling,* which I think is a question that could easily have been answered directly.",
"Especially given that the goal of dealing with the cold-start issue is so heavily emphasized in the paper, in seems odd to sample the data to reduce sparsity.",
"Other than that, the pre-review questions seem to have been answered satisfactorily.",
"The contribution of the paper is to propose user and item embedding methods, as a means of learning complex non-linear interactions between users and items.",
"This is fairly similar to recent work on deep RS, though the network formulation has some differences.",
"Overall this is an reasonably put together paper that makes a contribution in an important area, though there are still some shortcomings that should be addressed, namely: 1) The evaluation is unusual.",
"Recall@M is the only result reported, though this is not usually an evaluation seen in recommender systems research.",
"At the very least other performance measures (rmse or AUC) should be reported for completeness, even if the results are not strong 2) Given that the contribution is fairly simple (i.e., the *standard* recommender systems task, but with a new model) it*s a shame that unusual data samples have to be taken.",
"This should be a case where it*s possible to report results against competing methods using *exactly* the same data they used, and exactly the same error measure, for the fairest comparison possible.",
"Without the above it*s hard to tell how much the performance improvements are really due to the method being better, versus the choice of datasets and the choice of loss functions."
] | {
"criticism": 6,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 9,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 5,
"suggestion_and_solution": 3,
"total": 12
} | 0.5 | 0 | 0.166667 | 0.75 | 0.083333 | 0.166667 | 0.416667 | 0.25 | 12 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.333333 | 2.317551 | 0.015782 |
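
Schema note: review_sentences appears to be the review text split naively at terminal punctuation, which explains artifacts like the fragment ending in "especially w.r.t." in the record above. A plausible re-split in Python (the actual segmenter used for the export is unknown; this is an assumption that merely reproduces the visible breaks):

import re

def split_review(review):
    # Naive split after ., ! or ? followed by whitespace; abbreviations such
    # as "w.r.t." therefore end a fragment, matching the record above.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", review) if s.strip()]

print(split_review("The responses to the pre-review questions are not strong; "
                   "especially w.r.t. the question about dataset density.")[0])
# -> "The responses to the pre-review questions are not strong; especially w.r.t."
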
Neural Causal Regularization under the Independence of Mechanisms Assumption | Neural networks provide a powerful framework for learning the association between input and response variables and making accurate predictions. However, in many applications such as healthcare, it is important to identify causal relationships between the inputs and the response variables to be able to change the response variables by intervention on the inputs. In pursuit of models whose predictive power comes maximally from causal variables, we propose a novel causal regularizer based on the independence of mechanisms assumption. We utilize the causal regularizer to steer deep neural network architectures towards causally-interpretable solutions. We perform a large-scale analysis of electronic health records. Employing expert's judgment as the causal ground-truth, we show that our causally-regularized algorithm outperforms its L1-regularized equivalence both in predictive performance as well as causal relevance. Finally, we show that the proposed causal regularizer can be used together with representation learning algorithms to yield up to 20% improvement in the causality score of the generated hypotheses. | This paper proposes to use a causality score to weight a sparsity regularizer. In that way, selected variables trade off between being causal and discriminative. The framework is primarily evaluated on a proprietary health dataset. While the dataset does give a good motivation to the problem setting, the paper falls a bit short for ICLR due to the lack of additional controlled experiments, relatively straightforward methodology (given the approach of Chalupka et al., arXiv Preprint, 2016, which is a more interesting paper from a technical perspective), and paucity of theoretical motivation. At the core of this paper, the approach is effectively to weight a sparsity regularizer so that 'causal' variables (as determined by a separate objective) are more likely to be selected. This is generally a good idea, but we do not get a proper validation of this from the experiments as ground truth is absent. A theorem on identifiability of causal+discriminative variables from a data sample combined with adequate synthetic experiments would have probably been sufficient, for example, to push the paper towards accept from a technical perspective, but as it is, it is lacking in insight and reproducibility. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. 
I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNeural Causal Regularization under the Independence of Mechanisms Assumption\nNeural networks provide a powerful framework for learning the association between input and response variables and making accurate predictions. However, in many applications such as healthcare, it is important to identify causal relationships between the inputs and the response variables to be able to change the response variables by intervention on the inputs. In pursuit of models whose predictive power comes maximally from causal variables, we propose a novel causal regularizer based on the independence of mechanisms assumption. We utilize the causal regularizer to steer deep neural network architectures towards causally-interpretable solutions. We perform a large-scale analysis of electronic health records. Employing expert*s judgment as the causal ground-truth, we show that our causally-regularized algorithm outperforms its L1-regularized equivalence both in predictive performance as well as causal relevance. Finally, we show that the proposed causal regularizer can be used together with representation learning algorithms to yield up to 20% improvement in the causality score of the generated hypotheses.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes to use a causality score to weight a sparsity regularizer. In that way, selected variables trade off between being causal and discriminative. The framework is primarily evaluated on a proprietary health dataset. While the dataset does give a good motivation to the problem setting, the paper falls a bit short for ICLR due to the lack of additional controlled experiments, relatively straightforward methodology (given the approach of Chalupka et al., arXiv Preprint, 2016, which is a more interesting paper from a technical perspective), and paucity of theoretical motivation. At the core of this paper, the approach is effectively to weight a sparsity regularizer so that *causal* variables (as determined by a separate objective) are more likely to be selected. This is generally a good idea, but we do not get a proper validation of this from the experiments as ground truth is absent. A theorem on identifiability of causal+discriminative variables from a data sample combined with adequate synthetic experiments would have probably been sufficient, for example, to push the paper towards accept from a technical perspective, but as it is, it is lacking in insight and reproducibility.",
"role": "assistant"
}
]
| [
"This paper proposes to use a causality score to weight a sparsity regularizer.",
"In that way, selected variables trade off between being causal and discriminative.",
"The framework is primarily evaluated on a proprietary health dataset.",
"While the dataset does give a good motivation to the problem setting, the paper falls a bit short for ICLR due to the lack of additional controlled experiments, relatively straightforward methodology (given the approach of Chalupka et al., arXiv Preprint, 2016, which is a more interesting paper from a technical perspective), and paucity of theoretical motivation.",
"At the core of this paper, the approach is effectively to weight a sparsity regularizer so that *causal* variables (as determined by a separate objective) are more likely to be selected.",
"This is generally a good idea, but we do not get a proper validation of this from the experiments as ground truth is absent.",
"A theorem on identifiability of causal+discriminative variables from a data sample combined with adequate synthetic experiments would have probably been sufficient, for example, to push the paper towards accept from a technical perspective, but as it is, it is lacking in insight and reproducibility."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 7,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 3,
"suggestion_and_solution": 1,
"total": 7
} | 0.428571 | 0 | 0.142857 | 1 | 0.285714 | 0 | 0.428571 | 0.142857 | 7 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.428571 | 1.860404 | 0.568167 |
Neural Causal Regularization under the Independence of Mechanisms Assumption | Neural networks provide a powerful framework for learning the association between input and response variables and making accurate predictions. However, in many applications such as healthcare, it is important to identify causal relationships between the inputs and the response variables to be able to change the response variables by intervention on the inputs. In pursuit of models whose predictive power comes maximally from causal variables, we propose a novel causal regularizer based on the independence of mechanisms assumption. We utilize the causal regularizer to steer deep neural network architectures towards causally-interpretable solutions. We perform a large-scale analysis of electronic health records. Employing expert's judgment as the causal ground-truth, we show that our causally-regularized algorithm outperforms its L1-regularized equivalence both in predictive performance as well as causal relevance. Finally, we show that the proposed causal regularizer can be used together with representation learning algorithms to yield up to 20% improvement in the causality score of the generated hypotheses. | The authors extend their method of causal discovery (Chalupka et al 2016) to include assumptions about sparsity via regularization. They apply this extension to an interesting private dataset from Sutter Health. While an interesting direction, I found the presentation somewhat confused, the methodological novelty smaller than the bulk of ICLR works, and the central results (or perhaps data; see below) inadequate to address questions of causality. First, I found the presentation somewhat unclear. The paper at some points seems to be entirely focused on healthcare data, at other points it uses it as a motivating example, and at other points it is neglected. Also, algorithm 1 seems unreferenced, and I'm not entirely sure why it is needed. Figure 2 is not needed for this community. The key methodological advance in this work appears in section 2.1 (Causal regularizer), but it is introduced amidst toy examples and without clear terminology or standard methodological assumptions/build-up. In Section 3.1 (bottom of first paragraph), key data and results seem to be relegated to the appendices. Thus overall the paper read rather haphazardly. Finally, there seems to be an assumption throughout of fairly intimate familiarity with the Chalupka preprint, which I think should be avoided. This paper should stand alone. Second, while the technical contributions/novelty are not a focus of the paper's presentation, I am concerned by the lack of methodological advance. Essentially a regularization objective is added to the previous method, which of itself is not a bad idea, but I can't point to a technical novelty in the paper that the community cannot do without. Third, fundamentally I don't see how the experiments address the central question of causality; they show regularization behaving as expected (or rather, influencing weights as expected), but I don't think we really have any meaningful quantitative evidence that causality has been learned. This was briefly discussed (see 'ground truth causality?' and the response below). 
I appreciate the technical challenges/impossibility of having such a dataset, but if that's the case, then I think this work is premature, since there is no way to really validate. Overall it's clearly a sincere effort, but I found it wanting in terms of a few critical areas. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNeural Causal Regularization under the Independence of Mechanisms Assumption\nNeural networks provide a powerful framework for learning the association between input and response variables and making accurate predictions. However, in many applications such as healthcare, it is important to identify causal relationships between the inputs and the response variables to be able to change the response variables by intervention on the inputs. In pursuit of models whose predictive power comes maximally from causal variables, we propose a novel causal regularizer based on the independence of mechanisms assumption. We utilize the causal regularizer to steer deep neural network architectures towards causally-interpretable solutions. We perform a large-scale analysis of electronic health records. Employing expert*s judgment as the causal ground-truth, we show that our causally-regularized algorithm outperforms its L1-regularized equivalence both in predictive performance as well as causal relevance. Finally, we show that the proposed causal regularizer can be used together with representation learning algorithms to yield up to 20% improvement in the causality score of the generated hypotheses.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors extend their method of causal discovery (Chalupka et al 2016) to include assumptions about sparsity via regularization. They apply this extension to an interesting private dataset from Sutter Health. While an interesting direction, I found the presentation somewhat confused, the methodological novelty smaller than the bulk of ICLR works, and the central results (or perhaps data; see below) inadequate to address questions of causality. First, I found the presentation somewhat unclear. The paper at some points seems to be entirely focused on healthcare data, at other points it uses it as a motivating example, and at other points it is neglected. Also, algorithm 1 seems unreferenced, and I*m not entirely sure why it is needed. Figure 2 is not needed for this community. The key methodological advance in this work appears in section 2.1 (Causal regularizer), but it is introduced amidst toy examples and without clear terminology or standard methodological assumptions/build-up. In Section 3.1 (bottom of first paragraph), key data and results seem to be relegated to the appendices. Thus overall the paper read rather haphazardly. Finally, there seems to be an assumption throughout of fairly intimate familiarity with the Cholupka preprint, which i think should be avoided. This paper should stand alone. Second, while the technical contributions/novelty are not a focus of the paper*s presentation, I am concerned by the lack of methodological advance. Essentially a regularization objective is added to the previous method, which of itself is not a bad idea, but I can*t point to a technical novelty in the paper that the community can not do without. Third, fundamentally i don*t see how the experiments address the central question of causality; they show regularization behaving as expected (or rather, influencing weights as expected), but I don*t think we really have any meaningful quantitative evidence that causality has been learned. This was briefly discussed (see *ground truth causality?* and the response below). 
I appreciate the technical challenges/impossibility of having such a dataset, but if that's the case, then I think this work is premature, since there is no way to really validate. Overall it's clearly a sincere effort, but I found it wanting in terms of a few critical areas.",
"role": "assistant"
}
]
| [
"The authors extend their method of causal discovery (Chalupka et al 2016) to include assumptions about sparsity via regularization.",
"They apply this extension to an interesting private dataset from Sutter Health.",
"While an interesting direction, I found the presentation somewhat confused, the methodological novelty smaller than the bulk of ICLR works, and the central results (or perhaps data; see below) inadequate to address questions of causality.",
"First, I found the presentation somewhat unclear.",
"The paper at some points seems to be entirely focused on healthcare data, at other points it uses it as a motivating example, and at other points it is neglected.",
"Also, algorithm 1 seems unreferenced, and I*m not entirely sure why it is needed.",
"Figure 2 is not needed for this community.",
"The key methodological advance in this work appears in section 2.1 (Causal regularizer), but it is introduced amidst toy examples and without clear terminology or standard methodological assumptions/build-up.",
"In Section 3.1 (bottom of first paragraph), key data and results seem to be relegated to the appendices.",
"Thus overall the paper read rather haphazardly.",
"Finally, there seems to be an assumption throughout of fairly intimate familiarity with the Cholupka preprint, which i think should be avoided.",
"This paper should stand alone.",
"Second, while the technical contributions/novelty are not a focus of the paper*s presentation, I am concerned by the lack of methodological advance.",
"Essentially a regularization objective is added to the previous method, which of itself is not a bad idea, but I can*t point to a technical novelty in the paper that the community can not do without.",
"Third, fundamentally i don*t see how the experiments address the central question of causality; they show regularization behaving as expected (or rather, influencing weights as expected), but I don*t think we really have any meaningful quantitative evidence that causality has been learned.",
"This was briefly discussed (see *ground truth causality?",
"* and the response below).",
"I appreciate the technical challenges/impossibility of having such a dataset, but if that*s the case, then I think this work is premature, since there is no way to really validate.",
"Overall it*s clearly a sincere effort, but I found it wanting in terms of a few critical areas."
] | {
"criticism": 11,
"example": 1,
"importance_and_relevance": 6,
"materials_and_methods": 11,
"praise": 4,
"presentation_and_reporting": 4,
"results_and_discussion": 5,
"suggestion_and_solution": 2,
"total": 19
} | 0.578947 | 0.052632 | 0.315789 | 0.578947 | 0.210526 | 0.210526 | 0.263158 | 0.105263 | 19 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.315789 | 1.747622 | 0.568167 |
L-SR1: A Second Order Optimization Method for Deep Learning | We describe L-SR1, a new second order method to train deep neural networks. Second order methods hold great promise for distributed training of deep networks. Unfortunately, they have not proven practical. Two significant barriers to their success are inappropriate handling of saddle points, and poor conditioning of the Hessian. L-SR1 is a practical second order method that addresses these concerns. We provide experimental results showing that L-SR1 performs at least as well as Nesterov's Accelerated Gradient Descent, on the MNIST and CIFAR10 datasets. For the CIFAR10 dataset, we see competitive performance on shallow networks like LeNet5, as well as on deeper networks like residual networks. Furthermore, we perform an experimental analysis of L-SR1 with respect to its hyper-parameters to gain greater intuition. Finally, we outline the potential usefulness of L-SR1 in distributed training of deep neural networks. | It is an interesting idea to go after saddle points in the optimization with an SR1 update and a good start in experiments, but missing important comparisons to recent 2nd order optimizations such as Adam, other Hessian free methods (Martens 2012), Pearlmutter fast exact multiplication by the Hessian. From the mnist/cifar curves it is not really showing an advantage to AdaDelta/Nag (although this is stated), and much more experimentation is needed to make a claim about mini-batch insensitivity to performance, can you show error rates on a larger scale task? | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nL-SR1: A Second Order Optimization Method for Deep Learning\nWe describe L-SR1, a new second order method to train deep neural networks. Second order methods hold great promise for distributed training of deep networks. Unfortunately, they have not proven practical. Two significant barriers to their success are inappropriate handling of saddle points, and poor conditioning of the Hessian. L-SR1 is a practical second order method that addresses these concerns. We provide experimental results showing that L-SR1 performs at least as well as Nesterov*s Accelerated Gradient Descent, on the MNIST and CIFAR10 datasets. For the CIFAR10 dataset, we see competitive performance on shallow networks like LeNet5, as well as on deeper networks like residual networks. Furthermore, we perform an experimental analysis of L-SR1 with respect to its hyper-parameters to gain greater intuition. Finally, we outline the potential usefulness of L-SR1 in distributed training of deep neural networks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nIt is an interesting idea to go after saddle points in the optimization with an SR1 update and a good start in experiments, but missing important comparisons to recent 2nd order optimizations such as Adam, other Hessian free methods (Martens 2012), Pearlmutter fast exact multiplication by the Hessian. From the mnist/cifar curves it is not really showing an advantage to AdaDelta/Nag (although this is stated), and much more experimentation is needed to make a claim about mini-batch insensitivity to performance, can you show error rates on a larger scale task?",
"role": "assistant"
}
]
| [
"It is an interesting idea to go after saddle points in the optimization with an SR1 update and a good start in experiments, but missing important comparisons to recent 2nd order optimizations such as Adam, other Hessian free methods (Martens 2012), Pearlmutter fast exact multiplication by the Hessian.",
"From the mnist/cifar curves it is not really showing an advantage to AdaDelta/Nag (although this is stated), and much more experimentation is needed to make a claim about mini-batch insensitivity to performance, can you show error rates on a larger scale task?"
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 2,
"praise": 1,
"presentation_and_reporting": 0,
"results_and_discussion": 0,
"suggestion_and_solution": 1,
"total": 2
} | 0.5 | 0 | 0.5 | 1 | 0.5 | 0 | 0 | 0.5 | 2 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 3 | 1.090326 | 1.909674 |
Leveraging Asynchronicity in Gradient Descent for Scalable Deep Learning | In this paper, we present multiple approaches for improving the performance of gradient descent when utilizing multiple compute resources. The proposed approaches span a solution space ranging from equivalence to running on a single compute device to delaying gradient updates a fixed number of times. We present a new approach, asynchronous layer-wise gradient descent that maximizes overlap of layer-wise backpropagation (computation) with gradient synchronization (communication). This approach provides maximal theoretical equivalence to the de facto gradient descent algorithm, requires limited asynchronicity across multiple iterations of gradient descent, theoretically improves overall speedup, while minimizing the additional space requirements for asynchronicity. We implement all of our proposed approaches using Caffe – a high performance Deep Learning library – and evaluate it on both an Intel Sandy Bridge cluster connected with InfiniBand as well as an NVIDIA DGX-1 connected with NVLink. The evaluations are performed on a set of well known workloads including AlexNet and GoogleNet on the ImageNet dataset. Our evaluation of these neural network topologies indicates asynchronous gradient descent has a speedup of up to 1.7x compared to synchronous. | This paper is relatively difficult to parse. Much of the exposition of the proposed algorithm could be better presented using pseudo-code describing the compute flow, or a diagram describing exactly how the updates take place. As it stands, I'm not sure I understand everything. I would also have liked to see exactly described what the various labels in Fig 1 correspond to ('SGD task-wise, 1 comm'? Did you mean layer-wise?). There are a couple of major issues with the evaluation: first, no comparison is reported against baseline async methods such as using a parameter server. Second, using AlexNet as a benchmark is not informative at all. AlexNet looks very different from any SOTA image recognition model, and in particular it has many fewer layers, which is especially relevant to the discussion in 6.3. It also uses lots of fully-connected layers which affect the compute/communication ratios in ways that are not relevant to most interesting architectures today. | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLeveraging Asynchronicity in Gradient Descent for Scalable Deep Learning\nIn this paper, we present multiple approaches for improving the performance of gradient descent when utilizing mutiple compute resources. The proposed approaches span a solution space ranging from equivalence to running on a single compute device to delaying gradient updates a fixed number of times. We present a new approach, asynchronous layer-wise gradient descent that maximizes overlap of layer-wise backpropagation (computation) with gradient synchronization (communication). This approach provides maximal theoretical equivalence to the de facto gradient descent algorithm, requires limited asynchronicity across multiple iterations of gradient descent, theoretically improves overall speedup, while minimizing the additional space requirements for asynchronicity. We implement all of our proposed approaches using Caffe – a high performance Deep Learning library – and evaluate it on both an Intel Sandy Bridge cluster connected with InfiniBand as well as an NVIDIA DGX-1 connected with NVLink. The evaluations are performed on a set of well known workloads including AlexNet and GoogleNet on the ImageNet dataset. Our evaluation of these neural network topologies indicates asynchronous gradient descent has a speedup of up to 1.7x compared to synchronous.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper is relatively difficult to parse. Much of the exposition of the proposed algorithm could be better presented using pseudo-code describing the compute flow, or a diagram describing exactly how the updates take place. As it stands, I*m not sure I understand everything. I would also have liked to see exactly described what the various labels in Fig 1 correspond to (*SGD task-wise, 1 comm*? Did you mean layer-wise?). There are a couple of major issues with the evaluation: first, no comparison is reported against baseline async methods such as using a parameter server. Second, using AlexNet as a benchmark is not informative at all. AlexNet looks very different from any SOTA image recognition model, and in particular it has many fewer layers, which is especially relevant to the discussion in 6.3. It also uses lots of fully-connected layers which affect the compute/communication ratios in ways that are not relevant to most interesting architectures today.",
"role": "assistant"
}
]
| [
"This paper is relatively difficult to parse.",
"Much of the exposition of the proposed algorithm could be better presented using pseudo-code describing the compute flow, or a diagram describing exactly how the updates take place.",
"As it stands, I*m not sure I understand everything.",
"I would also have liked to see exactly described what the various labels in Fig 1 correspond to (*SGD task-wise, 1 comm*?",
"Did you mean layer-wise?).",
"There are a couple of major issues with the evaluation: first, no comparison is reported against baseline async methods such as using a parameter server.",
"Second, using AlexNet as a benchmark is not informative at all.",
"AlexNet looks very different from any SOTA image recognition model, and in particular it has many fewer layers, which is especially relevant to the discussion in 6.3.",
"It also uses lots of fully-connected layers which affect the compute/communication ratios in ways that are not relevant to most interesting architectures today."
] | {
"criticism": 4,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 5,
"praise": 0,
"presentation_and_reporting": 4,
"results_and_discussion": 3,
"suggestion_and_solution": 2,
"total": 9
} | 0.444444 | 0.111111 | 0.111111 | 0.555556 | 0 | 0.444444 | 0.333333 | 0.222222 | 9 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 2.222222 | 1.969703 | 0.252519 |
Dynamic Steerable Frame Networks | Filters in a convolutional network are typically parametrized in a pixel basis. As an orthonormal basis, pixels may represent any arbitrary vector in Rn. In this paper, we relax this orthonormality requirement and extend the set of viable bases to the generalized notion of frames. When applying suitable frame bases to ResNets on Cifar-10+ we demonstrate improved error rates by substitution only. By exploiting the transformation properties of such generalized bases, we arrive at steerable frames, that allow to continuously transform CNN filters under arbitrary Lie-groups. Further allowing us to locally separate pose from canonical appearance. We implement this in the Dynamic Steerable Frame Network, that dynamically estimates the transformations of filters, conditioned on its input. The derived method presents a hybrid of Dynamic Filter Networks and Spatial Transformer Networks that can be implemented in any convolutional architecture, as we illustrate in two examples. First, we illustrate estimation properties of steerable frames with a Dynamic Steerable Frame Network, compared to a Dynamic Filter Network on the task of edge detection, where we show clear advantages of the derived steerable frames. Lastly, we insert the Dynamic Steerable Frame Network as a module in a convolutional LSTM on the task of limited-data hand-gesture recognition from video and illustrate effective dynamic regularization and show clear advantages over Spatial Transformer Networks. In this paper, we have laid out the foundations of Frame-based convolutional networks and Dynamic Steerable Frame Networks while illustrating their advantages for continuously transforming features and data-efficient learning. | This work applies steerable frames for various tasks where convolutional neural networks with location invariant operators are traditionally applied. Authors provide a detailed overview of steerable frames followed with an experimental section which applies dynamic steerable network to small machine learning problems where the steerability is conceptually useful. Even though the evaluation is performed only on a few small tasks, the reason why more tasks were not evaluated is that piece-wise pose invariance is needed only for a subset of tasks. The fact that simply using overcomplete bases as a sort of 'feature pre-processing' improves the results for already highly optimized ResNet and DenseNet architectures is quite an interesting achievement. For the edge detection, a relatively hard baseline is selected - the Dynamic Filter Networks, which already attempts to achieve position invariant filters. The fact that DSFN improves the performance on this task verifies that regressing the parametrization of the steerable filters yields better results than regressing the filters directly. In the last experiment, the authors apply the network to video classification using LSTMs and they show that the improved performance is not due to increased capacity of the network. In general, it is quite interesting work. Even though it does not offer ground-breaking results (mainly in a sense of not performing experiments on larger tasks), it is theoretically interesting and shows promising results. There are a few minor issues and suggestions related to the paper: * For the LSTM experiment, in order to be more exact, it would be useful to include information about the total number of parameters, as the network which estimates the pose also increases the number of parameters. 
* Would it be possible to provide more details about how the back-propagation is done through the steerable filters? * For the Edge Detection experiment, it would be useful to provide results for some standard baseline - e.g. CNN with a similar number of parameters. Simply to see how useful it is to have location-variant filters for this task. * The last sentence in the second paragraph on page 1 is missing a verb. Also it is maybe unnecessary. * The hyphenation for ConvNet is incorrect in multiple places (probably `\hyphenation{Conv-Net}` would fix it). | iclr2017 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDynamic Steerable Frame Networks\nFilters in a convolutional network are typically parametrized in a pixel basis. As an orthonormal basis, pixels may represent any arbitrary vector in Rn. In this paper, we relax this orthonormality requirement and extend the set of viable bases to the generalized notion of frames. When applying suitable frame bases to ResNets on Cifar-10+ we demonstrate improved error rates by substitution only. By exploiting the transformation properties of such generalized bases, we arrive at steerable frames, that allow to continuously transform CNN filters under arbitrary Lie-groups. Further allowing us to locally separate pose from canonical appearance. We implement this in the Dynamic Steerable Frame Network, that dynamically estimates the transformations of filters, conditioned on its input. The derived method presents a hybrid of Dynamic Filter Networks and Spatial Transformer Networks that can be implemented in any convolutional architecture, as we illustrate in two examples. First, we illustrate estimation properties of steerable frames with a Dynamic Steerable Frame Network, compared to a Dynamic Filter Network on the task of edge detection, where we show clear advantages of the derived steerable frames. Lastly, we insert the Dynamic Steerable Frame Network as a module in a convolutional LSTM on the task of limited-data hand-gesture recognition from video and illustrate effective dynamic regularization and show clear advantages over Spatial Transformer Networks. In this paper, we have laid out the foundations of Frame-based convolutional networks and Dynamic Steerable Frame Networks while illustrating their advantages for continuously transforming features and data-efficient learning.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis works applies steerable frames for various tasks where convolutional neural networks with location invariant operators are traditionally applied. Authors provide a detailed overview of steerable frames followed with an experimental section which applies dynamic steerable network to small machine learning problems where the steerability is conceptually useful. Even though the evaluation is performed only on few small tasks, the reason why more tasks were not evaluated is that piece-wise pose invariance is needed only for a subset of tasks. The fact, that simply using overcomplete bases as a sort of *feature pre-processing* improves the results for already highly optimized ResNet and DenseNet architectures is quite interesting achievement. For the edge detection, a relatively hard baseline is selected - the Dynamic Filter Networks, which already attempts to achieve position invariant filters. The fact that DSFN improves the performance on this task verifies that regressing the parametrization of the steerable filters yields better results than regressing the filters directly. In the last experiment authors apply the network to video classification using LSTMs and they show that the improved performance is not due to increased capacity of the network. In general, it is quite interesting work. Even though it does not offer ground-breaking results (mainly in a sense of not performing experiments on larger tasks), it is theoretically interesting and shows promising results. There are few minor issues and suggestions related to the paper: * For the LSTM experiment, in order to be more exact, it would be useful to include information about total number of parameters, as the network which estimates the pose also increases the number of parameters. * Would it be possible to provide more details about how the back-propagation is done through the steerable filters? * For the Edge Detection experiment, it would be useful to provide results for some standard baseline - e.g. CNN with a similar number of parameters. Simply to see how useful it is to have location-variant filters for this task. 
* The last sentence in the second paragraph on page 1 is missing a verb. Also it is maybe unnecessary. * The hyphenation for ConvNet is incorrect in multiple places (probably `\hyphenation{Conv-Net}` would fix it).",
"role": "assistant"
}
]
| [
"This works applies steerable frames for various tasks where convolutional neural networks with location invariant operators are traditionally applied.",
"Authors provide a detailed overview of steerable frames followed with an experimental section which applies dynamic steerable network to small machine learning problems where the steerability is conceptually useful.",
"Even though the evaluation is performed only on few small tasks, the reason why more tasks were not evaluated is that piece-wise pose invariance is needed only for a subset of tasks.",
"The fact, that simply using overcomplete bases as a sort of *feature pre-processing* improves the results for already highly optimized ResNet and DenseNet architectures is quite interesting achievement.",
"For the edge detection, a relatively hard baseline is selected - the Dynamic Filter Networks, which already attempts to achieve position invariant filters.",
"The fact that DSFN improves the performance on this task verifies that regressing the parametrization of the steerable filters yields better results than regressing the filters directly.",
"In the last experiment authors apply the network to video classification using LSTMs and they show that the improved performance is not due to increased capacity of the network.",
"In general, it is quite interesting work.",
"Even though it does not offer ground-breaking results (mainly in a sense of not performing experiments on larger tasks), it is theoretically interesting and shows promising results.",
"There are few minor issues and suggestions related to the paper: * For the LSTM experiment, in order to be more exact, it would be useful to include information about total number of parameters, as the network which estimates the pose also increases the number of parameters.",
"* Would it be possible to provide more details about how the back-propagation is done through the steerable filters?",
"* For the Edge Detection experiment, it would be useful to provide results for some standard baseline - e.g.",
"CNN with a similar number of parameters.",
"Simply to see how useful it is to have location-variant filters for this task.",
"* The last sentence in second paragraph on page 1 is missing a verb.",
"Also it is maybe unnecessary.",
"* The hyphenation for ConvNet is incorrect on multiple places (probably `hyphenation{Conv-Net}` would fix it)."
] | {
"criticism": 3,
"example": 1,
"importance_and_relevance": 3,
"materials_and_methods": 13,
"praise": 3,
"presentation_and_reporting": 5,
"results_and_discussion": 5,
"suggestion_and_solution": 4,
"total": 17
} | 0.176471 | 0.058824 | 0.176471 | 0.764706 | 0.176471 | 0.294118 | 0.294118 | 0.235294 | 17 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.176471 | 1.923952 | 0.252519 |