Dataset schema (column name, dtype, and observed range, reconstructed from the dataset-viewer header):

- title: string, 2 to 153 chars
- abstract: string, 0 to 2.15k chars
- review: string, 97 to 8.43k chars
- source: string, 4 classes (e.g. iclr2017)
- review_src: string, 1 class (OpenReview)
- thinking_trace: string, 1 class (a single value shared by every row)
- conversations: list, always length 2 (one user turn, one assistant turn)
- review_sentences: sequence, length 1 to 61
- criteria_count: dict of per-criterion sentence counts (criticism, example, importance_and_relevance, materials_and_methods, praise, presentation_and_reporting, results_and_discussion, suggestion_and_solution) plus an integer total
- criticism, example, importance_and_relevance, materials_and_methods, praise, presentation_and_reporting, results_and_discussion, suggestion_and_solution: float64, 0 to 1 each (evidently the per-criterion count divided by total, i.e. the fraction of review sentences tagged with that criterion; this checks out on every row shown below)
- total: int64, 1 to 61
- has_criticism, has_example, has_importance_and_relevance, has_materials_and_methods, has_praise, has_presentation_and_reporting, has_results_and_discussion, has_suggestion_and_solution: int64, 0 or 1 (1 iff the corresponding count is nonzero)
- reward_value: float64, 2.13 to 6
- reward_value_length_adjusted: float64, -34.13 to 3.73
- length_penalty: float64, 0 to 36.4
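For orientation, a minimal sketch of loading and inspecting this dataset with the Hugging Face `datasets` library. The hub path below is a placeholder, since the actual dataset identifier is not shown in this extract:

```python
from datasets import load_dataset

# Hypothetical hub path -- substitute the real dataset identifier.
ds = load_dataset("your-org/peer-review-rewards", split="train")

print(ds.features)            # schema: title, abstract, review, criteria_count, ...
row = ds[0]
print(row["title"])           # e.g. "Adding Gradient Noise Improves Learning ..."
print(row["criteria_count"])  # per-criterion sentence counts, e.g. {"criticism": 2, ..., "total": 9}
```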
Sample rows (six are shown below; the sixth is cut off mid-row in this extract):

Row 1
title: Adding Gradient Noise Improves Learning for Very Deep Networks
abstract: Deep feedforward and recurrent networks have achieved impressive results in many perception and language processing applications. Recently, more complex architectures such as Neural Turing Machines and Memory Networks have been proposed for tasks including question answering and general computation, creating a new set of optimization challenges. In this paper, we explore the low-overhead and easy-to-implement optimization technique of adding annealed Gaussian noise to the gradient, which we find surprisingly effective when training these very deep architectures. Unlike classical weight noise, gradient noise injection is complementary to advanced stochastic optimization algorithms such as Adam and AdaGrad. The technique not only helps to avoid overfitting, but also can result in lower training loss. We see consistent improvements in performance across an array of complex models, including state-of-the-art deep networks for question answering and algorithm learning. We observe that this optimization strategy allows a fully-connected 20-layer deep network to escape a bad initialization with standard stochastic gradient descent. We encourage further application of this technique to additional modern neural architectures.
review: The authors consider a simple optimization technique consisting of adding gradient noise with a specific schedule. They test their method on a number of recently proposed neural networks for simulating computer logic (end-to-end memory network, neural programmer, neural random access machines). On these networks, the question of optimization has so far not been studied as extensively as for more standard networks. A study specific to this class of models is therefore welcome. Results consistently show better optimization properties from adding noise in the training procedure. One issue with the paper is that it is not clear whether the proposed optimization strategy permits to learn actually good models, or simply better than those that do not use noise. A comparison to results obtained in the literature would be desirable. For example, in the MNIST experiments of Section 4.1, the optimization procedure reaches in the most favorable scenario an average accuracy level of approximately 92%, which is still far from having actually learned an interesting problem representation (a linear model would probably reach similar accuracy). I understand that the architecture is specially designed to be difficult to optimize (20 layers of 50 HUs), but it would have been more interesting to consider a scenario where depth is actually beneficial for solving the problem.
source: iclr2017
review_src: OpenReview
thinking_trace: <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think> (this same trace, quoted verbatim as it appears in the data, is reused in every row and inside every conversations record)
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAdding Gradient Noise Improves Learning for Very Deep Networks\nDeep feedforward and recurrent networks have achieved impressive results in many perception and language processing applications. Recently, more complex architectures such as Neural Turing Machines and Memory Networks have been proposed for tasks including question answering and general computation, creating a new set of optimization challenges. In this paper, we explore the low-overhead and easy-to-implement optimization technique of adding annealed Gaussian noise to the gradient, which we find surprisingly effective when training these very deep architectures. Unlike classical weight noise, gradient noise injection is complementary to advanced stochastic optimization algorithms such as Adam and AdaGrad. The technique not only helps to avoid overfitting, but also can result in lower training loss. We see consistent improvements in performance across an array of complex models, including state-of-the-art deep networks for question answering and algorithm learning. We observe that this optimization strategy allows a fully-connected 20-layer deep network to escape a bad initialization with standard stochastic gradient descent. We encourage further application of this technique to additional modern neural architectures.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors consider a simple optimization technique consisting of adding gradient noise with a specific schedule. They test their method on a number of recently proposed neural networks for simulating computer logic (end-to-end memory network, neural programmer, neural random access machines). On these networks, the question of optimization has so far not been studied as extensively as for more standard networks. A study specific to this class of models is therefore welcome. Results consistently show better optimization properties from adding noise in the training procedure. 
One issue with the paper is that it is not clear whether the proposed optimization strategy permits to learn actually good models, or simply better than those that do not use noise. A comparison to results obtained in the literature would be desirable. For example, in the MNIST experiments of Section 4.1, the optimization procedure reaches in the most favorable scenario an average accuracy level of approximately 92%, which is still far from having actually learned an interesting problem representation (a linear model would probably reach similar accuracy). I understand that the architecture is specially designed to be difficult to optimize (20 layers of 50 HUs), but it would have been more interesting to consider a scenario where depth is actually beneficial for solving the problem.", "role": "assistant" } ]
[ "The authors consider a simple optimization technique consisting of adding gradient noise with a specific schedule.", "They test their method on a number of recently proposed neural networks for simulating computer logic (end-to-end memory network, neural programmer, neural random access machines).", "On these networks, the question of optimization has so far not been studied as extensively as for more standard networks.", "A study specific to this class of models is therefore welcome.", "Results consistently show better optimization properties from adding noise in the training procedure.", "One issue with the paper is that it is not clear whether the proposed optimization strategy permits to learn actually good models, or simply better than those that do not use noise.", "A comparison to results obtained in the literature would be desirable.", "For example, in the MNIST experiments of Section 4.1, the optimization procedure reaches in the most favorable scenario an average accuracy level of approximately 92%, which is still far from having actually learned an interesting problem representation (a linear model would probably reach similar accuracy).", "I understand that the architecture is specially designed to be difficult to optimize (20 layers of 50 HUs), but it would have been more interesting to consider a scenario where depth is actually beneficial for solving the problem." ]
{ "criticism": 2, "example": 1, "importance_and_relevance": 2, "materials_and_methods": 8, "praise": 2, "presentation_and_reporting": 0, "results_and_discussion": 7, "suggestion_and_solution": 3, "total": 9 }
0.222222
0.111111
0.222222
0.888889
0.222222
0
0.777778
0.333333
9
1
1
1
1
1
0
1
1
2.777778
2.525259
0.252519
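The scalar columns are mechanically derivable from criteria_count, and the arithmetic checks out on every row shown here: each fraction is count/total (2/9 = 0.222222), each has_* flag is 1 iff the count is nonzero, reward_value is the sum of the eight fractions (2.777778 for this row), and reward_value_length_adjusted is reward_value minus length_penalty (2.777778 - 0.252519 = 2.525259). A minimal sketch of the reconstruction; how length_penalty itself is computed is not recoverable from this extract:

```python
# Reconstruct the derived columns of row 1 from its criteria_count dict.
counts = {"criticism": 2, "example": 1, "importance_and_relevance": 2,
          "materials_and_methods": 8, "praise": 2,
          "presentation_and_reporting": 0, "results_and_discussion": 7,
          "suggestion_and_solution": 3}
total = 9

fractions = {k: v / total for k, v in counts.items()}        # criticism -> 0.222222, ...
flags = {f"has_{k}": int(v > 0) for k, v in counts.items()}  # has_presentation_and_reporting -> 0

reward_value = sum(fractions.values())                       # 2.777778
length_penalty = 0.252519                                    # read from the row; formula unknown
reward_value_length_adjusted = reward_value - length_penalty  # 2.525259
```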
Row 2
title: Adding Gradient Noise Improves Learning for Very Deep Networks
abstract: (identical to row 1; this is a second review of the same paper)
review: The authors propose to add noise to the gradients computed while optimizing deep neural networks with stochastic gradient based methods. They show results multiple data sets which indicate that the method can counteract bad parameter initialization and that it can be especially beneficial for training more complicated architectures. The method is tested on a multitude of different tasks and architectures. The results would be more convincing if they would be accompanied by confidence intervals but I understand that some of the experiments must have taken very long to run. I like that the results include both situations in which the gradient noise helps a lot and situations in which it doesn’t seem to add much to the other optimization or initialization tools employed. The quantity of the experiments and the variety of the models provide quite convincing evidence that the effect of the gradient noise generalizes to many settings. The results were not always that convincing. In Section 4.2, the method only helped significantly when a sub-optimal training scheme was used, for example. The results on MNIST are not very good compared to the state-of-the-art. Since the method is so simple, I was hoping to see more theoretical arguments for its usefulness. That said, the experimental investigations into the importance of the annealing procedure, the comparison with the effect of gradient stochasticity and the comparison with weight noise, provide some additional insight. The paper is well written and cites relevant prior work. The proposed method is described clearly and concisely, which is to be expected given its simplicity. The proposed idea is not very original. As the authors acknowledge, very similar algorithms have been used for training and it is pretty much identical to simulating Langevin dynamics but with the goal of finding a single optimum in mind rather than approximating an expected value. The work is the evaluation of an old tool in a new era where models have become bigger and more complex. Despite the lack of novelty of the method, I do think that the results are valuable. The method is so easy to implement and seems to be so useful for complicated model which are hard to initialize, that it is important for others in the field to know about it. I suspect many people will at least try the method. The variety of the architectures and tasks for which the method was useful suggests that many people may also add it to their repertoire of optimization tricks. Pros: * The idea is easy to implement. * The method is evaluated on a variety of tasks and for very different models. * Some interesting experiments which compare the method with similar approaches and investigate the importance of the annealing scheme. * The paper is well-written. Cons: * The idea is not very original. * There is no clear theoretical motivation of analysis. * Not all the results are convincing.
source: iclr2017
review_src: OpenReview
thinking_trace: (the single shared trace; see row 1)
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAdding Gradient Noise Improves Learning for Very Deep Networks\nDeep feedforward and recurrent networks have achieved impressive results in many perception and language processing applications. Recently, more complex architectures such as Neural Turing Machines and Memory Networks have been proposed for tasks including question answering and general computation, creating a new set of optimization challenges. In this paper, we explore the low-overhead and easy-to-implement optimization technique of adding annealed Gaussian noise to the gradient, which we find surprisingly effective when training these very deep architectures. Unlike classical weight noise, gradient noise injection is complementary to advanced stochastic optimization algorithms such as Adam and AdaGrad. The technique not only helps to avoid overfitting, but also can result in lower training loss. We see consistent improvements in performance across an array of complex models, including state-of-the-art deep networks for question answering and algorithm learning. We observe that this optimization strategy allows a fully-connected 20-layer deep network to escape a bad initialization with standard stochastic gradient descent. We encourage further application of this technique to additional modern neural architectures.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors propose to add noise to the gradients computed while optimizing deep neural networks with stochastic gradient based methods. They show results multiple data sets which indicate that the method can counteract bad parameter initialization and that it can be especially beneficial for training more complicated architectures. The method is tested on a multitude of different tasks and architectures. The results would be more convincing if they would be accompanied by confidence intervals but I understand that some of the experiments must have taken very long to run. 
I like that the results include both situations in which the gradient noise helps a lot and situations in which it doesn’t seem to add much to the other optimization or initialization tools employed. The quantity of the experiments and the variety of the models provide quite convincing evidence that the effect of the gradient noise generalizes to many settings. The results were not always that convincing. In Section 4.2, the method only helped significantly when a sub-optimal training scheme was used, for example. The results on MNIST are not very good compared to the state-of-the-art. Since the method is so simple, I was hoping to see more theoretical arguments for its usefulness. That said, the experimental investigations into the importance of the annealing procedure, the comparison with the effect of gradient stochasticity and the comparison with weight noise, provide some additional insight. The paper is well written and cites relevant prior work. The proposed method is described clearly and concisely, which is to be expected given its simplicity. The proposed idea is not very original. As the authors acknowledge, very similar algorithms have been used for training and it is pretty much identical to simulating Langevin dynamics but with the goal of finding a single optimum in mind rather than approximating an expected value. The work is the evaluation of an old tool in a new era where models have become bigger and more complex. Despite the lack of novelty of the method, I do think that the results are valuable. The method is so easy to implement and seems to be so useful for complicated model which are hard to initialize, that it is important for others in the field to know about it. I suspect many people will at least try the method. The variety of the architectures and tasks for which the method was useful suggests that many people may also add it to their repertoire of optimization tricks. Pros: * The idea is easy to implement. * The method is evaluated on a variety of tasks and for very different models. * Some interesting experiments which compare the method with similar approaches and investigate the importance of the annealing scheme. * The paper is well-written. Cons: * The idea is not very original. * There is no clear theoretical motivation of analysis. * Not all the results are convincing.", "role": "assistant" } ]
[ "The authors propose to add noise to the gradients computed while optimizing deep neural networks with stochastic gradient based methods.", "They show results multiple data sets which indicate that the method can counteract bad parameter initialization and that it can be especially beneficial for training more complicated architectures.", "The method is tested on a multitude of different tasks and architectures.", "The results would be more convincing if they would be accompanied by confidence intervals but I understand that some of the experiments must have taken very long to run.", "I like that the results include both situations in which the gradient noise helps a lot and situations in which it doesn’t seem to add much to the other optimization or initialization tools employed.", "The quantity of the experiments and the variety of the models provide quite convincing evidence that the effect of the gradient noise generalizes to many settings.", "The results were not always that convincing.", "In Section 4.2, the method only helped significantly when a sub-optimal training scheme was used, for example.", "The results on MNIST are not very good compared to the state-of-the-art.", "Since the method is so simple, I was hoping to see more theoretical arguments for its usefulness.", "That said, the experimental investigations into the importance of the annealing procedure, the comparison with the effect of gradient stochasticity and the comparison with weight noise, provide some additional insight.", "The paper is well written and cites relevant prior work.", "The proposed method is described clearly and concisely, which is to be expected given its simplicity.", "The proposed idea is not very original.", "As the authors acknowledge, very similar algorithms have been used for training and it is pretty much identical to simulating Langevin dynamics but with the goal of finding a single optimum in mind rather than approximating an expected value.", "The work is the evaluation of an old tool in a new era where models have become bigger and more complex.", "Despite the lack of novelty of the method, I do think that the results are valuable.", "The method is so easy to implement and seems to be so useful for complicated model which are hard to initialize, that it is important for others in the field to know about it.", "I suspect many people will at least try the method.", "The variety of the architectures and tasks for which the method was useful suggests that many people may also add it to their repertoire of optimization tricks.", "Pros: * The idea is easy to implement.", "* The method is evaluated on a variety of tasks and for very different models.", "* Some interesting experiments which compare the method with similar approaches and investigate the importance of the annealing scheme.", "* The paper is well-written.", "Cons: * The idea is not very original.", "* There is no clear theoretical motivation of analysis.", "* Not all the results are convincing." ]
{ "criticism": 6, "example": 1, "importance_and_relevance": 6, "materials_and_methods": 19, "praise": 9, "presentation_and_reporting": 3, "results_and_discussion": 11, "suggestion_and_solution": 4, "total": 27 }
0.222222
0.037037
0.222222
0.703704
0.333333
0.111111
0.407407
0.148148
27
1
1
1
1
1
1
1
1
2.185185
-0.908171
3.093356
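Rows 1 and 2 are two reviews of the same gradient-noise paper. For readers unfamiliar with the technique under review, here is a minimal sketch of annealed gradient-noise injection in PyTorch. The decaying-variance schedule and its constants are the ones commonly cited for this paper; treat them as assumptions, since neither appears in this extract:

```python
import torch

def add_gradient_noise(model, step, eta=0.3, gamma=0.55):
    """Add annealed Gaussian noise N(0, sigma_t^2) to every parameter gradient.

    Schedule: sigma_t^2 = eta / (1 + step) ** gamma. The schedule form and
    the constants (eta in {0.01, 0.3, 1.0}, gamma = 0.55) are assumptions
    based on common citations of the paper, not taken from the abstract.
    """
    sigma = (eta / (1 + step) ** gamma) ** 0.5
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad.add_(torch.randn_like(p.grad), alpha=sigma)

# Usage: call between backward() and the optimizer step, e.g.
#   loss.backward()
#   add_gradient_noise(model, step)
#   optimizer.step()
# Because the noise is added to the gradient itself, it composes with Adam
# or AdaGrad, which is the complementarity the abstract emphasizes.
```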
Row 3
title: The loss surface of residual networks: Ensembles and the role of batch normalization
abstract: Deep Residual Networks present a premium in performance in comparison to conventional networks of the same depth and are trainable at extreme depths. It has recently been shown that Residual Networks behave like ensembles of relatively shallow networks. We show that these ensemble are dynamic: while initially the virtual ensemble is mostly at depths lower than half the network’s depth, as training progresses, it becomes deeper and deeper. The main mechanism that controls the dynamic ensemble behavior is the scaling introduced, e.g., by the Batch Normalization technique. We explain this behavior and demonstrate the driving force behind it. As a main tool in our analysis, we employ generalized spin glass models, which we also use in order to study the number of critical points in the optimization of Residual Networks.
review: This paper extend the Spin Glass analysis of Choromanska et al. (2015a) to Res Nets which yield the novel dynamic ensemble results for Res Nets and the connection to Batch Normalization and the analysis of their loss surface of Res Nets. The paper is well-written with many insightful explanation of results. Although the technical contributions extend the Spin Glass model analysis of the ones by Choromanska et al. (2015a), the updated version could eliminate one of the unrealistic assumptions and the analysis further provides novel dynamic ensemble results and the connection to Batch Normalization that gives more insightful results about the structure of Res Nets. It is essential to show this dynamic behaviour in a regime without batch normalization to untangle the normalization effect on ensemble feature. Hence authors claim that steady increase in the L_2 norm of the weights will maintain the this feature but setting for Figure 1 is restrictive to empirically support the claim. At least results on CIFAR 10 without batch normalization for showing effect of L_2 norm increase and results that support claims about Theorem 4 would strengthen the paper. This work provides an initial rigorous framework to analyze better the inherent structure of the current state of art Res Net architectures and its variants which can stimulate potentially more significant results towards careful understanding of current state of art models (Rather than always to attempting to improve the performance of Res Nets by applying intuitive incremental heuristics, it is important to progress on some solid understanding too).
source: iclr2017
review_src: OpenReview
thinking_trace: (the single shared trace; see row 1)
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nThe loss surface of residual networks: Ensembles and the role of batch normalization\nDeep Residual Networks present a premium in performance in comparison to conventional networks of the same depth and are trainable at extreme depths. It has recently been shown that Residual Networks behave like ensembles of relatively shallow networks. We show that these ensemble are dynamic: while initially the virtual ensemble is mostly at depths lower than half the network’s depth, as training progresses, it becomes deeper and deeper. The main mechanism that controls the dynamic ensemble behavior is the scaling introduced, e.g., by the Batch Normalization technique. We explain this behavior and demonstrate the driving force behind it. As a main tool in our analysis, we employ generalized spin glass models, which we also use in order to study the number of critical points in the optimization of Residual Networks.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper extend the Spin Glass analysis of Choromanska et al. (2015a) to Res Nets which yield the novel dynamic ensemble results for Res Nets and the connection to Batch Normalization and the analysis of their loss surface of Res Nets. The paper is well-written with many insightful explanation of results. Although the technical contributions extend the Spin Glass model analysis of the ones by Choromanska et al. (2015a), the updated version could eliminate one of the unrealistic assumptions and the analysis further provides novel dynamic ensemble results and the connection to Batch Normalization that gives more insightful results about the structure of Res Nets. It is essential to show this dynamic behaviour in a regime without batch normalization to untangle the normalization effect on ensemble feature. Hence authors claim that steady increase in the L_2 norm of the weights will maintain the this feature but setting for Figure 1 is restrictive to empirically support the claim. 
At least results on CIFAR 10 without batch normalization for showing effect of L_2 norm increase and results that support claims about Theorem 4 would strengthen the paper. This work provides an initial rigorous framework to analyze better the inherent structure of the current state of art Res Net architectures and its variants which can stimulate potentially more significant results towards careful understanding of current state of art models (Rather than always to attempting to improve the performance of Res Nets by applying intuitive incremental heuristics, it is important to progress on some solid understanding too).", "role": "assistant" } ]
[ "This paper extend the Spin Glass analysis of Choromanska et al.", "(2015a) to Res Nets which yield the novel dynamic ensemble results for Res Nets and the connection to Batch Normalization and the analysis of their loss surface of Res Nets.", "The paper is well-written with many insightful explanation of results.", "Although the technical contributions extend the Spin Glass model analysis of the ones by Choromanska et al.", "(2015a), the updated version could eliminate one of the unrealistic assumptions and the analysis further provides novel dynamic ensemble results and the connection to Batch Normalization that gives more insightful results about the structure of Res Nets.", "It is essential to show this dynamic behaviour in a regime without batch normalization to untangle the normalization effect on ensemble feature.", "Hence authors claim that steady increase in the L_2 norm of the weights will maintain the this feature but setting for Figure 1 is restrictive to empirically support the claim.", "At least results on CIFAR 10 without batch normalization for showing effect of L_2 norm increase and results that support claims about Theorem 4 would strengthen the paper.", "This work provides an initial rigorous framework to analyze better the inherent structure of the current state of art Res Net architectures and its variants which can stimulate potentially more significant results towards careful understanding of current state of art models (Rather than always to attempting to improve the performance of Res Nets by applying intuitive incremental heuristics, it is important to progress on some solid understanding too)." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 8, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 7, "suggestion_and_solution": 3, "total": 9 }
0.111111
0
0.333333
0.888889
0.222222
0.111111
0.777778
0.333333
9
1
0
1
1
1
1
1
1
2.777778
2.525259
0.252519
Row 4
title: Eigenvalues of the Hessian in Deep Learning: Singularity and Beyond
abstract: We look at the eigenvalues of the Hessian of a loss function before and after training. The eigenvalue distribution is seen to be composed of two parts, the bulk which is concentrated around zero, and the edges which are scattered away from zero. We present empirical evidence for the bulk indicating how over-parametrized the system is, and for the edges that depend on the input data.
review: The paper analyzes the properties of the Hessian of the training objective for various neural networks and data distributions. The authors study in particular, the eigenspectrum of the Hessian, which relates to the difficulty and the local convexity of the optimization problem. While there are several interesting insights discussed in this paper such as the local flatness of the objective function, as well as the study of the relation between data distribution and Hessian, a somewhat lacking aspect of the paper is that most described effects are presented as general, while tested only in a specific setting, without control experiments, or mathematical analysis. For example, regarding the concentration of eigenvalues to zero in Figure 6, it is unclear whether the concentration effect is really caused by training (e.g. increasing insensitivity to local perturbations), or the consequence of a specific choice of scale for the initial parameters. In Figure 8, the complexity of the data is not defined. It is not clear whether two fully overlapping distributions (the Hessian would then become zero?) is considered as complex or simple data. Some of the plots legends (Fig. 1 and 2) and labels are unreadable in printed format. Plots of Figure 3 don't have the same range for the x-axis. The image of Hessian matrix of Figure 1 does not render properly in printed format.
source: iclr2017
review_src: OpenReview
thinking_trace: (the single shared trace; see row 1)
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEigenvalues of the Hessian in Deep Learning: Singularity and Beyond\nWe look at the eigenvalues of the Hessian of a loss function before and after training. The eigenvalue distribution is seen to be composed of two parts, the bulk which is concentrated around zero, and the edges which are scattered away from zero. We present empirical evidence for the bulk indicating how over-parametrized the system is, and for the edges that depend on the input data.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper analyzes the properties of the Hessian of the training objective for various neural networks and data distributions. The authors study in particular, the eigenspectrum of the Hessian, which relates to the difficulty and the local convexity of the optimization problem. While there are several interesting insights discussed in this paper such as the local flatness of the objective function, as well as the study of the relation between data distribution and Hessian, a somewhat lacking aspect of the paper is that most described effects are presented as general, while tested only in a specific setting, without control experiments, or mathematical analysis. For example, regarding the concentration of eigenvalues to zero in Figure 6, it is unclear whether the concentration effect is really caused by training (e.g. increasing insensitivity to local perturbations), or the consequence of a specific choice of scale for the initial parameters. In Figure 8, the complexity of the data is not defined. It is not clear whether two fully overlapping distributions (the Hessian would then become zero?) is considered as complex or simple data. Some of the plots legends (Fig. 1 and 2) and labels are unreadable in printed format. Plots of Figure 3 don*t have the same range for the x-axis. The image of Hessian matrix of Figure 1 does not render properly in printed format.", "role": "assistant" } ]
[ "The paper analyzes the properties of the Hessian of the training objective for various neural networks and data distributions.", "The authors study in particular, the eigenspectrum of the Hessian, which relates to the difficulty and the local convexity of the optimization problem.", "While there are several interesting insights discussed in this paper such as the local flatness of the objective function, as well as the study of the relation between data distribution and Hessian, a somewhat lacking aspect of the paper is that most described effects are presented as general, while tested only in a specific setting, without control experiments, or mathematical analysis.", "For example, regarding the concentration of eigenvalues to zero in Figure 6, it is unclear whether the concentration effect is really caused by training (e.g.", "increasing insensitivity to local perturbations), or the consequence of a specific choice of scale for the initial parameters.", "In Figure 8, the complexity of the data is not defined.", "It is not clear whether two fully overlapping distributions (the Hessian would then become zero?)", "is considered as complex or simple data.", "Some of the plots legends (Fig.", "1 and 2) and labels are unreadable in printed format.", "Plots of Figure 3 don*t have the same range for the x-axis.", "The image of Hessian matrix of Figure 1 does not render properly in printed format." ]
{ "criticism": 6, "example": 5, "importance_and_relevance": 1, "materials_and_methods": 7, "praise": 1, "presentation_and_reporting": 4, "results_and_discussion": 4, "suggestion_and_solution": 0, "total": 12 }
0.5
0.416667
0.083333
0.583333
0.083333
0.333333
0.333333
0
12
1
1
1
1
1
1
1
0
2.333333
2.317551
0.015782
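review_sentences is evidently the review split on sentence-final punctuation: the list length always matches criteria_count.total, and this row's original segmentation breaks after "(e.g." and "(Fig.", a classic artifact of naive splitting. A rough reconstruction under that assumption (the dataset's actual segmenter is unknown):

```python
import re

def split_sentences(review: str) -> list[str]:
    # Naive split after '.', '?', or '!' followed by whitespace.
    # Reproduces artifacts like the breaks after "(e.g." and "(Fig."
    # seen in this row; the dataset's real segmenter may differ.
    return [s.strip() for s in re.split(r"(?<=[.?!])\s+", review) if s.strip()]

# sentences = split_sentences(row["review"])
# len(sentences) == row["criteria_count"]["total"]  # holds if the segmenter matches
```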
Row 5
title: Eigenvalues of the Hessian in Deep Learning: Singularity and Beyond
abstract: (identical to row 4; this is a second review of the same paper)
review: Studying the Hessian in deep learning, the experiments in this paper suggest that the eigenvalue distribution is concentrated around zero and the non zero eigenvalues are related to the complexity of the input data. I find most of the discussions and experiments to be interesting and insightful. However, the current paper could be significantly improved. Quality: It seems that the arguments in the paper could be enhanced by more effort and more comprehensive experiments. Performing some of the experiments discussed in the conclusion could certainly help a lot. Some other suggestions: 1- It would be very helpful to add other plots showing the distribution of eigenvalues for some other machine learning method for the purpose of comparison to deep learning. 2- There are some issues about the scaling of the weights and it make sense to normalize the weights each time before calculating the Hessian otherwise the result might be misleading. 3- It might worth trying to find a quantity that measures the singularity of Hessian because it is difficult to visually conclude something from the plots. 4- Adding some plots for the Hessian during the optimization is definitely needed because we mostly care about the Hessian during the optimization not after the convergence. Clarity: 1- There is no reference to figures in the main text which makes it confusing for the reading to know the context for each figure. For example, when looking at Figure 1, it is not clear that the Hessian is calculated at the beginning of optimization or after convergence. 2- The texts in the figures are very small and hard to read.
source: iclr2017
review_src: OpenReview
thinking_trace: (the single shared trace; see row 1)
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEigenvalues of the Hessian in Deep Learning: Singularity and Beyond\nWe look at the eigenvalues of the Hessian of a loss function before and after training. The eigenvalue distribution is seen to be composed of two parts, the bulk which is concentrated around zero, and the edges which are scattered away from zero. We present empirical evidence for the bulk indicating how over-parametrized the system is, and for the edges that depend on the input data.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nStudying the Hessian in deep learning, the experiments in this paper suggest that the eigenvalue distribution is concentrated around zero and the non zero eigenvalues are related to the complexity of the input data. I find most of the discussions and experiments to be interesting and insightful. However, the current paper could be significantly improved. Quality: It seems that the arguments in the paper could be enhanced by more effort and more comprehensive experiments. Performing some of the experiments discussed in the conclusion could certainly help a lot. Some other suggestions: 1- It would be very helpful to add other plots showing the distribution of eigenvalues for some other machine learning method for the purpose of comparison to deep learning. 2- There are some issues about the scaling of the weights and it make sense to normalize the weights each time before calculating the Hessian otherwise the result might be misleading. 3- It might worth trying to find a quantity that measures the singularity of Hessian because it is difficult to visually conclude something from the plots. 4- Adding some plots for the Hessian during the optimization is definitely needed because we mostly care about the Hessian during the optimization not after the convergence. Clarity: 1- There is no reference to figures in the main text which makes it confusing for the reading to know the context for each figure. 
For example, when looking at Figure 1, it is not clear that the Hessian is calculated at the beginning of optimization or after convergence. 2- The texts in the figures are very small and hard to read.", "role": "assistant" } ]
[ "Studying the Hessian in deep learning, the experiments in this paper suggest that the eigenvalue distribution is concentrated around zero and the non zero eigenvalues are related to the complexity of the input data.", "I find most of the discussions and experiments to be interesting and insightful.", "However, the current paper could be significantly improved.", "Quality: It seems that the arguments in the paper could be enhanced by more effort and more comprehensive experiments.", "Performing some of the experiments discussed in the conclusion could certainly help a lot.", "Some other suggestions: 1- It would be very helpful to add other plots showing the distribution of eigenvalues for some other machine learning method for the purpose of comparison to deep learning.", "2- There are some issues about the scaling of the weights and it make sense to normalize the weights each time before calculating the Hessian otherwise the result might be misleading.", "3- It might worth trying to find a quantity that measures the singularity of Hessian because it is difficult to visually conclude something from the plots.", "4- Adding some plots for the Hessian during the optimization is definitely needed because we mostly care about the Hessian during the optimization not after the convergence.", "Clarity: 1- There is no reference to figures in the main text which makes it confusing for the reading to know the context for each figure.", "For example, when looking at Figure 1, it is not clear that the Hessian is calculated at the beginning of optimization or after convergence.", "2- The texts in the figures are very small and hard to read." ]
{ "criticism": 4, "example": 1, "importance_and_relevance": 1, "materials_and_methods": 9, "praise": 1, "presentation_and_reporting": 3, "results_and_discussion": 7, "suggestion_and_solution": 7, "total": 12 }
0.333333
0.083333
0.083333
0.75
0.083333
0.25
0.583333
0.583333
12
1
1
1
1
1
1
1
1
2.75
2.734218
0.015782
Row 6
title: Exploring LOTS in Deep Neural Networks
abstract: Deep neural networks have recently demonstrated excellent performance on various tasks. Despite recent advances, our understanding of these learning models is still incomplete, at least, as their unexpected vulnerability to imperceptibly small, non-random perturbations revealed. The existence of these so-called adversarial examples presents a serious problem of the application of vulnerable machine learning models. In this paper, we introduce the layerwise origin-target synthesis (LOTS) that can serve multiple purposes. First, we can use it as a visualization technique that gives us insights into the function of any intermediate feature layer by showing the notion of a particular input in deep neural networks. Second, our approach can be applied to assess the invariance of the learned features captured at any layer with respect to the class of the particular input. Finally, we can also utilize LOTS as a general way of producing a vast amount of diverse adversarial examples that can be used for training to further improve the robustness of machine learning models and their performance as well.
review: The paper presents a new exciting layerwise origin-target synthesis method both for generating a large number of diverse adversarials as well as for understanding the robustness of various layers. The methodology is then used to visualize the amount of perturbation necessary for producing a change for higher level features. The approach to match the features of another unrelated image is interesting and it goes beyond producing adversarials for classification. It can also generate adversarials for face-recognition and other models where the result is matched with some instance from a database. Pro: The presented approach is definitely sound, interesting and original. Con: The analyses presented in this paper are relatively shallow and don't touch the most obvious questions. There is not much experimental quantitative evidence for the efficacy of this method compared with other approaches to produce adversarials. The visualization is not very exciting and it is hard to any draw any meaningful conclusions from them. It would definitely improve the paper if it would present some interesting conclusions based on the new ideas.
source: iclr2017
review_src: OpenReview
thinking_trace: (the single shared trace; see row 1)
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nExploring LOTS in Deep Neural Networks\nDeep neural networks have recently demonstrated excellent performance on various tasks. Despite recent advances, our understanding of these learning models is still incomplete, at least, as their unexpected vulnerability to imperceptibly small, non-random perturbations revealed. The existence of these so-called adversarial examples presents a serious problem of the application of vulnerable machine learning models. In this paper, we introduce the layerwise origin-target synthesis (LOTS) that can serve multiple purposes. First, we can use it as a visualization technique that gives us insights into the function of any intermediate feature layer by showing the notion of a particular input in deep neural networks. Second, our approach can be applied to assess the invariance of the learned features captured at any layer with respect to the class of the particular input. Finally, we can also utilize LOTS as a general way of producing a vast amount of diverse adversarial examples that can be used for training to further improve the robustness of machine learning models and their performance as well.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper presents a new exciting layerwise origin-target synthesis method both for generating a large number of diverse adversarials as well as for understanding the robustness of various layers. The methodology is then used to visualize the amount of perturbation necessary for producing a change for higher level features. The approach to match the features of another unrelated image is interesting and it goes beyond producing adversarials for classification. It can also generate adversarials for face-recognition and other models where the result is matched with some instance from a database. Pro: The presented approach is definitely sound, interesting and original. Con: The analyses presented in this paper are relatively shallow and don*t touch the most obvious questions. 
There is not much experimental quantitative evidence for the efficacy of this method compared with other approaches to produce adversarials. The visualization is not very exciting and it is hard to any draw any meaningful conclusions from them. It would definitely improve the paper if it would present some interesting conclusions based on the new ideas.", "role": "assistant" } ]
[ "The paper presents a new exciting layerwise origin-target synthesis method both for generating a large number of diverse adversarials as well as for understanding the robustness of various layers.", "The methodology is then used to visualize the amount of perturbation necessary for producing a change for higher level features.", "The approach to match the features of another unrelated image is interesting and it goes beyond producing adversarials for classification.", "It can also generate adversarials for face-recognition and other models where the result is matched with some instance from a database.", "Pro: The presented approach is definitely sound, interesting and original.", "Con: The analyses presented in this paper are relatively shallow and don*t touch the most obvious questions.", "There is not much experimental quantitative evidence for the efficacy of this method compared with other approaches to produce adversarials.", "The visualization is not very exciting and it is hard to any draw any meaningful conclusions from them.", "It would definitely improve the paper if it would present some interesting conclusions based on the new ideas." ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 5, "materials_and_methods": 6, "praise": 4, "presentation_and_reporting": 0, "results_and_discussion": 3, "suggestion_and_solution": 1, "total": 9 }
0.333333
0
0.555556
0.666667
0.444444
0
0.333333
0.111111
9
1
0
1
1
1
0
1
1
2.444444
2.191926
0.252519
Warped Convolutions: Efficient Invariance to Spatial Transformations
Convolutional Neural Networks (CNNs) are extremely efficient, since they exploit the inherent translation-invariance of natural images. However, translation is just one of a myriad of useful spatial transformations. Can the same efficiency be attained when considering other spatial invariances? Such generalized convolutions have been considered in the past, but at a high computational cost. We present a construction that is simple and exact, yet has the same computational complexity that standard convolutions enjoy. It consists of a constant image warp followed by a simple convolution, which are standard blocks in deep learning toolboxes. With a carefully crafted warp, the resulting architecture can be made equivariant to a wide range of two-parameter spatial transformations. We show encouraging results in realistic scenarios, including the estimation of vehicle poses in the Google Earth dataset (rotation and scale), and face poses in Annotated Facial Landmarks in the Wild (3D rotations under perspective).
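A hedged sketch of the construction described above: a constant image warp followed by a plain convolution, both standard blocks in deep learning toolboxes. The log-polar warp below is one possible choice (under which translation in the warped image corresponds to rotation and scale in the original); the grid and all names are assumptions, not the authors' code:

```python
# Sketch: "warped convolution" = fixed image warp, then a standard 2D conv.
import math
import torch
import torch.nn.functional as F

def log_polar_grid(h, w):
    """Hypothetical sampling grid in grid_sample's [-1, 1] coordinates."""
    r = torch.exp(torch.linspace(-3.0, 0.0, h))            # log-spaced radii
    theta = torch.linspace(-math.pi, math.pi, w)
    rr, tt = torch.meshgrid(r, theta, indexing="ij")
    return torch.stack((rr * torch.cos(tt), rr * torch.sin(tt)), -1).unsqueeze(0)

def warped_conv(img, weight, grid):
    """img: (N, C, H, W); weight: (C_out, C, 3, 3); grid: (1, h, w, 2)."""
    warped = F.grid_sample(img, grid.expand(img.size(0), -1, -1, -1),
                           align_corners=False)
    return F.conv2d(warped, weight, padding=1)             # plain convolution
```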
This paper deals with convolution along Lie groups. Pros: - Good numerical results - Interesting applications linked to convolutions on Lie groups Cons: - The writing is sometimes unclear or poor. - Theorems 1 and 2 are in fact direct applications of Lie group theory, but this is not made explicit. I do not elaborate further in my review, as I believe my questions and answers complete it fairly. However, if the authors require it, I can be more specific. I think the writing should be improved a bit.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nWarped Convolutions: Efficient Invariance to Spatial Transformations\nConvolutional Neural Networks (CNNs) are extremely efficient, since they exploit the inherent translation-invariance of natural images. However, translation is just one of a myriad of useful spatial transformations. Can the same efficiency be attained when considering other spatial invariances? Such generalized convolutions have been considered in the past, but at a high computational cost. We present a construction that is simple and exact, yet has the same computational complexity that standard convolutions enjoy. It consists of a constant image warp followed by a simple convolution, which are standard blocks in deep learning toolboxes. With a carefully crafted warp, the resulting architecture can be made equivariant to a wide range of 2-parameters spatial transformations. We show encouraging results in realistic scenarios, including the estimation of vehicle poses in the Google Earth dataset (rotation and scale), and face poses in Annotated Facial Landmarks in the Wild (3D rotations under perspective).", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper deals with convolution along Lie groups. Pros: - Good numerical results - Interesting applications linked to convolutions on Lie groups Cons: - Sometimes, the writing is not clear/poor. - The theorems 1 and 2 are in fact direct applications of Lie group theory, but this is not made explicit. I do not detail more my review, as I believe my questions/answers complete it fairly. However, if the authors require it, I can be more specific. I think the writing should be improved a bit.", "role": "assistant" } ]
[ "This paper deals with convolution along Lie groups.", "Pros: - Good numerical results - Interesting applications linked to convolutions on Lie groups Cons: - Sometimes, the writing is not clear/poor.", "- The theorems 1 and 2 are in fact direct applications of Lie group theory, but this is not made explicit.", "I do not detail more my review, as I believe my questions/answers complete it fairly.", "However, if the authors require it, I can be more specific.", "I think the writing should be improved a bit." ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 0, "praise": 1, "presentation_and_reporting": 4, "results_and_discussion": 1, "suggestion_and_solution": 2, "total": 6 }
0.5
0
0.333333
0
0.166667
0.666667
0.166667
0.333333
6
1
0
1
0
1
1
1
1
2.166667
1.393328
0.773339
Adjusting for Dropout Variance in Batch Normalization and Weight Initialization
We show how to adjust for the variance introduced by dropout with corrections to weight initialization and Batch Normalization, yielding higher accuracy. Though dropout can preserve the expected input to a neuron between train and test, the variance of the input differs. We thus propose a new weight initialization by correcting for the influence of dropout rates and an arbitrary nonlinearity's influence on variance through simple corrective scalars. Since Batch Normalization trained with dropout estimates the variance of a layer's incoming distribution with some inputs dropped, the variance also differs between train and test. After training a network with Batch Normalization and dropout, we simply update Batch Normalization's variance moving averages with dropout off and obtain state of the art on CIFAR-10 and CIFAR-100 without data augmentation.
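Both ideas lend themselves to a short sketch. The PyTorch snippet below is an assumption-laden simplification: the init correction keeps only the dropout factor (the paper's corrective scalars also account for the nonlinearity), and the second function re-estimates BatchNorm's running statistics with dropout switched off after training, as the abstract describes:

```python
import torch
import torch.nn as nn

def dropout_corrected_init(layer, keep_prob):
    """He-style init scaled by sqrt(keep_prob): inverted dropout multiplies
    the input variance by roughly 1/keep_prob, which this factor cancels."""
    fan_in = layer.weight.size(1)
    nn.init.normal_(layer.weight, std=(2.0 / fan_in) ** 0.5 * keep_prob ** 0.5)

@torch.no_grad()
def refresh_bn_stats(model, loader, device="cpu"):
    """After training: update BN running variance with dropout off."""
    model.train()                       # BN updates running stats in train mode
    for m in model.modules():
        if isinstance(m, nn.Dropout):   # ...but dropout stays disabled
            m.eval()
    for x, _ in loader:
        model(x.to(device))             # forward passes only, no gradients
```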
The paper presents an approach for compensating for the input/activation variance introduced by dropout in a network. Additionally, it proposes a practical inference trick: re-estimating the batch normalization parameters with dropout turned off before testing. The authors show clearly how dropout influences the input/activation variance and then scale the initial weights accordingly to achieve unit variance, which helps avoid activation outputs exploding or vanishing. It is shown that the presented approach serves as a good initialization technique for deep networks and results in performance on par with or slightly better than the existing approaches. The limited experimental validation and only small differences in accuracy compared to existing methods make it difficult to judge the effectiveness of the presented approach. Perhaps observing the statistics of output activations and gradients over training epochs in multiple experiments could better support the argument for the stability of the network using the proposed approach. The authors might consider adding some validation for the backpropagation variance. On multiple occasions comparison is drawn against batch normalization, which I believe does much more than a weight initialization technique. The presented approach is a good initialization technique; I am just not sure if it is better than existing ones.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAdjusting for Dropout Variance in Batch Normalization and Weight Initialization\nWe show how to adjust for the variance introduced by dropout with corrections to weight initialization and Batch Normalization, yielding higher accuracy. Though dropout can preserve the expected input to a neuron between train and test, the variance of the input differs. We thus propose a new weight initialization by correcting for the influence of dropout rates and an arbitrary nonlinearity*s influence on variance through simple corrective scalars. Since Batch Normalization trained with dropout estimates the variance of a layer*s incoming distribution with some inputs dropped, the variance also differs between train and test. After training a network with Batch Normalization and dropout, we simply update Batch Normalization*s variance moving averages with dropout off and obtain state of the art on CIFAR-10 and CIFAR-100 without data augmentation.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper presents an approach for compensating the input/activation variance introduced by dropout in a network. Additionally, a practical inference trick of re-estimating the batch normalization parameters with dropout turned off before testing. The authors very well show how dropout influences the input/activation variance and then scale the initial weights accordingly to achieve unit variance which helps in avoiding activation outputs exploding or vanishing. It is shown that the presented approach serves as a good initialization technique for deep networks and results in performances op par or slightly better than the existing approaches. The limited experimental validation and only small difference in accuracies compared to existing methods makes it difficult to judge the effectiveness of presented approach. Perhaps observing the statistics of output activations and gradients over training epochs in multiple experiments can better support the argument of stability of the network using proposed approach. 
Authors might consider adding some validation for considering the backpropagation variance. On multiple occasions comparison is drawn against batch normalization which I believe does much more than a weight initialization technique. The presented approach is a good initialization technique just not sure if its better than existing ones.", "role": "assistant" } ]
[ "The paper presents an approach for compensating the input/activation variance introduced by dropout in a network.", "Additionally, a practical inference trick of re-estimating the batch normalization parameters with dropout turned off before testing.", "The authors very well show how dropout influences the input/activation variance and then scale the initial weights accordingly to achieve unit variance which helps in avoiding activation outputs exploding or vanishing.", "It is shown that the presented approach serves as a good initialization technique for deep networks and results in performances op par or slightly better than the existing approaches.", "The limited experimental validation and only small difference in accuracies compared to existing methods makes it difficult to judge the effectiveness of presented approach.", "Perhaps observing the statistics of output activations and gradients over training epochs in multiple experiments can better support the argument of stability of the network using proposed approach.", "Authors might consider adding some validation for considering the backpropagation variance.", "On multiple occasions comparison is drawn against batch normalization which I believe does much more than a weight initialization technique.", "The presented approach is a good initialization technique just not sure if its better than existing ones." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 9, "praise": 2, "presentation_and_reporting": 0, "results_and_discussion": 6, "suggestion_and_solution": 2, "total": 9 }
0.222222
0
0
1
0.222222
0
0.666667
0.222222
9
1
0
0
1
1
0
1
1
2.333333
2.080814
0.252519
The Power of Sparsity in Convolutional Neural Networks
Deep convolutional networks are well-known for their high computational and memory demands. Given limited resources, how does one design a network that balances its size, training time, and prediction accuracy? A surprisingly effective approach to trade accuracy for size and speed is to simply reduce the number of channels in each convolutional layer by a fixed fraction and retrain the network. In many cases this leads to significantly smaller networks with only minimal changes to accuracy. In this paper, we take a step further by empirically examining a strategy for deactivating connections between filters in convolutional layers in a way that allows us to harvest savings both in run-time and memory for many network architectures. More specifically, we generalize 2D convolution to use a channel-wise sparse connection structure and show that this leads to significantly better results than the baseline approach for large networks including VGG and Inception V3.
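A minimal sketch of what a channel-wise sparse connection structure might look like, as a fixed binary mask over filter-to-channel connections. This only emulates the sparsity pattern with dense arithmetic; the run-time and memory savings the paper targets would require an actual sparse kernel. All names and the masking scheme are hypothetical:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelSparseConv2d(nn.Conv2d):
    """Conv2d whose filters each see only a random subset of input channels."""
    def __init__(self, in_ch, out_ch, k, density=0.25, **kw):
        super().__init__(in_ch, out_ch, k, **kw)
        # Each output filter connects to a `density` fraction of input channels.
        mask = (torch.rand(out_ch, in_ch) < density).float()
        self.register_buffer("mask", mask[:, :, None, None])

    def forward(self, x):
        # Mask zeroes out whole filter-to-channel connections (groups=1 assumed).
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)
```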
The paper experiments with channel-to-channel sparse neural networks. The paper is well written and the analysis is useful. The sparse connection is not new but has not been evaluated on large-scale problems like ImageNet. One of the reasons for that is the unavailability of fast implementations of randomly connected convolutional layers. The results displayed in Figures 2, 3, 4, and 5 show that sparse connections need the same number of parameters as the dense networks to reach the best performance on the given tasks, but can provide better performance when there is a limited budget for the #parameters and #multiplyAdds. This paper is definitely informative but it does not reach the conference acceptance level, simply because the idea is not new, the sparse connection implementation is poor, and the results are not very surprising.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nThe Power of Sparsity in Convolutional Neural Networks\nDeep convolutional networks are well-known for their high computational and memory demands. Given limited resources, how does one design a network that balances its size, training time, and prediction accuracy? A surprisingly effective approach to trade accuracy for size and speed is to simply reduce the number of channels in each convolutional layer by a fixed fraction and retrain the network. In many cases this leads to significantly smaller networks with only minimal changes to accuracy. In this paper, we take a step further by empirically examining a strategy for deactivating connections between filters in convolutional layers in a way that allows us to harvest savings both in run-time and memory for many network architectures. More specifically, we generalize 2D convolution to use a channel-wise sparse connection structure and show that this leads to significantly better results than the baseline approach for large networks including VGG and Inception V3.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper experiments with channel to channel sparse neural networks. The paper is well written and the analysis is useful. The sparse connection is not new but has not been experimented on large-scale problems like ImageNet. One of the reasons for that is the unavailability of fast implementations of randomly connected convolutional layers. The results displayed in figures 2, 3, 4, and, 5 show that sparse connections need the same number of parameters as the dense networks to reach to the best performance on the given tasks, but can provide better performance when there is a limited budget for the #parameters and #multiplyAdds. This paper is definitely informative but it does not reach to the conference acceptance level, simply because the idea is not new, the sparse connection implementation is poor, and the results are not very surprising.", "role": "assistant" } ]
[ "The paper experiments with channel to channel sparse neural networks.", "The paper is well written and the analysis is useful.", "The sparse connection is not new but has not been experimented on large-scale problems like ImageNet.", "One of the reasons for that is the unavailability of fast implementations of randomly connected convolutional layers.", "The results displayed in figures 2, 3, 4, and, 5 show that sparse connections need the same number of parameters as the dense networks to reach to the best performance on the given tasks, but can provide better performance when there is a limited budget for the #parameters and #multiplyAdds.", "This paper is definitely informative but it does not reach to the conference acceptance level, simply because the idea is not new, the sparse connection implementation is poor, and the results are not very surprising." ]
{ "criticism": 3, "example": 1, "importance_and_relevance": 2, "materials_and_methods": 5, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 2, "suggestion_and_solution": 1, "total": 6 }
0.5
0.166667
0.333333
0.833333
0.333333
0.166667
0.333333
0.166667
6
1
1
1
1
1
1
1
1
2.833333
2.059994
0.773339
Beyond Bilingual: Multi-sense Word Embeddings using Multilingual Context
Word embeddings, which represent a word as a point in a vector space, have become ubiquitous in several NLP tasks. A recent line of work uses bilingual (two languages) corpora to learn a different vector for each sense of a word, by exploiting crosslingual signals to aid sense identification. We present a multi-view Bayesian non-parametric algorithm which improves multi-sense word embeddings by (a) using multilingual (i.e., more than two languages) corpora to significantly improve sense embeddings beyond what one achieves with bilingual information, and (b) using a principled approach to learn a variable number of senses per word, in a data-driven manner. Ours is the first approach with the ability to leverage multilingual corpora efficiently for multi-sense representation learning. Experiments show that multilingual training significantly improves performance over monolingual and bilingual training, by allowing us to combine different parallel corpora to leverage multilingual context. Multilingual training yields comparable performance to a state-of-the-art monolingual model trained on five times more training data.
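As a toy illustration of the crosslingual signal only (not the paper's Bayesian non-parametric model, and with entirely hypothetical names), sense identification could reduce to picking the sense vector that best matches the embedding of the aligned foreign context:

```python
import numpy as np

def pick_sense(sense_vecs, foreign_context_vec):
    """sense_vecs: (K, d) candidate sense embeddings for one English word;
    foreign_context_vec: (d,) embedding of the aligned foreign context."""
    sims = sense_vecs @ foreign_context_vec
    sims /= (np.linalg.norm(sense_vecs, axis=1) *
             np.linalg.norm(foreign_context_vec) + 1e-8)
    return int(np.argmax(sims))        # index of the disambiguated sense
```

With more than two languages, one would aggregate such evidence across all aligned contexts, which is roughly the multi-view intuition the abstract describes.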
This paper discusses multi-sense embeddings and proposes learning them by using aligned text across languages. Further, the paper suggests that adding more languages helps improve word sense disambiguation (as some ambiguities can be carried across language pairs). While this idea in itself isn't new, the authors propose a particular setup for learning multi-sense embeddings by exploiting multilingual data. Broadly this is fine, but unfortunately the paper then falls short in a number of ways. For one, the model section is unnecessarily convoluted for what is a nice idea that could be described in a far more concise fashion. Next (and more importantly), comparison with other work is lacking to such an extent that it is impossible to evaluate the merits of the proposed model in an objective fashion. This paper could be a lot stronger if the learned embeddings were evaluated in downstream tasks and evaluated against other published methods. In the current version there is too little of this, leaving us with mostly relative results between model variants and t-SNE plots that don't really add anything to the story.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nBeyond Bilingual: Multi-sense Word Embeddings using Multilingual Context\nWord embeddings, which represent a word as a point in a vector space, have become ubiquitous to several NLP tasks. A recent line of work uses bilingual (two languages) corpora to learn a different vector for each sense of a word, by exploiting crosslingual signals to aid sense identification. We present a multi-view Bayesian non-parametric algorithm which improves multi-sense word embeddings by (a) using multilingual (i.e., more than two languages) corpora to significantly improve sense embeddings beyond what one achieves with bilingual information, and (b) uses a principled approach to learn a variable number of senses per word, in a data-driven manner. Ours is the first approach with the ability to leverage multilingual corpora efficiently for multi-sense representation learning. Experiments show that multilingual training significantly improves performance over monolingual and bilingual training, by allowing us to combine different parallel corpora to leverage multilingual context. Multilingual training yields com- parable performance to a state of the art monolingual model trained on five times more training data.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper discusses multi-sense embedddings and proposes learning those by using aligned text across languages. Further, the paper suggests that adding more languages helps improve word sense disambiguation (as some ambiguities can be carried across language pairs). While this idea in itself isn*t new, the authors propose a particular setup for learning multi-sense embeddings by exploiting multilingual data. Broadly this is fine, but unfortunately the paper then falls short in a number of ways. For one, the model section is unnecessarily convoluted for what is a nice idea that could be described in a far more concise fashion. 
Next (and more importantly), comparison with other work is lacking to such an extent that it is impossible to evaluate the merits of the proposed model in an objective fashion. This paper could be a lot stronger if the learned embeddings were evaluated in downstream tasks and evaluated against other published methods. In the current version there is too little of this, leaving us with mostly relative results between model variants and t-SNE plots that don*t really add anything to the story.", "role": "assistant" } ]
[ "This paper discusses multi-sense embedddings and proposes learning those by using aligned text across languages.", "Further, the paper suggests that adding more languages helps improve word sense disambiguation (as some ambiguities can be carried across language pairs).", "While this idea in itself isn*t new, the authors propose a particular setup for learning multi-sense embeddings by exploiting multilingual data.", "Broadly this is fine, but unfortunately the paper then falls short in a number of ways.", "For one, the model section is unnecessarily convoluted for what is a nice idea that could be described in a far more concise fashion.", "Next (and more importantly), comparison with other work is lacking to such an extent that it is impossible to evaluate the merits of the proposed model in an objective fashion.", "This paper could be a lot stronger if the learned embeddings were evaluated in downstream tasks and evaluated against other published methods.", "In the current version there is too little of this, leaving us with mostly relative results between model variants and t-SNE plots that don*t really add anything to the story." ]
{ "criticism": 4, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 5, "praise": 2, "presentation_and_reporting": 3, "results_and_discussion": 1, "suggestion_and_solution": 2, "total": 8 }
0.5
0
0.25
0.625
0.25
0.375
0.125
0.25
8
1
0
1
1
1
1
1
1
2.375
1.980439
0.394561
Beyond Bilingual: Multi-sense Word Embeddings using Multilingual Context
Word embeddings, which represent a word as a point in a vector space, have become ubiquitous in several NLP tasks. A recent line of work uses bilingual (two languages) corpora to learn a different vector for each sense of a word, by exploiting crosslingual signals to aid sense identification. We present a multi-view Bayesian non-parametric algorithm which improves multi-sense word embeddings by (a) using multilingual (i.e., more than two languages) corpora to significantly improve sense embeddings beyond what one achieves with bilingual information, and (b) using a principled approach to learn a variable number of senses per word, in a data-driven manner. Ours is the first approach with the ability to leverage multilingual corpora efficiently for multi-sense representation learning. Experiments show that multilingual training significantly improves performance over monolingual and bilingual training, by allowing us to combine different parallel corpora to leverage multilingual context. Multilingual training yields comparable performance to a state-of-the-art monolingual model trained on five times more training data.
This work aims to address the representation of multi-sense words by exploiting multilingual context. Experiments on word sense induction and word similarity in context show that the proposed solution improves over the baseline. From a computational linguistics perspective, the fact that languages less similar to English help more is intriguing. I see the following problems with this work: - The paper is hard to follow and it is hard to see what's new compared to the baseline model [1]. A paragraph of discussion should clearly compare and contrast with that work. - The proposed model is a slight variation of the previous work [1], thus the experimental setup should be designed in a way that lets us compare which part helps improvement and how much. Thus MONO has not been exposed to the same training data, and we cannot be sure whether the proposed model is better because MONO does not observe the data or because MONO lacks the computational power. I suggest the following baseline: turning the multilingual data into monolingual data using the alignment, then training the baseline model [1] on this pseudo-monolingual data. - The paper provides good benchmarks for intrinsic evaluation but the message could be conveyed more strongly if we saw improvement in a downstream task. [1] https://arxiv.org/pdf/1502.07257v2.pdf
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nBeyond Bilingual: Multi-sense Word Embeddings using Multilingual Context\nWord embeddings, which represent a word as a point in a vector space, have become ubiquitous to several NLP tasks. A recent line of work uses bilingual (two languages) corpora to learn a different vector for each sense of a word, by exploiting crosslingual signals to aid sense identification. We present a multi-view Bayesian non-parametric algorithm which improves multi-sense word embeddings by (a) using multilingual (i.e., more than two languages) corpora to significantly improve sense embeddings beyond what one achieves with bilingual information, and (b) uses a principled approach to learn a variable number of senses per word, in a data-driven manner. Ours is the first approach with the ability to leverage multilingual corpora efficiently for multi-sense representation learning. Experiments show that multilingual training significantly improves performance over monolingual and bilingual training, by allowing us to combine different parallel corpora to leverage multilingual context. Multilingual training yields com- parable performance to a state of the art monolingual model trained on five times more training data.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nthis work aims to address representation of multi-sense words by exploiting multilingual context. Experiments on word sense induction and word similarity in context show that the proposed solution improves over the baseline. From a computational linguistics perspective, the fact that languages less similar to English help more is intriguing. I see following problem with this work: - the paper is hard to follow and hard to see what*s new compared to the baseline model [1]. A paragraph of discussion should clearly compare and contrast with that work. - the proposed model is a slight variation of the previous work [1] thus the experimental setup should be designed in a way so that we compare which part helps improvement and how much. 
thus MONO has not been exposed the same training data and we can*t be sure that the proposed model is better because MONO does not observe the data or lacks the computational power. I suggest following baseline: turning multilingual data to monolingual one using the alignment, then train the baseline model[1] on this pseudo monolingual data. - the paper provides good benchmarks for intrinsic evaluation but the message could be conveyed more strongly if we see improvement in a downstream task. [1] https://arxiv.org/pdf/1502.07257v2.pdf", "role": "assistant" } ]
[ "this work aims to address representation of multi-sense words by exploiting multilingual context.", "Experiments on word sense induction and word similarity in context show that the proposed solution improves over the baseline.", "From a computational linguistics perspective, the fact that languages less similar to English help more is intriguing.", "I see following problem with this work: - the paper is hard to follow and hard to see what*s new compared to the baseline model [1].", "A paragraph of discussion should clearly compare and contrast with that work.", "- the proposed model is a slight variation of the previous work [1] thus the experimental setup should be designed in a way so that we compare which part helps improvement and how much.", "thus MONO has not been exposed the same training data and we can*t be sure that the proposed model is better because MONO does not observe the data or lacks the computational power.", "I suggest following baseline: turning multilingual data to monolingual one using the alignment, then train the baseline model[1] on this pseudo monolingual data.", "- the paper provides good benchmarks for intrinsic evaluation but the message could be conveyed more strongly if we see improvement in a downstream task.", "[1] https://arxiv.org/pdf/1502.07257v2.pdf" ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 6, "praise": 2, "presentation_and_reporting": 5, "results_and_discussion": 3, "suggestion_and_solution": 4, "total": 10 }
0.2
0
0.1
0.6
0.2
0.5
0.3
0.4
10
1
0
1
1
1
1
1
1
2.3
2.157958
0.142042
NEUROGENESIS-INSPIRED DICTIONARY LEARNING: ONLINE MODEL ADAPTION IN A CHANGING WORLD
In this paper, we focus on online representation learning in non-stationary environments which may require continuous adaptation of model’s architecture. We propose a novel online dictionary-learning (sparse-coding) framework which incorporates the addition and deletion of hidden units (dictionary elements), and is inspired by the adult neurogenesis phenomenon in the dentate gyrus of the hippocampus, known to be associated with improved cognitive function and adaptation to new environments. In the online learning setting, where new input instances arrive sequentially in batches, the “neuronal birth” is implemented by adding new units with random initial weights (random dictionary elements); the number of new units is determined by the current performance (representation error) of the dictionary, higher error causing an increase in the birth rate. “Neuronal death” is implemented by imposing l1/l2-regularization (group sparsity) on the dictionary within the block-coordinate descent optimization at each iteration of our online alternating minimization scheme, which iterates between the code and dictionary updates. Finally, hidden unit connectivity adaptation is facilitated by introducing sparsity in dictionary elements. Our empirical evaluation on several real-life datasets (images and language) as well as on synthetic data demonstrates that the proposed approach can considerably outperform the state-of-art fixed-size (nonadaptive) online sparse coding of Mairal et al. (2009) in the presence of nonstationary data. Moreover, we identify certain properties of the data (e.g., sparse inputs with nearly non-overlapping supports) and of the model (e.g., dictionary sparsity) associated with such improvements.
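A drastically simplified toy of the birth/death loop described above, with hypothetical thresholds; the actual method interleaves code and dictionary updates via block-coordinate descent with l1/l2 group-sparsity regularization, which is only marked as a placeholder here:

```python
import numpy as np
from sklearn.linear_model import Lasso

def neurogenesis_step(D, X, err_thresh=0.1, birth=4, death_tol=1e-3):
    """D: (d, k) dictionary with atoms as columns; X: (n, d) batch of inputs."""
    # Sparse codes for the current batch (one small Lasso per sample).
    codes = np.stack([Lasso(alpha=0.1).fit(D, x).coef_ for x in X])
    err = np.linalg.norm(X - codes @ D.T) / np.linalg.norm(X)
    if err > err_thresh:                               # "neuronal birth"
        D = np.hstack([D, np.random.randn(D.shape[0], birth)])
    # ... dictionary update with l1/l2 (group-sparsity) shrinkage goes here ...
    alive = np.linalg.norm(D, axis=0) > death_tol      # "neuronal death"
    D = D[:, alive]
    return D / np.linalg.norm(D, axis=0)               # renormalize atoms
```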
The authors propose a simple modification of online dictionary learning: inspired by neurogenesis, they propose to add steps of atom addition or atom deletion in order to extend the online dictionary learning algorithm of Mairal et al. Such extensions help adapt the dictionary to changing properties of the data. The online adaptation is very interesting, even if it is quite simple. The overall algorithm is quite reasonable, but not always described in sufficient detail: for example, the thresholds or conditions for neuronal birth or death are not supported by a strong analysis, even if the resulting algorithm seems to perform well on quite extensive experiments. The overall idea is nevertheless interesting (even if not completely new), and the paper is generally well written and pretty easy to follow. The analysis is however quite minimal: it could have been interesting to study the evolving properties of the dictionary, to analyse its accuracy in following the changes in the data, etc. Still, this is nice work!
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNEUROGENESIS-INSPIRED DICTIONARY LEARNING: ONLINE MODEL ADAPTION IN A CHANGING WORLD\nIn this paper, we focus on online representation learning in non-stationary environments which may require continuous adaptation of model’s architecture. We propose a novel online dictionary-learning (sparse-coding) framework which incorporates the addition and deletion of hidden units (dictionary elements), and is inspired by the adult neurogenesis phenomenon in the dentate gyrus of the hippocampus, known to be associated with improved cognitive function and adaptation to new environments. In the online learning setting, where new input instances arrive sequentially in batches, the “neuronal birth” is implemented by adding new units with random initial weights (random dictionary elements); the number of new units is determined by the current performance (representation error) of the dictionary, higher error causing an increase in the birth rate. “Neuronal death” is implemented by imposing l1/l2-regularization (group sparsity) on the dictionary within the block-coordinate descent optimization at each iteration of our online alternating minimization scheme, which iterates between the code and dictionary updates. Finally, hidden unit connectivity adaptation is facilitated by introducing sparsity in dictionary elements. Our empirical evaluation on several real-life datasets (images and language) as well as on synthetic data demonstrates that the proposed approach can considerably outperform the state-of-art fixed-size (nonadaptive) online sparse coding of Mairal et al. (2009) in the presence of nonstationary data. Moreover, we identify certain properties of the data (e.g., sparse inputs with nearly non-overlapping supports) and of the model (e.g., dictionary sparsity) associated with such improvements.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors propose a simple modification of online dictionary learning: inspired by neurogenesis, they propose to add steps of atom addition, or atom deletion, in order to extent the online dictionary learning algorithm algorithm of Mairal et al. Such extensions helps to adapt the dictionary to changing properties of the data. The online adaptation is very interesting, even if it is quite simple. The overall algorithm is quite reasonable, but not always described in sufficient details: for example, the thresholds or conditions for neuronal birth or death are not supported by a strong analysis, even if the resulting algorithm seems to perform well on quite extensive experiments. The overall idea is nevertheless interesting (even if not completely new), and the paper generally well written and pretty easy to follow. The analysis is however quite minimal: it could have been interesting to study the evolving properties of the dictionary, to analyse its accuracy for following the changes in the data, etc. Still: this is a nice work!", "role": "assistant" } ]
[ "The authors propose a simple modification of online dictionary learning: inspired by neurogenesis, they propose to add steps of atom addition, or atom deletion, in order to extent the online dictionary learning algorithm algorithm of Mairal et al.", "Such extensions helps to adapt the dictionary to changing properties of the data.", "The online adaptation is very interesting, even if it is quite simple.", "The overall algorithm is quite reasonable, but not always described in sufficient details: for example, the thresholds or conditions for neuronal birth or death are not supported by a strong analysis, even if the resulting algorithm seems to perform well on quite extensive experiments.", "The overall idea is nevertheless interesting (even if not completely new), and the paper generally well written and pretty easy to follow.", "The analysis is however quite minimal: it could have been interesting to study the evolving properties of the dictionary, to analyse its accuracy for following the changes in the data, etc.", "Still: this is a nice work!" ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 2, "praise": 3, "presentation_and_reporting": 3, "results_and_discussion": 1, "suggestion_and_solution": 3, "total": 7 }
0.285714
0
0.428571
0.285714
0.428571
0.428571
0.142857
0.428571
7
1
0
1
1
1
1
1
1
2.428571
1.860404
0.568167
Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models
Neural sequence models are widely used to model time-series data. Equally ubiquitous is the usage of beam search (BS) as an approximate inference algorithm to decode output sequences from these models. BS explores the search space in a greedy left-right fashion retaining only the top B candidates. This tends to result in sequences that differ only slightly from each other. Producing lists of nearly identical sequences is not only computationally wasteful but also typically fails to capture the inherent ambiguity of complex AI tasks. To overcome this problem, we propose Diverse Beam Search (DBS), an alternative to BS that decodes a list of diverse outputs by optimizing a diversity-augmented objective. We observe that our method not only improves diversity but also finds better top 1 solutions by controlling for the exploration and exploitation of the search space. Moreover, these gains are achieved with minimal computational or memory overhead compared to beam search. To demonstrate the broad applicability of our method, we present results on image captioning, machine translation, conversation and visual question generation using both standard quantitative metrics and qualitative human studies. We find that our method consistently outperforms BS and previously proposed techniques for diverse decoding from neural sequence models.
The paper addresses an important problem, namely how to improve diversity in responses. It is laudable that the authors show results on several tasks, demonstrating applicability across different problems. In my view there are two weaknesses at this point: 1) the improvements (for essentially all tasks) seem rather minor and do not really fit the overall claim of the paper, and 2) the approach seems quite ad hoc, and it is unclear to me if this is something that will and should be widely adopted. Having said this, the gist of the proposed solution seems interesting but somewhat premature.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDiverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models\nNeural sequence models are widely used to model time-series data. Equally ubiquitous is the usage of beam search (BS) as an approximate inference algorithm to decode output sequences from these models. BS explores the search space in a greedy left-right fashion retaining only the top B candidates. This tends to result in sequences that differ only slightly from each other. Producing lists of nearly identical sequences is not only computationally wasteful but also typically fails to capture the inherent ambiguity of complex AI tasks. To overcome this problem, we propose Diverse Beam Search (DBS), an alternative to BS that decodes a list of diverse outputs by optimizing a diversity-augmented objective. We observe that our method not only improved diversity but also finds better top 1 solutions by controlling for the exploration and exploitation of the search space. Moreover, these gains are achieved with minimal computational or memory overhead com- pared to beam search. To demonstrate the broad applicability of our method, we present results on image captioning, machine translation, conversation and visual question generation using both standard quantitative metrics and qualitative human studies. We find that our method consistently outperforms BS and previously proposed techniques for diverse decoding from neural sequence models.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper addresses an important problem - namely on how to improve diversity in responses. It is applaudable that the authors show results on several tasks showing the applicability across different problems. In my view there are two weaknesses at this point 1) the improvements (for essentially all tasks) seem rather minor and do not really fit the overall claim of the paper 2) the approach seems quite ad hoc and it unclear to me if this is something that will and should be widely adopted. 
Having said this, the gist of the proposed solution seems interesting but somewhat premature.", "role": "assistant" } ]
[ "The paper addresses an important problem - namely on how to improve diversity in responses.", "It is applaudable that the authors show results on several tasks showing the applicability across different problems.", "In my view there are two weaknesses at this point 1) the improvements (for essentially all tasks) seem rather minor and do not really fit the overall claim of the paper 2) the approach seems quite ad hoc and it unclear to me if this is something that will and should be widely adopted.", "Having said this the gist of the proposed solution seems interesting but somewhat premature." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 1, "praise": 2, "presentation_and_reporting": 0, "results_and_discussion": 2, "suggestion_and_solution": 0, "total": 4 }
0.5
0
0.5
0.25
0.5
0
0.5
0
4
1
0
1
1
1
0
1
0
2.25
0.971623
1.278377
Deep Generalized Canonical Correlation Analysis
We present Deep Generalized Canonical Correlation Analysis (DGCCA) – a method for learning nonlinear transformations of arbitrarily many views of data, such that the resulting transformations are maximally informative of each other. While methods for nonlinear two-view representation learning (Deep CCA, (Andrew et al., 2013)) and linear many-view representation learning (Generalized CCA (Horst, 1961)) exist, DGCCA is the first CCA-style multiview representation learning technique that combines the flexibility of nonlinear (deep) representation learning with the statistical power of incorporating information from many independent sources, or views. We present the DGCCA formulation as well as an efficient stochastic optimization algorithm for solving it. We learn DGCCA representations on two distinct datasets for three downstream tasks: phonetic transcription from acoustic and articulatory measurements, and recommending hashtags and friends on a dataset of Twitter users. We find that DGCCA representations soundly beat existing methods at phonetic transcription and hashtag recommendation, and in general perform no worse than standard linear many-view techniques.
The proposed method is simple and elegant; it builds upon the huge success of gradient based optimization for deep non-linear function approximators and combines it with established (linear) many-view CCA methods. A major contribution of this paper is the derivation of the gradients with respect to the non-linear encoding networks which project the different views into a common space. The derivation seems correct. In general this approach seems very interesting and I could imagine that it might be applicable to many other similarly structured problems. The paper is well written; but it could be enhanced with an explicit description of the complete algorithm which also highlights how the joint embeddings G and U are updated. I don’t have prior experience with CCA-style many-view techniques and it is therefore hard for me to judge the practical/empirical progress presented here. But the experiments seem reasonably convincing, although generally only performed on small and medium-sized datasets. Detailed comments: The colours or the sign of the x-axis in figure 3b seem to be flipped compared to figure 4. It would be nice to additionally see a continuous (rainbow-coloured) version for Figures 2, 3 and 4 to better identify neighbouring datapoints; but more importantly: I’d like to see how the average reconstruction error between the individual network outputs and the learned representation develops during training. Is the mismatch between different views on a validation/test-set a useful metric for cross validation? In general, it seems the method is sensitive to regularization and hyperparameter selection (because it has many more parameters compared to GCCA and different regularization parameters have been chosen for different views) and I wonder if there is a clear metric to optimize these.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDeep Generalized Canonical Correlation Analysis\nWe present Deep Generalized Canonical Correlation Analysis (DGCCA) – a method for learning nonlinear transformations of arbitrarily many views of data, such that the resulting transformations are maximally informative of each other. While methods for nonlinear two-view representation learning (Deep CCA, (Andrew et al., 2013)) and linear many-view representation learning (Generalized CCA (Horst, 1961)) exist, DGCCA is the first CCA-style multiview representation learning technique that combines the flexibility of nonlinear (deep) representation learning with the statistical power of incorporating information from many independent sources, or views. We present the DGCCA formulation as well as an efficient stochastic optimization algorithm for solving it. We learn DGCCA representations on two distinct datasets for three downstream tasks: phonetic transcription from acoustic and articulatory measurements, and recommending hashtags and friends on a dataset of Twitter users. We find that DGCCA representations soundly beat existing methods at phonetic transcription and hashtag recommendation, and in general perform no worse than standard linear many-view techniques.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe proposed method is simple and elegant; it builds upon the huge success of gradient based optimization for deep non-linear function approximators and combines it with established (linear) many-view CCA methods. A major contribution of this paper is the derivation of the gradients with respect to the non-linear encoding networks which project the different views into a common space. The derivation seems correct. In general this approach seems very interesting and I could imagine that it might be applicable to many other similarly structured problems. The paper is well written; but it could be enhanced with an explicit description of the complete algorithm which also highlights how the joint embeddings G and U are updated. 
I don’t have prior experience with CCA-style many-view techniques and it is therefore hard for me to judge the practical/empirical progress presented here. But the experiments seem reasonably convincing, although generally only performed on small and medium-sized datasets. Detailed comments: The colours or the sign of the x-axis in figure 3b seem to be flipped compared to figure 4. It would be nice to additionally see a continuous (rainbow-coloured) version for Figures 2, 3 and 4 to better identify neighbouring datapoints; but more importantly: I’d like to see how the average reconstruction error between the individual network outputs and the learned representation develops during training. Is the mismatch between different views on a validation/test-set a useful metric for cross validation? In general, it seems the method is sensitive to regularization and hyperparameter selection (because it has many more parameters compared to GCCA and different regularization parameters have been chosen for different views) and I wonder if there is a clear metric to optimize these.", "role": "assistant" } ]
[ "The proposed method is simple and elegant; it builds upon the huge success of gradient based optimization for deep non-linear function approximators and combines it with established (linear) many-view CCA methods.", "A major contribution of this paper is the derivation of the gradients with respect to the non-linear encoding networks which project the different views into a common space.", "The derivation seems correct.", "In general this approach seems very interesting and I could imagine that it might be applicable to many other similarly structured problems.", "The paper is well written; but it could be enhanced with an explicit description of the complete algorithm which also highlights how the joint embeddings G and U are updated.", "I don’t have prior experience with CCA-style many-view techniques and it is therefore hard for me to judge the practical/empirical progress presented here.", "But the experiments seem reasonable convincing; although generally only performed on small and medium sized datasets.", "Detailed comments: The colours or the sign of the x-axis in figure 3b seem to be flipped compared to figure 4.", "It would be nice to additionally see a continuous (rainbow-coloured) version for Figures 2, 3 and 4 to better identify neighbouring datapoints; but more importantly: I’d like to see how the average reconstruction error between the individual network outputs and the learned representation develop during training.", "Is the mismatch between different views on a validation/test-set a useful metric for cross validation?", "In general, it seems the method is sensitive to regularization and hyperparameter selection (because it has many more parameters compared to GCCA and different regularization parameters have been chosen for different views) and I wonder if there is a clear metric to optimize these." ]
{ "criticism": 1, "example": 2, "importance_and_relevance": 3, "materials_and_methods": 8, "praise": 5, "presentation_and_reporting": 4, "results_and_discussion": 0, "suggestion_and_solution": 2, "total": 11 }
0.090909
0.181818
0.272727
0.727273
0.454545
0.363636
0
0.181818
11
1
1
1
1
1
1
0
1
2.272727
2.209598
0.06313
Recurrent Inference Machines for Solving Inverse Problems
Inverse problems are typically solved by first defining a model and then choosing an inference procedure. With this separation of modeling from inference, inverse problems can be framed in a modular way. For example, variational inference can be applied to a broad class of models. The modularity, however, typically goes away after model parameters have been trained under a chosen inference procedure. During training, model and inference often interact in a way that the model parameters will ultimately be adapted to the chosen inference procedure, posing the two components inseparable after training. But if model and inference become inseparable after training, why separate them in the first place? We propose a novel learning framework which abandons the dichotomy between model and inference. Instead, we introduce Recurrent Inference Machines (RIM), a class of recurrent neural networks (RNN) that directly learn to solve inverse problems. We demonstrate the effectiveness of RIMs in experiments on various image reconstruction tasks. We show empirically that RIMs exhibit the desirable convergence behavior of classical inference procedures, and that they can outperform state-of-the-art methods when trained on specialized inference tasks. Our approach bridges the gap between inverse problems and deep learning, providing a framework for fast progression in the field of inverse problems.
Unfortunately, even after reading the authors' response to my pre-review question, I feel this paper in its current form lacks sufficient novelty to be accepted to ICLR. Fundamentally, the paper suggests that traditional iterative algorithms for a specific class of problems (ill-posed image inverse problems) can be replaced by discriminatively trained recurrent networks. As R3 also notes, un-rolled networks for iterative inference aren't new: they've been used to replace CRF-type inference, and _also_ to solve image inverse problems (my refs [1-3]). Therefore, I'd argue that the fundamental idea proposed by the paper isn't new---it is just that the paper seeks to 'formalize' it as an approach for inverse problems (although, there is nothing specific about the analysis that ties it to inverse problems: the paper only shows that the RIM can express gradient descent over prior + likelihood objective). I also did not find the claims about benefits over prior approaches very compelling. The comment about parameter sharing works both ways---it is possible that untying the parameters leads to better performance over a smaller number of 'iterations', and given that the 'training set' is synthetically generated, learning a larger number of parameters doesn't seem to be an issue. Also, I'd argue that sharing the parameters is the 'obvious' approach, and the prior methods choose to not tie the parameters to get better accuracy. The same holds for being able to handle different noise levels / scale sizes. A single model can always be trained to handle multiple forms of degradation---it's just that it's likely to do better when it's trained for a specific degradation model/level. But more importantly, there is no evidence in the current set of experiments that shows that this is a property of the RIM architecture. (Moreover, this claim goes against one of the motivations of the paper of not training a single prior for different observation models ... but to train the entire inference architecture end-to-end). It is possible that the proposed method does offer practical benefits beyond prior work---but these benefits don't come from the idea of simply unrolling iterations, which is not novel. I would strongly recommend that the authors consider a significant re-write of the paper---with a detailed discussion of prior work mentioned in the comments that highlights, with experiments, the specific aspects of their recurrent architecture that enable better recovery for inverse problems. I would also suggest that to claim the mantle of 'solving inverse problems', the paper consider a broader set of inverse tasks---in-painting, deconvolution, different noise models, and possibly working with multiple observations (like for HDR).
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRecurrent Inference Machines for Solving Inverse Problems\nInverse problems are typically solved by first defining a model and then choosing an inference procedure. With this separation of modeling from inference, inverse problems can be framed in a modular way. For example, variational inference can be applied to a broad class of models. The modularity, however, typically goes away after model parameters have been trained under a chosen inference procedure. During training, model and inference often interact in a way that the model parameters will ultimately be adapted to the chosen inference procedure, posing the two components inseparable after training. But if model and inference become inseperable after training, why separate them in the first place? We propose a novel learning framework which abandons the dichotomy between model and inference. Instead, we introduce Recurrent Inference Machines (RIM), a class of recurrent neural networks (RNN) that directly learn to solve inverse problems. We demonstrate the effectiveness of RIMs in experiments on various image reconstruction tasks. We show empirically that RIMs exhibit the desirable convergence behavior of classical inference procedures, and that they can outperform state-of- the-art methods when trained on specialized inference tasks. Our approach bridges the gap between inverse problems and deep learning, providing a framework for fast progression in the field of inverse problems.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nUnfortunately, even after reading the authors* response to my pre-review question, I feel this paper in its current form lacks sufficient novelty to be accepted to ICLR. Fundamentally, the paper suggests that traditional iterative algorithms for specific class of problems (ill-posed image inverse problems) can be replaced by discriminatively trained recurrent networks. 
As R3 also notes, un-rolled networks for iterative inference aren't new: they've been used to replace CRF-type inference, and _also_ to solve image inverse problems (my refs [1-3]). Therefore, I'd argue that the fundamental idea proposed by the paper isn't new---it is just that the paper seeks to 'formalize' it as an approach for inverse problems (although, there is nothing specific about the analysis that ties it to inverse problems: the paper only shows that the RIM can express gradient descent over prior + likelihood objective). I also did not find the claims about benefits over prior approaches very compelling. The comment about parameter sharing works both ways---it is possible that untying the parameters leads to better performance over a smaller number of 'iterations', and given that the 'training set' is synthetically generated, learning a larger number of parameters doesn't seem to be an issue. Also, I'd argue that sharing the parameters is the 'obvious' approach, and the prior methods choose to not tie the parameters to get better accuracy. The same holds for being able to handle different noise levels / scale sizes. A single model can always be trained to handle multiple forms of degradation---it's just that it's likely to do better when it's trained for a specific degradation model/level. But more importantly, there is no evidence in the current set of experiments that shows that this is a property of the RIM architecture. (Moreover, this claim goes against one of the motivations of the paper of not training a single prior for different observation models ... but to train the entire inference architecture end-to-end). It is possible that the proposed method does offer practical benefits beyond prior work---but these benefits don't come from the idea of simply unrolling iterations, which is not novel. I would strongly recommend that the authors consider a significant re-write of the paper---with a detailed discussion of prior work mentioned in the comments that highlights, with experiments, the specific aspects of their recurrent architecture that enable better recovery for inverse problems. I would also suggest that to claim the mantle of 'solving inverse problems', the paper consider a broader set of inverse tasks---in-painting, deconvolution, different noise models, and possibly working with multiple observations (like for HDR).", "role": "assistant" } ]
[ "Unfortunately, even after reading the authors* response to my pre-review question, I feel this paper in its current form lacks sufficient novelty to be accepted to ICLR.", "Fundamentally, the paper suggests that traditional iterative algorithms for specific class of problems (ill-posed image inverse problems) can be replaced by discriminatively trained recurrent networks.", "As R3 also notes, un-rolled networks for iterative inference aren*t new: they*ve been used to replace CRF-type inference, and _also_ to solve image inverse problems (my refs [1-3]).", "Therefore, I*d argue that the fundamental idea proposed by the paper isn*t new---it is just that the paper seeks to *formalize* it as an approach for inverse problems (although, there is nothing specific about the analysis that ties it to inverse problems: the paper only shows that the RIM can express gradient descent over prior + likelihood objective).", "I also did not find the claims about benefits over prior approaches very compelling.", "The comment about parameter sharing works both ways---it is possible that untying the parameters leads to better performance over a fewer number of *iterations*, and given that the *training set* is synthetically generated, learning a larger number of parameters doesn*t seem to be an issue.", "Also, I*d argue that sharing the parameters is the *obvious* approach, and the prior methods choose to not tie the parameters to get better accuracy.", "The same holds for being able to handle different noise levels / scale sizes.", "A single model can always be trained to handle multiple forms of degradation---its just that its likely to do better when it*s trained for specific degradation model/level.", "But more importantly, there is no evidence in the current set of experiments that shows that this is a property of the RIM architecture.", "(Moreover, this claim goes against one of the motivations of the paper of not training a single prior for different observation models ... but to train the entire inference architecture end-to-end).", "It is possible that the proposed method does offer practical benefits beyond prior work---but these benefits don*t come from the idea of simply unrolling iterations, which is not novel.", "I would strongly recommend that the authors consider a significant re-write of the paper---with a detailed discussion of prior work mentioned in the comments that highlights, with experiments, the specific aspects of their recurrent architecture that enables better recovery for inverse problems.", "I would also suggest that to claim the mantle of *solving inverse problems*, the paper consider a broader set of inverse tasks---in-painting, deconvolution, different noise models, and possibly working with multiple observations (like for HDR)." ]
{ "criticism": 6, "example": 1, "importance_and_relevance": 3, "materials_and_methods": 13, "praise": 1, "presentation_and_reporting": 0, "results_and_discussion": 5, "suggestion_and_solution": 2, "total": 14 }
0.428571
0.071429
0.214286
0.928571
0.071429
0
0.357143
0.142857
14
1
1
1
1
1
0
1
1
2.214286
2.198503
0.015782
Semi-Supervised Learning with Context-Conditional Generative Adversarial Networks
We introduce a simple semi-supervised learning approach for images based on in-painting using an adversarial loss. Images with random patches removed are presented to a generator whose task is to fill in the hole, based on the surrounding pixels. The in-painted images are then presented to a discriminator network that judges if they are real (unaltered training images) or not. This task acts as a regularizer for standard supervised training of the discriminator. Using our approach we are able to directly train large VGG-style networks in a semi-supervised fashion. We evaluate on STL-10 and PASCAL datasets, where our approach obtains performance comparable or superior to existing methods.
This paper presents a semi-supervised algorithm for regularizing deep convolutional neural networks. They propose an adversarial approach for image inpainting where the discriminator learns to identify whether an inpainted image comes from the data distribution or the generator, while at the same time it learns to recognize objects in an image from the data distribution. In experiments, they show the usefulness of their algorithm, in which the features learned by the discriminator result in object recognition performance comparable to or better than the reported state-of-the-art on two datasets. Overall, the proposed idea seems a simple yet effective way to regularize CNNs and improve classification performance.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSemi-Supervised Learning with Context-Conditional Generative Adversarial Networks\nWe introduce a simple semi-supervised learning approach for images based on in-painting using an adversarial loss. Images with random patches removed are presented to a generator whose task is to fill in the hole, based on the surrounding pixels. The in-painted images are then presented to a discriminator network that judges if they are real (unaltered training images) or not. This task acts as a regularizer for standard supervised training of the discriminator. Using our approach we are able to directly train large VGG-style networks in a semi-supervised fashion. We evaluate on STL-10 and PASCAL datasets, where our approach obtains performance comparable or superior to existing methods.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents a semi-supervised algorithm for regularizing deep convolutional neural networks. They propose an adversarial approach for image inpainting where the discriminator learns to identify whether an inpainted image comes from the data distribution or the generator, while at the same time it learns to recognize objects in an image from the data distribution. In experiments, they show the usefulness of their algorithm in which the features learned by the discriminator result in comparable or better object recognition performance to the reported state-of-the-art in two datasets. Overall, the proposed idea seems a simple yet an effective way for regularize CNNs to improve the classification performance.", "role": "assistant" } ]
[ "This paper presents a semi-supervised algorithm for regularizing deep convolutional neural networks.", "They propose an adversarial approach for image inpainting where the discriminator learns to identify whether an inpainted image comes from the data distribution or the generator, while at the same time it learns to recognize objects in an image from the data distribution.", "In experiments, they show the usefulness of their algorithm in which the features learned by the discriminator result in comparable or better object recognition performance to the reported state-of-the-art in two datasets.", "Overall, the proposed idea seems a simple yet an effective way for regularize CNNs to improve the classification performance." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 4, "praise": 2, "presentation_and_reporting": 0, "results_and_discussion": 1, "suggestion_and_solution": 0, "total": 4 }
0
0
0.5
1
0.5
0
0.25
0
4
0
0
1
1
1
0
1
0
2.25
0.971623
1.278377
Semi-Supervised Learning with Context-Conditional Generative Adversarial Networks
We introduce a simple semi-supervised learning approach for images based on in-painting using an adversarial loss. Images with random patches removed are presented to a generator whose task is to fill in the hole, based on the surrounding pixels. The in-painted images are then presented to a discriminator network that judges if they are real (unaltered training images) or not. This task acts as a regularizer for standard supervised training of the discriminator. Using our approach we are able to directly train large VGG-style networks in a semi-supervised fashion. We evaluate on STL-10 and PASCAL datasets, where our approach obtains performance comparable or superior to existing methods.
This paper proposes a method to incorporate super-resolution and inpainting in the GAN framework for semi-supervised learning using the GAN discriminative features on larger images. The core idea of the paper is not very novel. The usefulness of the GAN discriminative features for semi-supervised learning is already established in previous works such as CatGAN, DCGAN and Salimans et al. However, this paper does a good job in actually getting the semi-supervised GAN framework working on larger images such as STL-10 and Pascal datasets using the proposed context conditioning approach, and achieves the state-of-the-art on these datasets. I think that the authors should provide the SSL-GAN baseline for the PASCAL dataset as it is very important to compare the contribution of the context conditioning idea with the standard way of using GAN for semi-supervised learning, i.e., SSL-GAN. I can't see why the SSL-GAN cannot be applied to the 64×64 and 96×96 versions of the Pascal dataset (Table 3). If they have trouble training the vanilla GAN on Pascal even at the 64×64 image size, this should be mentioned in the paper and explained. I am concerned about this especially because CC-GAN almost matches the SSL-GAN baseline on STL-10, and CC-GAN2, to me, seems like a hacky way to improve upon the core CC-GAN idea. So it would be great to compare CC-GAN and SSL-GAN on some other dataset, even if it is a downsampled PASCAL dataset.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSemi-Supervised Learning with Context-Conditional Generative Adversarial Networks\nWe introduce a simple semi-supervised learning approach for images based on in-painting using an adversarial loss. Images with random patches removed are presented to a generator whose task is to fill in the hole, based on the surrounding pixels. The in-painted images are then presented to a discriminator network that judges if they are real (unaltered training images) or not. This task acts as a regularizer for standard supervised training of the discriminator. Using our approach we are able to directly train large VGG-style networks in a semi-supervised fashion. We evaluate on STL-10 and PASCAL datasets, where our approach obtains performance comparable or superior to existing methods.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a method to incorporate super-resolution and inpainting in the GAN framework for semi-supervised learning using the GAN discriminative features on larger images. The core idea of the paper is not very novel. The usefulness of the GAN discriminative features for semi-supervised learning is already established in previous works such as CatGAN, DCGAN and Salimans et al. However this paper does a good job in actually getting the semi-supervised GAN framework working on larger images such as STL-10 and Pascal datasets using the proposed context conditioning approach, and achieves the state-of-the-art on these datasets. I think that the authors should provide the SSL-GAN baseline for the PASCAL dataset as it is very important to compare the contribution of the context conditioning idea with the standard way of using GAN for semi-supervised learning, i.e., SSL-GAN. I can*t see why the SSL-GAN can not be applied to the 64*64 and 96*96 version of the Pascal dataset (Table 3). If they have trouble training the vanilla GAN on Pascal even on the 64*64 image size, this should be mentioned in the paper and be explained. 
I am concerned about this especially because CC-GAN almost matches the SSL-GAN baseline on STL-10, and CC-GAN2, to me, seems like a hacky way to improve upon the core CC-GAN idea. So it would be great to compare CC-GAN and SSL-GAN on some other dataset, even if it is a downsampled PASCAL dataset.", "role": "assistant" } ]
[ "This paper proposes a method to incorporate super-resolution and inpainting in the GAN framework for semi-supervised learning using the GAN discriminative features on larger images.", "The core idea of the paper is not very novel.", "The usefulness of the GAN discriminative features for semi-supervised learning is already established in previous works such as CatGAN, DCGAN and Salimans et al.", "However this paper does a good job in actually getting the semi-supervised GAN framework working on larger images such as STL-10 and Pascal datasets using the proposed context conditioning approach, and achieves the state-of-the-art on these datasets.", "I think that the authors should provide the SSL-GAN baseline for the PASCAL dataset as it is very important to compare the contribution of the context conditioning idea with the standard way of using GAN for semi-supervised learning, i.e., SSL-GAN.", "I can*t see why the SSL-GAN can not be applied to the 64*64 and 96*96 version of the Pascal dataset (Table 3).", "If they have trouble training the vanilla GAN on Pascal even on the 64*64 image size, this should be mentioned in the paper and be explained.", "I am concerned about this specially because CC-GAN almost matches the SSL-GAN baseline on STL-10, and CC-GAN2, to me, seems like a hacky way to improve upon the core CC-GAN idea.", "So it would be great to compare CC-GAN and SSL-GAN on some other dataset, even if it is a downsampled PASCAL dataset." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 7, "praise": 2, "presentation_and_reporting": 3, "results_and_discussion": 0, "suggestion_and_solution": 3, "total": 9 }
0.222222
0
0.333333
0.777778
0.222222
0.333333
0
0.333333
9
1
0
1
1
1
1
0
1
2.222222
1.969703
0.252519
Investigating Different Context Types and Representations for Learning Word Embeddings
The number of word embedding models is growing every year. Most of them learn word embeddings based on the co-occurrence information of words and their context. However, it's still an open question what is the best definition of context. We provide the first systematic investigation of different context types and representations for learning word embeddings. We conduct comprehensive experiments to evaluate their effectiveness under 4 tasks (21 datasets), which give us some insights about context selection. We hope that this paper, along with the published code, can serve as a guideline for choosing context for our community.
This paper analyzes dependency trees vs. standard window contexts for word vector learning. While that's a good goal, I believe the paper falls short of a thorough analysis of the subject matter. It does not analyze GloVe-like objective functions, which often work better than the algorithms used here. It doesn't compare in absolute terms to other published vectors or models. It fails to gain any particularly interesting insights that will modify other people's work. It fails to push the state of the art or make available new resources for people.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nInvestigating Different Context Types and Representations for Learning Word Embeddings\nThe number of word embedding models is growing every year. Most of them learn word embeddings based on the co-occurrence information of words and their context. However, it*s still an open question what is the best definition of context. We provide the first systematical investigation of different context types and representations for learning word embeddings. We conduct comprehensive experiments to evaluate their effectiveness under 4 tasks (21 datasets), which give us some insights about context selection. We hope that this paper, along with the published code, can serve as a guideline of choosing context for our community.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper analyzes dependency trees vs standard window contexts for word vector learning. While that*s a good goal I believe the paper falls short of a thorough analysis of the subject matter. It does not analyze Glove like objective functions which often work better than the algorithms used here. It doesn*t compare in absolute terms to other published vectors or models. It fails to gain any particularly interesting insights that will modify other people*s work. It fails to push the state of the art or make available new resources for people.", "role": "assistant" } ]
[ "This paper analyzes dependency trees vs standard window contexts for word vector learning.", "While that*s a good goal I believe the paper falls short of a thorough analysis of the subject matter.", "It does not analyze Glove like objective functions which often work better than the algorithms used here.", "It doesn*t compare in absolute terms to other published vectors or models.", "It fails to gain any particularly interesting insights that will modify other people*s work.", "It fails to push the state of the art or make available new resources for people." ]
{ "criticism": 4, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 4, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 0, "suggestion_and_solution": 0, "total": 6 }
0.666667
0
0.333333
0.666667
0.333333
0.166667
0
0
6
1
0
1
1
1
1
0
0
2.166667
1.393328
0.773339
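To make the record above concrete: the review contrasts dependency-tree contexts with the standard linear-window contexts most embedding models train on. A minimal illustrative sketch (entirely hypothetical code, not from the reviewed paper; the function name and toy sentence are invented):

```python
# Extract (word, context) pairs from a symmetric linear window, the
# co-occurrence signal that window-based embedding models learn from.
def window_contexts(tokens, window=2):
    pairs = []
    for i, word in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((word, tokens[j]))
    return pairs

print(window_contexts("the cat sat on the mat".split(), window=2))
```

A dependency-based variant would instead pair each word with its syntactic head and dependents, which is the alternative definition of context the review discusses.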
Bridging Nonlinearities and Stochastic Regularizers with Gaussian Error Linear Units
We propose the Gaussian Error Linear Unit (GELU), a high-performing neural network activation function. The GELU nonlinearity is the expected transformation of a stochastic regularizer which randomly applies the identity or zero map to a neuron's input. This stochastic regularizer is comparable to nonlinearities aided by dropout, but it removes the need for a traditional nonlinearity. The connection between the GELU and the stochastic regularizer suggests a new probabilistic understanding of nonlinearities. We perform an empirical evaluation of the GELU nonlinearity against the ReLU and ELU activations and find performance improvements across all tasks.
The proposed regularizer seems to be a particular combination of existing methods. Though the implied connection between nonlinearities and stochastic regularizers is intriguing, in my opinion the empirical performance does not exceed the performance achieved by similar methods by a large enough margin to arrive at a meaningful conclusion.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start by summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Does the work cover a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will point them out and suggest how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how the work will affect the scientific community. This might include the applications of the work and how it might transform the way we study a certain topic. Finally, I will only mention issues that are critical, to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nBridging Nonlinearities and Stochastic Regularizers with Gaussian Error Linear Units\nWe propose the Gaussian Error Linear Unit (GELU), a high-performing neural network activation function. The GELU nonlinearity is the expected transformation of a stochastic regularizer which randomly applies the identity or zero map to a neuron*s input. This stochastic regularizer is comparable to nonlinearities aided by dropout, but it removes the need for a traditional nonlinearity. The connection between the GELU and the stochastic regularizer suggests a new probabilistic understanding of nonlinearities. We perform an empirical evaluation of the GELU nonlinearity against the ReLU and ELU activations and find performance improvements across all tasks.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe proposed regularizer seems to be a particular combination of existing methods. Though the implied connection between nonlinearities and stochastic regularizers is intriguing, in my opinion the empirical performance does not exceed the performance achieved by similar methods by a large enough margin to arrive at a meaningful conclusion.", "role": "assistant" } ]
[ "The proposed regularizer seems to be a particular combination of existing methods.", "Though the implied connection between nonlinearities and stochastic regularizers is intriguing, in my opinion the empirical performance does not exceed the performance achieved by similar methods by a large enough margin to arrive at a meaningful conclusion." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 2, "praise": 1, "presentation_and_reporting": 0, "results_and_discussion": 1, "suggestion_and_solution": 0, "total": 2 }
0.5
0
0
1
0.5
0
0.5
0
2
1
0
0
1
1
0
1
0
2.5
0.590326
1.909674
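The GELU record above defines the activation as the expected transformation of a stochastic regularizer that keeps a neuron's input with probability Phi(x) (the standard normal CDF) and zeroes it otherwise. A small self-contained sketch of both views (illustrative code, not the paper's implementation):

```python
import math, random

def gelu(x):
    # Exact GELU: x * Phi(x), with Phi the standard normal CDF.
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def stochastic_zero_or_identity(x, n=100000):
    # The stochastic regularizer from the abstract: keep x with
    # probability Phi(x), zero it otherwise; GELU is its expectation.
    phi = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return sum(x if random.random() < phi else 0.0 for _ in range(n)) / n

print(gelu(1.0), stochastic_zero_or_identity(1.0))  # both ~0.841
```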
The Variational Walkback Algorithm
A recognized obstacle to training undirected graphical models with latent variables such as Boltzmann machines is that the maximum likelihood training procedure requires sampling from Monte-Carlo Markov chains which may not mix well, in the inner loop of training, for each example. We first propose the idea that it is sufficient to locally carve the energy function everywhere so that its gradient points in the 'right' direction (i.e., towards generating the data). Following on previous work on contrastive divergence, denoising autoencoders, generative stochastic networks and unsupervised learning using non-equilibrium dynamics, we propose a variational bound on the marginal log-likelihood of the data which corresponds to a new learning procedure that first walks away from data points by following the model transition operator and then trains that operator to walk backwards for each of these steps, back towards the training example. The tightness of the variational bound relies on gradually increasing temperature as we walk away from the data, at each step providing a gradient on the parameters to maximize the probability that the transition operator returns to its previous state. Interestingly, this algorithm admits a variant where there is no explicit energy function, i.e., the parameters are used to directly define the transition operator. This also eliminates the explicit need for symmetric weights which previous Boltzmann machine or Hopfield net models require, and which makes these models less biologically plausible.
The authors present a method for training probabilistic models by maximizing a stochastic variational-lower-bound-type objective. Training involves sampling and then learning a transition-based inference to 'walk back' samples to the data. Because of its focus on transitions, it can be used to learn a raw transition operator rather than purely learning an energy-based model. The objective is intuitively appealing because of its similarity to previous successful but less principled training methods for MRFs like Contrastive Divergence. The idea for the algorithm is appealing, and it looks like it could find a nice place in the literature. However, the submission in its current form is not yet ready for publication. Experiments are qualitative and the generated samples are not obviously indicative of a high model quality. As pointed out elsewhere, the mathematical analysis does not currently demonstrate tightness of the variational bound in the case of a learned transition operator. More evaluation using e.g. annealed importance sampling to estimate held-out likelihoods is necessary. Assuming that the analysis can be repaired, the ability to directly parametrize a transition operator, an interesting strength of this method, should be explored in further experiments and contrasted with the more standard energy-based modeling. This looks like a promising idea, and other reviews and questions have already raised some important technical points which should help strengthen this paper for future submission.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start by summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Does the work cover a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will point them out and suggest how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how the work will affect the scientific community. This might include the applications of the work and how it might transform the way we study a certain topic. Finally, I will only mention issues that are critical, to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nThe Variational Walkback Algorithm\nA recognized obstacle to training undirected graphical models with latent variables such as Boltzmann machines is that the maximum likelihood training procedure requires sampling from Monte-Carlo Markov chains which may not mix well, in the inner loop of training, for each example. We first propose the idea that it is sufficient to locally carve the energy function everywhere so that its gradient points in the *right* direction (i.e., towards generating the data). Following on previous work on contrastive divergence, denoising autoencoders, generative stochastic networks and unsupervised learning using non-equilibrium dynamics, we propose a variational bound on the marginal log-likelihood of the data which corresponds to a new learning procedure that first walks away from data points by following the model transition operator and then trains that operator to walk backwards for each of these steps, back towards the training example. The tightness of the variational bound relies on gradually increasing temperature as we walk away from the data, at each step providing a gradient on the parameters to maximize the probability that the transition operator returns to its previous state. Interestingly, this algorithm admits a variant where there is no explicit energy function, i.e., the parameters are used to directly define the transition operator. This also eliminates the explicit need for symmetric weights which previous Boltzmann machine or Hopfield net models require, and which makes these models less biologically plausible.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors present a method for training probabilistic models by maximizing a stochastic variational-lower-bound-type objective. Training involves sampling and then learning a transition-based inference to *walk back* samples to the data. Because of its focus on transitions, it can be used to learn a raw transition operator rather than purely learning an energy-based model. 
The objective is intuitively appealing because of its similarity to previous successful but less principled training methods for MRFs like Contrastive Divergence. The idea for the algorithm is appealing, and it looks like it could find a nice place in the literature. However, the submission in its current form is not yet ready for publication. Experiments are qualitative and the generated samples are not obviously indicative of a high model quality. As pointed out elsewhere, the mathematical analysis does not currently demonstrate tightness of the variational bound in the case of a learned transition operator. More evaluation using e.g. annealed importance sampling to estimate held-out likelihoods is necessary. Assuming that the analysis can be repaired, the ability to directly parametrize a transition operator, an interesting strength of this method, should be explored in further experiments and contrasted with the more standard energy-based modeling. This looks like a promising idea, and other reviews and questions have already raised some important technical points which should help strengthen this paper for future submission.", "role": "assistant" } ]
[ "The authors present a method for training probabilistic models by maximizing a stochastic variational-lower-bound-type objective.", "Training involves sampling and then learning a transition-based inference to *walk back* samples to the data.", "Because of its focus on transitions, it can be used to learn a raw transition operator rather than purely learning an energy-based model.", "The objective is intuitively appealing because of its similarity to previous successful but less principled training methods for MRFs like Contrastive Divergence.", "The idea for the algorithm is appealing, and it looks like it could find a nice place in the literature.", "However, the submission in its current form is not yet ready for publication.", "Experiments are qualitative and the generated samples are not obviously indicative of a high model quality.", "As pointed out elsewhere, the mathematical analysis does not currently demonstrate tightness of the variational bound in the case of a learned transition operator.", "More evaluation using e.g.", "annealed importance sampling to estimate held-out likelihoods is necessary.", "Assuming that the analysis can be repaired, the ability to directly parametrize a transition operator, an interesting strength of this method, should be explored in further experiments and contrasted with the more standard energy-based modeling.", "This looks like a promising idea, and other reviews and questions have already raised some important technical points which should help strengthen this paper for future submission." ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 10, "praise": 4, "presentation_and_reporting": 0, "results_and_discussion": 2, "suggestion_and_solution": 5, "total": 12 }
0.25
0
0.25
0.833333
0.333333
0
0.166667
0.416667
12
1
0
1
1
1
0
1
1
2.25
2.234218
0.015782
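The walkback abstract above describes the training loop operationally: walk away from a data point with the transition operator at increasing temperature, then train the operator to return to each previous state. A toy, self-contained sketch under strong simplifying assumptions (a 1-D Gaussian operator with a single mean parameter; all names are invented for illustration, not the paper's model):

```python
import random

mu, lr = 0.0, 0.05                      # toy operator: x' ~ N(mu, T)

def sample_step(x, T):
    return random.gauss(mu, T ** 0.5)

def grad_log_prob(target, T):
    return (target - mu) / T            # d/dmu of log N(target; mu, T)

data = [1.0] * 200                      # pretend dataset concentrated at 1
for x0 in data:
    trajectory, x = [x0], x0
    for T in (1.0, 1.5, 2.25):          # walk away, heating up each step
        x = sample_step(x, T)
        trajectory.append(x)
    for prev, T in zip(trajectory, (1.0, 1.5, 2.25)):
        mu += lr * grad_log_prob(prev, T)   # train to walk back to prev
print(round(mu, 2))                     # the operator drifts toward the data (~1.0)
```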
The Variational Walkback Algorithm
A recognized obstacle to training undirected graphical models with latent variables such as Boltzmann machines is that the maximum likelihood training procedure requires sampling from Monte-Carlo Markov chains which may not mix well, in the inner loop of training, for each example. We first propose the idea that it is sufficient to locally carve the energy function everywhere so that its gradient points in the 'right' direction (i.e., towards generating the data). Following on previous work on contrastive divergence, denoising autoencoders, generative stochastic networks and unsupervised learning using non-equilibrium dynamics, we propose a variational bound on the marginal log-likelihood of the data which corresponds to a new learning procedure that first walks away from data points by following the model transition operator and then trains that operator to walk backwards for each of these steps, back towards the training example. The tightness of the variational bound relies on gradually increasing temperature as we walk away from the data, at each step providing a gradient on the parameters to maximize the probability that the transition operator returns to its previous state. Interestingly, this algorithm admits a variant where there is no explicit energy function, i.e., the parameters are used to directly define the transition operator. This also eliminates the explicit need for symmetric weights which previous Boltzmann machine or Hopfield net models require, and which makes these models less biologically plausible.
This paper proposes a new kind of generative model based on an annealing process, where the transition probabilities are learned directly to maximize a variational lower bound on the log-likelihood. Overall, the idea is clever and appealing, but I think the paper needs more quantitative validation and better discussion of the relationship with prior work. In terms of prior work, AIS and RAISE are both closely related algorithms, and share much of the mathematical structure with the proposed method. For this reason, it’s not sufficient to mention them in passing in the related work section; those methods and their relationship to variational walkback need to be discussed in detail. If I understand correctly, the proposed method is essentially an extension of RAISE where the transition probabilities are learned rather than fixed based on an existing MRF. I think this is an interesting and worthwhile extension, but the relationship to existing work needs to be clarified. The analysis of Appendix D seems incorrect. It derives a formula for the ratios of prior and posterior probabilities, but this formula only holds under the assumption of constant temperature (in which case the ratio is very large). When the temperature is varied, the analysis of Neal (2001) applies, and the answer is different. One of the main selling points of the method is that it optimizes a variational lower bound on the log-likelihood; even more accurate estimates can be obtained using importance sampling. It ought to be easy to report log-likelihood estimates for this method, so I wonder why such estimates aren’t reported. There are lots of prior results to compare against on MNIST. (In addition, a natural baseline would be RAISE, so that one can check if the ability to learn the transitions actually helps.) I think the basic idea here is a sound one, so I would be willing to raise my score if the above issues are addressed in a revised version. Minor comments: “A recognized obstacle to training undirected graphical models… is that ML training requires sampling from MCMC chains in the inner loop of training, for each example.” This seems like an unfair characterization, since the standard algorithm is PCD, which usually takes only a single step per mini-batch. Some of the methods discussed in the related work are missing citations. The method is justified in terms of “carving the energy function in the right direction at each point”, but I’m not sure this is actually what’s happening. Isn’t the point of the method that it can optimize a lower bound on the log-likelihood, and therefore learn a globally correct allocation of probability mass?
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start by summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Does the work cover a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will point them out and suggest how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how the work will affect the scientific community. This might include the applications of the work and how it might transform the way we study a certain topic. Finally, I will only mention issues that are critical, to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nThe Variational Walkback Algorithm\nA recognized obstacle to training undirected graphical models with latent variables such as Boltzmann machines is that the maximum likelihood training procedure requires sampling from Monte-Carlo Markov chains which may not mix well, in the inner loop of training, for each example. We first propose the idea that it is sufficient to locally carve the energy function everywhere so that its gradient points in the *right* direction (i.e., towards generating the data). Following on previous work on contrastive divergence, denoising autoencoders, generative stochastic networks and unsupervised learning using non-equilibrium dynamics, we propose a variational bound on the marginal log-likelihood of the data which corresponds to a new learning procedure that first walks away from data points by following the model transition operator and then trains that operator to walk backwards for each of these steps, back towards the training example. The tightness of the variational bound relies on gradually increasing temperature as we walk away from the data, at each step providing a gradient on the parameters to maximize the probability that the transition operator returns to its previous state. Interestingly, this algorithm admits a variant where there is no explicit energy function, i.e., the parameters are used to directly define the transition operator. This also eliminates the explicit need for symmetric weights which previous Boltzmann machine or Hopfield net models require, and which makes these models less biologically plausible.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a new kind of generative model based on an annealing process, where the transition probabilities are learned directly to maximize a variational lower bound on the log-likelihood. Overall, the idea is clever and appealing, but I think the paper needs more quantitative validation and better discussion of the relationship with prior work. 
In terms of prior work, AIS and RAISE are both closely related algorithms, and share much of the mathematical structure with the proposed method. For this reason, it’s not sufficient to mention them in passing in the related work section; those methods and their relationship to variational walkback need to be discussed in detail. If I understand correctly, the proposed method is essentially an extension of RAISE where the transition probabilities are learned rather than fixed based on an existing MRF. I think this is an interesting and worthwhile extension, but the relationship to existing work needs to be clarified. The analysis of Appendix D seems incorrect. It derives a formula for the ratios of prior and posterior probabilities, but this formula only holds under the assumption of constant temperature (in which case the ratio is very large). When the temperature is varied, the analysis of Neal (2001) applies, and the answer is different. One of the main selling points of the method is that it optimizes a variational lower bound on the log-likelihood; even more accurate estimates can be obtained using importance sampling. It ought to be easy to report log-likelihood estimates for this method, so I wonder why such estimates aren’t reported. There are lots of prior results to compare against on MNIST. (In addition, a natural baseline would be RAISE, so that one can check if the ability to learn the transitions actually helps.) I think the basic idea here is a sound one, so I would be willing to raise my score if the above issues are addressed in a revised version. Minor comments: “A recognized obstacle to training undirected graphical models… is that ML training requires sampling from MCMC chains in the inner loop of training, for each example.” This seems like an unfair characterization, since the standard algorithm is PCD, which usually takes only a single step per mini-batch. Some of the methods discussed in the related work are missing citations. The method is justified in terms of “carving the energy function in the right direction at each point”, but I’m not sure this is actually what’s happening. Isn’t the point of the method that it can optimize a lower bound on the log-likelihood, and therefore learn a globally correct allocation of probability mass?", "role": "assistant" } ]
[ "This paper proposes a new kind of generative model based on an annealing process, where the transition probabilities are learned directly to maximize a variational lower bound on the log-likelihood.", "Overall, the idea is clever and appealing, but I think the paper needs more quantitative validation and better discussion of the relationship with prior work.", "In terms of prior work, AIS and RAISE are both closely related algorithms, and share much of the mathematical structure with the proposed method.", "For this reason, it’s not sufficient to mention them in passing in the related work section; those methods and their relationship to variational walkback need to be discussed in detail.", "If I understand correctly, the proposed method is essentially an extension of RAISE where the transition probabilities are learned rather than fixed based on an existing MRF.", "I think this is an interesting and worthwhile extension, but the relationship to existing work needs to be clarified.", "The analysis of Appendix D seems incorrect.", "It derives a formula for the ratios of prior and posterior probabilities, but this formula only holds under the assumption of constant temperature (in which case the ratio is very large).", "When the temperature is varied, the analysis of Neal (2001) applies, and the answer is different.", "One of the main selling points of the method is that it optimizes a variational lower bound on the log-likelihood; even more accurate estimates can be obtained using importance sampling.", "It ought to be easy to report log-likelihood estimates for this method, so I wonder why such estimates aren’t reported.", "There are lots of prior results to compare against on MNIST.", "(In addition, a natural baseline would be RAISE, so that one can check if the ability to learn the transitions actually helps.)", "I think the basic idea here is a sound one, so I would be willing to raise my score if the above issues are addressed in a revised version.", "Minor comments: “A recognized obstacle to training undirected graphical models… is that ML training requires sampling from MCMC chains in the inner loop of training, for each example.” This seems like an unfair characterization, since the standard algorithm is PCD, which usually takes only a single step per mini-batch.", "Some of the methods discussed in the related work are missing citations.", "The method is justified in terms of “carving the energy function in the right direction at each point”, but I’m not sure this is actually what’s happening.", "Isn’t the point of the method that it can optimize a lower bound on the log-likelihood, and therefore learn a globally correct allocation of probability mass?" ]
{ "criticism": 5, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 14, "praise": 2, "presentation_and_reporting": 3, "results_and_discussion": 9, "suggestion_and_solution": 6, "total": 18 }
0.277778
0
0.166667
0.777778
0.111111
0.166667
0.5
0.333333
18
1
0
1
1
1
1
1
1
2.333333
1.938773
0.394561
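The review above asks for held-out log-likelihood estimates via the variational bound or importance sampling. A generic sketch of such an importance-weighted lower bound (illustrative only; `log_joint`, `sample_q`, and `log_q` are invented stand-ins, demonstrated on a toy Gaussian model where the bound happens to be exact):

```python
import math, random

def iw_bound(x, log_joint, sample_q, log_q, k=64):
    # Importance-weighted lower bound on log p(x) using proposal q(z|x).
    log_w = []
    for _ in range(k):
        z = sample_q(x)
        log_w.append(log_joint(x, z) - log_q(z, x))
    m = max(log_w)                          # log-sum-exp for stability
    return m + math.log(sum(math.exp(w - m) for w in log_w) / k)

log_n = lambda v, mu, var: -0.5 * (math.log(2 * math.pi * var) + (v - mu) ** 2 / var)
est = iw_bound(0.3,
               log_joint=lambda x, z: log_n(z, 0, 1) + log_n(x, z, 1),
               sample_q=lambda x: random.gauss(x / 2, 0.5 ** 0.5),
               log_q=lambda z, x: log_n(z, x / 2, 0.5))
print(est)  # equals log N(0.3; 0, 2) here because q is the exact posterior
```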
Improving Stochastic Gradient Descent with Feedback
In this paper we propose a simple and efficient method for improving stochastic gradient descent methods by using feedback from the objective function. The method tracks the relative changes in the objective function with a running average, and uses it to adaptively tune the learning rate in stochastic gradient descent. We specifically apply this idea to modify Adam, a popular algorithm for training deep neural networks. We conduct experiments to compare the resulting algorithm, which we call Eve, with state of the art methods used for training deep learning models. We train CNNs for image classification, and RNNs for language modeling and question answering. Our experiments show that Eve outperforms all other algorithms on these benchmark tasks. We also analyze the behavior of the feedback mechanism during the training process.
The paper introduced an extension of the Adam optimizer that automatically adjusts the learning rate by comparing the subsequent values of the cost function during training. The authors empirically demonstrated the benefit of the Eve optimizer on CIFAR convnets, logistic regression and RNN problems. I have the following concerns about the paper: - The proposed method is VARIANT to arbitrary shifts and scaling of the cost function. - A fairer comparison with other baseline methods would be to use an additional exponential decay learning rate schedule between the lower and upper thresholds of d_t. I suspect 1/d_t just shrinks as an exponential decay from Figure 2. - Three additional hyper-parameters: k, K, beta_3. Overall, I think the method has a fundamental flaw and the paper offers very limited novelty. There is no theoretical justification for the modification, and it would be good for the authors to discuss the potential failure mode of the proposed method. Furthermore, it is hard for me to follow Section 3.2. The writing quality and clarity of the method section can be further improved.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start by summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Does the work cover a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will point them out and suggest how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how the work will affect the scientific community. This might include the applications of the work and how it might transform the way we study a certain topic. Finally, I will only mention issues that are critical, to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nImproving Stochastic Gradient Descent with Feedback\nIn this paper we propose a simple and efficient method for improving stochastic gradient descent methods by using feedback from the objective function. The method tracks the relative changes in the objective function with a running average, and uses it to adaptively tune the learning rate in stochastic gradient descent. We specifically apply this idea to modify Adam, a popular algorithm for training deep neural networks. We conduct experiments to compare the resulting algorithm, which we call Eve, with state of the art methods used for training deep learning models. We train CNNs for image classification, and RNNs for language modeling and question answering. Our experiments show that Eve outperforms all other algorithms on these benchmark tasks. We also analyze the behavior of the feedback mechanism during the training process.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper introduced an extension of Adam optimizer that automatically adjust learning rate by comparing the subsequent values of the cost function during training. The authors empirically demonstrated the benefit of the Eve optimizer on CIFAR convnets, logistic regression and RNN problems. I have the following concerns about the paper - The proposed method is VARIANT to arbitrary shifts and scaling to the cost function. - A more fair comparison with other baseline methods would be using additional exponential decay learning scheduling between the lower and upper threshold of d_t. I suspect 1/d_t just shrinks as an exponential decay from Figure 2. - Three additional hyper-parameters: k, K, _x0008_eta_3. Overall, I think the method has its fundamental flew and the paper offers very limited novelty. There is no theoretical justification on the modification, and it would be good for the authors to discuss the potential failure mode of the proposed method. Furthermore, it is hard for me to follow Section 3.2. 
The writing quality and clarity of the method section can be further improved.", "role": "assistant" } ]
[ "The paper introduced an extension of Adam optimizer that automatically adjust learning rate by comparing the subsequent values of the cost function during training.", "The authors empirically demonstrated the benefit of the Eve optimizer on CIFAR convnets, logistic regression and RNN problems.", "I have the following concerns about the paper - The proposed method is VARIANT to arbitrary shifts and scaling to the cost function.", "- A more fair comparison with other baseline methods would be using additional exponential decay learning scheduling between the lower and upper threshold of d_t.", "I suspect 1/d_t just shrinks as an exponential decay from Figure 2.", "- Three additional hyper-parameters: k, K, _x0008_eta_3.", "Overall, I think the method has its fundamental flew and the paper offers very limited novelty.", "There is no theoretical justification on the modification, and it would be good for the authors to discuss the potential failure mode of the proposed method.", "Furthermore, it is hard for me to follow Section 3.2.", "The writing quality and clarity of the method section can be further improved." ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 8, "praise": 2, "presentation_and_reporting": 2, "results_and_discussion": 4, "suggestion_and_solution": 3, "total": 10 }
0.3
0
0.1
0.8
0.2
0.2
0.4
0.3
10
1
0
1
1
1
1
1
1
2.3
2.157958
0.142042
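The review above describes Eve's feedback signal: a running average d_t of the objective's relative change, clipped between thresholds k and K and smoothed with beta_3, which divides Adam's step size. A hedged reconstruction of that mechanism (the exact formulas are the paper's; this is only a plausible sketch with assumed defaults):

```python
def eve_feedback(f_t, f_prev, d_prev, k=0.1, K=10.0, beta3=0.999):
    # Relative change of the objective between consecutive evaluations.
    r = abs(f_t - f_prev) / min(f_t, f_prev)
    r = min(max(r, k), K)                      # clip to [k, K]
    d_t = beta3 * d_prev + (1.0 - beta3) * r   # running average
    return d_t                                 # effective step: alpha / d_t

print(eve_feedback(0.9, 1.0, d_prev=1.0))      # d_t dips below 1 -> larger step
```

Note how the review's first concern falls out of this form: because r depends on the raw values of f, shifting or rescaling the cost function changes d_t, so the method is not invariant to such transformations.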
Regularizing Neural Networks by Penalizing Confident Output Distributions
We propose regularizing neural networks by penalizing low entropy output distributions. We show that penalizing low entropy output distributions, which has been shown to improve exploration in reinforcement learning, acts as a strong regularizer in supervised learning. We connect our confidence penalty to label smoothing through the direction of the KL divergence between the network's output distribution and the uniform distribution. We exhaustively evaluate our proposed confidence penalty and label smoothing (uniform and unigram) on 6 common benchmarks: image classification (MNIST and Cifar-10), language modeling (Penn Treebank), machine translation (WMT'14 English-to-German), and speech recognition (TIMIT and WSJ). We find that both label smoothing and our confidence penalty improve state-of-the-art models across benchmarks without modifying existing hyper-parameters.
The paper experimentally investigates a slightly modified version of the label smoothing technique for neural network training, and reports results on various tasks. Such a smoothing idea is not new, but was not previously investigated in a wide range of machine learning tasks. Comments: The paper should report the state-of-the-art results for speech recognition tasks (TIMIT, WSJ), even if models are not directly comparable. The error back-propagation of label smoothing through softmax is straightforward and efficient. Is there an efficient solution for BP of the entropy smoothing through softmax? Although the classification accuracy could remain the same, the model will not estimate the true posterior distribution with this kind of smoothing. This might be an issue in complex machine learning problems where the decision is made at a higher level and based on the posterior estimations, e.g. language models in speech recognition. More motivation is necessary for the proposed smoothing.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start by summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Does the work cover a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will point them out and suggest how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how the work will affect the scientific community. This might include the applications of the work and how it might transform the way we study a certain topic. Finally, I will only mention issues that are critical, to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRegularizing Neural Networks by Penalizing Confident Output Distributions\nWe propose regularizing neural networks by penalizing low entropy output distributions. We show that penalizing low entropy output distributions, which has been shown to improve exploration in reinforcement learning, acts as a strong regularizer in supervised learning. We connect our confidence penalty to label smoothing through the direction of the KL divergence between networks output distribution and the uniform distribution. We exhaustively evaluate our proposed confidence penalty and label smoothing (uniform and unigram) on 6 common benchmarks: image classification (MNIST and Cifar-10), language modeling (Penn Treebank), machine translation (WMT*14 English-to-German), and speech recognition (TIMIT and WSJ). We find that both label smoothing and our confidence penalty improve state-of-the-art models across benchmarks without modifying existing hyper-parameters.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper experimentally investigates a slightly modified version of label smoothing technique for neural network training, and reports results on various tasks. Such smoothing idea is not new, but was not investigated previously in wide range of machine learning tasks. Comments: The paper should report the state-of-the-art results for speech recognition tasks (TIMIT, WSJ), even if models are not directly comparable. The error back-propagation of label smoothing through softmax is straightforward and efficient. Is there an efficient solution for BP of the entropy smoothing through softmax? Although the classification accuracy could remain the same, the model will not estimate the true posterior distribution with this kind of smoothing. This might be an issue in complex machine learning problems where the decision is made on higher level and based on the posterior estimations, e.g. language models in speech recognition. More motivation is necessary for the proposed smoothing.", "role": "assistant" } ]
[ "The paper experimentally investigates a slightly modified version of label smoothing technique for neural network training, and reports results on various tasks.", "Such smoothing idea is not new, but was not investigated previously in wide range of machine learning tasks.", "Comments: The paper should report the state-of-the-art results for speech recognition tasks (TIMIT, WSJ), even if models are not directly comparable.", "The error back-propagation of label smoothing through softmax is straightforward and efficient.", "Is there an efficient solution for BP of the entropy smoothing through softmax?", "Although the classification accuracy could remain the same, the model will not estimate the true posterior distribution with this kind of smoothing.", "This might be an issue in complex machine learning problems where the decision is made on higher level and based on the posterior estimations, e.g.", "language models in speech recognition.", "More motivation is necessary for the proposed smoothing." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 9, "praise": 1, "presentation_and_reporting": 0, "results_and_discussion": 6, "suggestion_and_solution": 2, "total": 9 }
0.111111
0
0.111111
1
0.111111
0
0.666667
0.222222
9
1
0
1
1
1
0
1
1
2.222222
1.969703
0.252519
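For reference, the two regularizers compared in the record above reduce to simple per-example losses: cross-entropy minus a scaled output entropy (the confidence penalty) and cross-entropy against a smoothed target (label smoothing). A minimal sketch, not the paper's code; `beta` and `eps` are illustrative hyper-parameter names:

```python
import math

def cross_entropy(p, y):
    # p: softmax output distribution, y: index of the true class.
    return -math.log(p[y])

def confidence_penalty_loss(p, y, beta=0.1):
    # Penalize low entropy, i.e. subtract beta * H(p) from the loss.
    entropy = -sum(pi * math.log(pi) for pi in p if pi > 0)
    return cross_entropy(p, y) - beta * entropy

def label_smoothing_loss(p, y, eps=0.1):
    # Cross-entropy against (1 - eps) * one_hot(y) + eps * uniform.
    n = len(p)
    target = [eps / n + (1 - eps) * (1.0 if i == y else 0.0) for i in range(n)]
    return -sum(t * math.log(pi) for t, pi in zip(target, p))

p = [0.7, 0.2, 0.1]
print(confidence_penalty_loss(p, 0), label_smoothing_loss(p, 0))
```

The two differ only in the direction of the KL term they add, which is the connection the abstract draws.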
Options Discovery with Budgeted Reinforcement Learning
We consider the problem of learning hierarchical policies for Reinforcement Learning able to discover options, an option corresponding to a sub-policy over a set of primitive actions. Different models have been proposed during the last decade that usually rely on a predefined set of options. We specifically address the problem of automatically discovering options in decision processes. We describe a new RL framework called Bi-POMDP, and a new learning model called Budgeted Option Neural Network (BONN) able to discover options based on a budgeted learning objective. Since Bi-POMDPs are more general than POMDPs, our model can also be used to discover options for classical RL tasks. The BONN model is evaluated on different classical RL problems, demonstrating interesting quantitative and qualitative results.
The paper tackles the very important problem of learning options from data. The introduction of the budget constraint is an interesting twist on this problem, which I had not seen before (though other methods apply other constraints). I must say I’m not very convinced by the need to introduce the Bi-POMDP framework, where the conventional POMDP framework would do. In discussions, the authors suggest this makes for simpler comparison with RL models, but I find that it rather obscures the link to POMDP models. The proposed method makes an interesting contribution, distinct from the existing literature as far as I know. The extension to discover a discrete set of options is a nice feature for practical applications. In terms of the algorithm itself, I am actually unclear about lines 4 & 6. At line 4, I don’t know how sigma_t is computed. Can you give the precise equation? At line 6, I don’t know how the new option o_t is generated. Again, can you give the precise procedure? The paper contains several empirical results, on contrasting simulated domains. For some of these domains, such as CartPole, it’s really not clear that options are necessary. In my mind, the lack of comparison to other options learning methods is a limitation of the current draft.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start by summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Does the work cover a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will point them out and suggest how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how the work will affect the scientific community. This might include the applications of the work and how it might transform the way we study a certain topic. Finally, I will only mention issues that are critical, to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOptions Discovery with Budgeted Reinforcement Learning\nWe consider the problem of learning hierarchical policies for Reinforcement Learning able to discover options, an option corresponding to a sub-policy over a set of primitive actions. Different models have been proposed during the last decade that usually rely on a predefined set of options. We specifically address the problem of automatically discovering options in decision processes. We describe a new RL learning framework called Bi-POMDP, and a new learning model called Budgeted Option Neural Network (BONN) able to discover options based on a budgeted learning objective. Since Bi-POMDP are more general than POMDP, our model can also be used to discover options for classical RL tasks. The BONN model is evaluated on different classical RL problems, demonstrating both quantitative and qualitative interesting results.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper tackles the very important problem of learning options from data. The introduction of the budget constraint is an interesting twist on this problem, which I had not seen before (though other methods apply other constraints.) I must say I’m not very convinced by the need to introduce the Bi-POMDP framework, where the conventional POMDP framework would do. In discussions, the authors suggest this makes for simpler comparison with RL models, but I find that it rather obscures the link to POMDP models. The proposed method makes an interesting contribution, distinct from the existing literature as far as I know. The extension to discover a discrete set of options is a nice feature for practical applications. In terms of the algorithm itself, I am actually unclear about lines 4 & 6. At line 4, I don’t know how sigma_t is computed. Can you give the precise equation? At line 6, I don’t know how the new option o_t is generated. Again, can you give the precise procedure? The paper contains several empirical results, on contrasting simulated domains. 
For some of these domains, such as CartPole, it’s really not clear that options are necessary. In my mind, the lack of comparison to other options learning methods is a limitation of the current draft.", "role": "assistant" } ]
[ "The paper tackles the very important problem of learning options from data.", "The introduction of the budget constraint is an interesting twist on this problem, which I had not seen before (though other methods apply other constraints.)", "I must say I’m not very convinced by the need to introduce the Bi-POMDP framework, where the conventional POMDP framework would do.", "In discussions, the authors suggest this makes for simpler comparison with RL models, but I find that it rather obscures the link to POMDP models.", "The proposed method makes an interesting contribution, distinct from the existing literature as far as I know.", "The extension to discover a discrete set of options is a nice feature for practical applications.", "In terms of the algorithm itself, I am actually unclear about lines 4 & 6.", "At line 4, I don’t know how sigma_t is computed.", "Can you give the precise equation?", "At line 6, I don’t know how the new option o_t is generated.", "Again, can you give the precise procedure?", "The paper contains several empirical results, on contrasting simulated domains.", "For some of these domains, such as CartPole, it’s really not clear that options are necessary.", "In my mind, the lack of comparison to other options learning methods is a limitation of the current draft." ]
{ "criticism": 8, "example": 3, "importance_and_relevance": 4, "materials_and_methods": 13, "praise": 4, "presentation_and_reporting": 1, "results_and_discussion": 2, "suggestion_and_solution": 3, "total": 14 }
0.571429
0.214286
0.285714
0.928571
0.285714
0.071429
0.142857
0.214286
14
1
1
1
1
1
1
1
1
2.714286
2.698503
0.015782
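A note on the derived columns: across every record in this dump, the eight per-category ratios are the category counts divided by total, each has_* flag is 1 exactly when the corresponding count is nonzero, reward_value is the sum of the eight ratios, and reward_value_length_adjusted is reward_value minus length_penalty. A minimal Python sketch of that bookkeeping follows; the field names mirror the dump, while length_penalty is taken as a given input because its formula is not visible in the records themselves.

```python
# Sketch of the per-record bookkeeping observed in this dump (an assumption
# inferred from the rows, not a released script): ratio = count / total,
# has_* = count > 0, reward = sum of ratios, adjusted = reward - length_penalty.

CATEGORIES = [
    "criticism", "example", "importance_and_relevance",
    "materials_and_methods", "praise", "presentation_and_reporting",
    "results_and_discussion", "suggestion_and_solution",
]

def annotate(counts: dict, length_penalty: float) -> dict:
    total = counts["total"]  # number of review sentences
    row = {c: counts[c] / total for c in CATEGORIES}
    row.update({f"has_{c}": int(counts[c] > 0) for c in CATEGORIES})
    row["reward_value"] = sum(counts[c] for c in CATEGORIES) / total
    row["reward_value_length_adjusted"] = row["reward_value"] - length_penalty
    row["length_penalty"] = length_penalty
    return row

# The record above: criticism 8/14 = 0.571429, reward 38/14 = 2.714286,
# adjusted 2.714286 - 0.015782 = 2.698504, matching the stored values
# up to rounding.
row = annotate(
    {"criticism": 8, "example": 3, "importance_and_relevance": 4,
     "materials_and_methods": 13, "praise": 4, "presentation_and_reporting": 1,
     "results_and_discussion": 2, "suggestion_and_solution": 3, "total": 14},
    length_penalty=0.015782,
)
```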
DRAGNN: A Transition-Based Framework for Dynamically Connected Neural Networks
In this work, we present a compact, modular framework for constructing new recurrent neural architectures. Our basic module is a new generic unit, the Transition Based Recurrent Unit (TBRU). In addition to hidden layer activations, TBRUs have discrete state dynamics that allow network connections to be built dynamically as a function of intermediate activations. By connecting multiple TBRUs, we can extend and combine commonly used architectures such as sequence-to-sequence, attention mechanisms, and recursive tree-structured models. A TBRU can also serve as both an encoder for downstream tasks and as a decoder for its own task simultaneously, resulting in more accurate multi-task learning. We call our approach Dynamic Recurrent Acyclic Graphical Neural Networks, or DRAGNN. We show that DRAGNN is significantly more accurate and efficient than seq2seq with attention for syntactic dependency parsing and yields more accurate multi-task learning for extractive summarization tasks.
The authors present a general framework for defining a wide variety of recurrent neural network architectures, including seq2seq models, tree-structured models, attention, and a new family of dynamically connected architectures. The framework defines a new, general-purpose recurrent unit called the TBRU, which takes a transition system, defining and constraining its inputs and outputs, an input function which defines the mapping between raw inputs and fixed-width vector representations, a recurrence function that defines the inputs to each recurrent step as a function of the current state, and an RNN cell that computes the output from the input (fixed and recurrent). Many example instantiations of this framework are provided, including sequential tagging RNNs, Google’s Parsey McParseface parser, encoder/decoder networks, tree LSTMs and less familiar examples that demonstrate the power of this framework. The most interesting contribution of this work is the ease by which it can be used to incorporate dynamic recurrent connections through the definition of the transition system. In particular, this paper explores the application of these dynamic connections to syntactic dependency parsing, both as a standalone task, and by multitasking parsing with extractive summarization, using the same compositional phrase representations as features for the parser and summarization (previous work used discrete parse features), which is particularly simple/elegant in this framework. In experimental results, the authors demonstrate that such multitasking leads to more accurate summarization models, and using the framework to incorporate more structure into existing parsing models also leads to increased accuracy with no big-oh efficiency loss (compared with e.g. attention). The “raison d’etre,” in particular the example, perhaps described even more thoroughly/explicitly, should be made as clear as possible as soon as possible. This is the most important contribution, but it gets lost in the description and presentation as a framework — emphasizing that attention, seq2seq, etc can be represented in the framework is distracting and makes it seem less novel than it is. AnonReviewer6 clearly missed this point, as did I in my first pass over the paper. To get this idea across and to emphasize the benefits of this representation, I’d love to see more detailed analysis of these representations and their importance to achieving your experimental results. I think it would also be helpful to emphasize the difference between a stack LSTM and Example 6. Overall I think this paper presents a valuable contribution, though the exposition could be improved and analysis of experimental results expanded.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDRAGNN: A Transition-Based Framework for Dynamically Connected Neural Networks\nIn this work, we present a compact, modular framework for constructing new recurrent neural architectures. Our basic module is a new generic unit, the Transition Based Recurrent Unit (TBRU). In addition to hidden layer activations, TBRUs have discrete state dynamics that allow network connections to be built dynamically as a function of intermediate activations. By connecting multiple TBRUs, we can extend and combine commonly used architectures such as sequence-to-sequence, attention mechanisms, and recursive tree-structured models. A TBRU can also serve as both an {em encoder} for downstream tasks and as a {em decoder} for its own task simultaneously, resulting in more accurate multi-task learning. We call our approach Dynamic Recurrent Acyclic Graphical Neural Networks, or DRAGNN. We show that DRAGNN is significantly more accurate and efficient than seq2seq with attention for syntactic dependency parsing and yields more accurate multi-task learning for extractive summarization tasks.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors present a general framework for defining a wide variety of recurrent neural network architectures, including seq2seq models, tree-structured models, attention, and a new family of dynamically connected architectures. The framework defines a new, general-purpose recurrent unit called the TBRU, which takes a transition system, defining and constraining its inputs and outputs, and input function which defines the mapping between raw inputs and fixed-width vector representations, and recurrence function that defines the inputs to each recurrent step as a function of the current state, and an RNN cell that computes the output from the input (fixed and recurrent). 
Many example instantiations of this framework are provided, including sequential tagging RNNs, Google’s Parsey McParseface parser, encoder/decoder networks, tree LSTMs and less familiar examples that demonstrate the power of this framework. The most interesting contribution of this work is the ease by which it can be used to incorporate dynamic recurrent connections through the definition of the transition system. In particular, this paper explores the application of these dynamic connections to syntactic dependency parsing, both as a standalone task, and by multitasking parsing with extractive summarization, using the same compositional phrase representations as features for the parser and summarization (previous work used discrete parse features), which is particularly simple/elegant in this framework. In experimental results, the authors demonstrate that such multitasking leads to more accurate summarization models, and using the framework to incorporate more structure into existing parsing models also leads to increased accuracy with no big-oh efficiency loss (compared with e.g. attention). The “raison d’etre,” in particular the example, perhaps described even more thoroughly/explicitly, should be made as clear as possible as soon as possible. This is the most important contribution, but it gets lost in the description and presentation as a framework — emphasizing that attention, seq2seq, etc can be represented in the framework is distracting and makes it seem less novel than it is. AnonReviewer6 clearly missed this point, as did I in my first pass over the paper. To get this idea across and to emphasize the benefits of this representation, I’d love to see more detailed analysis of these representations and their importance to achieving your experimental results. I think it would also be helpful to emphasize the difference between a stack LSTM and Example 6. Overall I think this paper presents a valuable contribution, though the exposition could be improved and analysis of experimental results expanded.", "role": "assistant" } ]
[ "The authors present a general framework for defining a wide variety of recurrent neural network architectures, including seq2seq models, tree-structured models, attention, and a new family of dynamically connected architectures.", "The framework defines a new, general-purpose recurrent unit called the TBRU, which takes a transition system, defining and constraining its inputs and outputs, and input function which defines the mapping between raw inputs and fixed-width vector representations, and recurrence function that defines the inputs to each recurrent step as a function of the current state, and an RNN cell that computes the output from the input (fixed and recurrent).", "Many example instantiations of this framework are provided, including sequential tagging RNNs, Google’s Parsey McParseface parser, encoder/decoder networks, tree LSTMs and less familiar examples that demonstrate the power this framework.", "The most interesting contribution of this work is the ease by which it can be used to incorporate dynamic recurrent connections through the definition of the transition system.", "In particular, this paper explores the application of these dynamic connections to syntactic dependency parsing, both as a standalone task, and by multitasking parsing with extractive summarization, using the same compositional phrase representations as features for the parser and summarization (previous work used discrete parse features), which is particularly simple/elegant in this framework.", "In experimental results, the authors demonstrate that such multitasking leads to more accurate summarization models, and using the framework to incorporate more structure into existing parsing models also leads to increased accuracy with no big-oh efficiency loss (compared with e.g.", "attention).", "The “raison d’etre,” in particular the example, perhaps described even more thoroughly/explicitly, should be made as clear as possible as soon as possible.", "This is the most important contribution, but it gets lost in the description and presentation as a framework — emphasizing that attention, seq2seq, etc can be represented in the framework is distracting and makes it seem less novel than it is.", "AnonReviewer6 clearly missed this point, as did I in my first pass over the paper.", "To get this idea across and to emphasize the benefits of this representation, I’d love to see more detailed analysis of these representations and their importance to achieving your experimental results.", "I think it would also be helpful to emphasize the difference between a stack LSTM and Example 6.", "Overall I think this paper presents a valuable contribution, though the exposition could be improved and analysis of experimental results expanded." ]
{ "criticism": 2, "example": 1, "importance_and_relevance": 5, "materials_and_methods": 7, "praise": 3, "presentation_and_reporting": 3, "results_and_discussion": 3, "suggestion_and_solution": 4, "total": 13 }
0.153846
0.076923
0.384615
0.538462
0.230769
0.230769
0.230769
0.307692
13
1
1
1
1
1
1
1
1
2.153846
2.153846
0
Skip-graph: Learning graph embeddings with an encoder-decoder model
In this work, we study the problem of feature representation learning for graph-structured data. Many of the existing work in the area are task-specific and based on supervised techniques. We study a method for obtaining a generic feature representation for a graph using an unsupervised approach. The neural encoder-decoder model is a method that has been used in the natural language processing domain to learn feature representations of sentences. In our proposed approach, we train the encoder-decoder model to predict the random walk sequence of neighboring regions in a graph given a random walk along a particular region. The goal is to map subgraphs — as represented by their random walks — that are structurally and functionally similar to nearby locations in feature space. We evaluate the learned graph vectors using several real-world datasets on the graph classification task. The proposed model is able to achieve good results against state-of-the-art techniques.
This paper proposes an unsupervised graph embedding learning method based on random walk and skip-thought model. They show promising results compared to several competitors on four chemical compound datasets. Strength: 1, The idea of learning the graph embedding by applying skip-thought model to random walk sequences is interesting. 2, The paper is well organized. Weakness: 1, As the current datasets are small (e.g., the average number of nodes per graph is around 30), it would be great to explore larger graph datasets to further investigate the method. 2, Comparisons with recent work like LINE and node2vec are missing. You can compare them easily by applying the same aggregation strategy to their node embeddings. Detailed Questions: 1, The description about how to split the random walk sequence into 3 sub-sequences is missing. Also, the line “l_min >= (n_k - 1), … >= l_max” in section 2.2.2 is a mistake. 2, Can you provide the standard deviations of the 5-fold cross validation in Table 2? I’m curious about how stable the algorithm is.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSkip-graph: Learning graph embeddings with an encoder-decoder model\nIn this work, we study the problem of feature representation learning for graph-structured data. Many of the existing work in the area are task-specific and based on supervised techniques. We study a method for obtaining a generic feature representation for a graph using an unsupervised approach. The neural encoder-decoder model is a method that has been used in the natural language processing domain to learn feature representations of sentences. In our proposed approach, we train the encoder-decoder model to predict the random walk sequence of neighboring regions in a graph given a random walk along a particular region. The goal is to map subgraphs — as represented by their random walks — that are structurally and functionally similar to nearby locations in feature space. We evaluate the learned graph vectors using several real-world datasets on the graph classification task. The proposed model is able to achieve good results against state-of- the-art techniques.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes an unsupervised graph embedding learning method based on random walk and skip-thought model. They show promising results compared to several competitors on four chemical compound datasets. Strength: 1, The idea of learning the graph embedding by applying skip-thought model to random walk sequences is interesting. 2, The paper is well organized. Weakness: 1, As the current datasets are small (e.g., the average number of nodes per graph is around 30), it would be great to explore larger graph datasets to further investigate the method. 2, Comparisons with recent work like LINE and node2vec are missing. You can compare them easily by applying the same aggregation strategy to their node embeddings. Detailed Questions: 1, The description about how to split the random walk sequence into 3 sub-sequences is missing. Also, the line “l_min >= (n_k - 1), … >= l_max” in section 2.2.2 is a mistake. 
2, Can you provide the standard deviations of the 5-fold cross validation in Table 2? I’m curious about how stable the algorithm is.", "role": "assistant" } ]
[ "This paper proposes an unsupervised graph embedding learning method based on random walk and skip-thought model.", "They show promising results compared to several competitors on four chemical compound datasets.", "Strength: 1, The idea of learning the graph embedding by applying skip-thought model to random walk sequences is interesting.", "2, The paper is well organized.", "Weakness: 1, As the current datasets are small (e.g., the average number of nodes per graph is around 30), it would be great to explore larger graph datasets to further investigate the method.", "2, Comparisons with recent work like LINE and node2vec are missing.", "You can compare them easily by applying the same aggregation strategy to their node embeddings.", "Detailed Questions: 1, The description about how to split the random walk sequence into 3 sub-sequences is missing.", "Also, the line “l_min >= (n_k - 1), … >= l_max” in section 2.2.2 is a mistake.", "2, Can you provide the standard deviations of the 5-fold cross validation in Table 2?", "I’m curious about how stable the algorithm is." ]
{ "criticism": 3, "example": 2, "importance_and_relevance": 1, "materials_and_methods": 8, "praise": 3, "presentation_and_reporting": 3, "results_and_discussion": 2, "suggestion_and_solution": 3, "total": 11 }
0.272727
0.181818
0.090909
0.727273
0.272727
0.272727
0.181818
0.272727
11
1
1
1
1
1
1
1
1
2.272727
2.209598
0.06313
Skip-graph: Learning graph embeddings with an encoder-decoder model
In this work, we study the problem of feature representation learning for graph-structured data. Many of the existing work in the area are task-specific and based on supervised techniques. We study a method for obtaining a generic feature representation for a graph using an unsupervised approach. The neural encoder-decoder model is a method that has been used in the natural language processing domain to learn feature representations of sentences. In our proposed approach, we train the encoder-decoder model to predict the random walk sequence of neighboring regions in a graph given a random walk along a particular region. The goal is to map subgraphs — as represented by their random walks — that are structurally and functionally similar to nearby locations in feature space. We evaluate the learned graph vectors using several real-world datasets on the graph classification task. The proposed model is able to achieve good results against state-of-the-art techniques.
Authors take the skip-graph architecture (Kiros 2015) and apply it to classifying labeled graphs (molecular graphs). They do it by creating many sentences by walking the graph randomly, and asking the model to predict previous part and next part from the middle part. Activations of the decoder part of this model on a walk generated from a new graph are used as features for a binary classifier used to predict whether the molecule has anti-cancer properties. Paper is well written, except that evaluation section is missing details of how the embedding is used for actual classification (i.e., what classifier is used). Unfortunately, I’m not familiar with the dataset and how hard it is to achieve the results they demonstrate; that would be the important factor to weigh in the paper’s acceptance.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSkip-graph: Learning graph embeddings with an encoder-decoder model\nIn this work, we study the problem of feature representation learning for graph-structured data. Many of the existing work in the area are task-specific and based on supervised techniques. We study a method for obtaining a generic feature representation for a graph using an unsupervised approach. The neural encoder-decoder model is a method that has been used in the natural language processing domain to learn feature representations of sentences. In our proposed approach, we train the encoder-decoder model to predict the random walk sequence of neighboring regions in a graph given a random walk along a particular region. The goal is to map subgraphs — as represented by their random walks — that are structurally and functionally similar to nearby locations in feature space. We evaluate the learned graph vectors using several real-world datasets on the graph classification task. The proposed model is able to achieve good results against state-of- the-art techniques.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nAuthors take the skip-graph architecture (Kiros 2015) and apply it to classifying labeled graphs (molecular graphs). They do it by creating many sentences by walking the graph randomly, and asking the model to predict previous part and next part from the middle part. Activations of the decoder part of this model on a walk generated from a new graph are used as features for a binary classifier use to predict whether the molecule has anti-cancer properties. Paper is well written, except that evaluation section is missing details of how the embedding is used for actual classification (ie, what classifier is used) Unfortunately I*m not familiar with the dataset and how hard it is to achieve the results they demonstrate, that would be the important factor to weight on the papers acceptance.", "role": "assistant" } ]
[ "Authors take the skip-graph architecture (Kiros 2015) and apply it to classifying labeled graphs (molecular graphs).", "They do it by creating many sentences by walking the graph randomly, and asking the model to predict previous part and next part from the middle part.", "Activations of the decoder part of this model on a walk generated from a new graph are used as features for a binary classifier use to predict whether the molecule has anti-cancer properties.", "Paper is well written, except that evaluation section is missing details of how the embedding is used for actual classification (ie, what classifier is used) Unfortunately I*m not familiar with the dataset and how hard it is to achieve the results they demonstrate, that would be the important factor to weight on the papers acceptance." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 4, "praise": 1, "presentation_and_reporting": 3, "results_and_discussion": 1, "suggestion_and_solution": 0, "total": 4 }
0.25
0
0
1
0.25
0.75
0.25
0
4
1
0
0
1
1
1
1
0
2.5
1.221623
1.278377
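This record also illustrates the flag columns directly: the example, importance_and_relevance, and suggestion_and_solution counts are 0, so their has_* flags are 0 while the rest are 1, and the adjusted reward again checks out as 2.5 - 1.278377 = 1.221623. Comparing across records, the penalty is 0 at 13 sentences and grows as the sentence count moves away from that (0.015782 at 14, 0.06313 at 11, 1.278377 at 4), which suggests a length penalty centered on a typical review length, though the dump itself does not state the formula.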
DyVEDeep: Dynamic Variable Effort Deep Neural Networks
Deep Neural Networks (DNNs) have advanced the state-of-the-art on a variety of machine learning tasks and are deployed widely in many real-world products. However, the compute and data requirements demanded by large-scale DNNs remains a significant challenge. In this work, we address this challenge in the context of DNN inference. We propose Dynamic Variable Effort Deep Neural Networks (DyVEDeep), which exploit the heterogeneity in the characteristics of inputs to DNNs to improve their compute efficiency while maintaining the same classification accuracy. DyVEDeep equips DNNs with dynamic effort knobs, which in course of processing an input, identify how critical a group of computations are to classify the input. DyVEDeep dynamically focuses its compute effort only on the critical computations, while the skipping/approximating the rest. We propose 3 effort knobs that operate at different levels of granularity viz. neuron, feature and layer levels. We build DyVEDeep versions for 5 popular image recognition benchmarks on 3 image datasets---MNIST, CIFAR and ImageNet. Across all benchmarks, DyVEDeep achieves 2.1X-2.6X reduction in number of scalar operations, which translates to 1.9X-2.3X performance improvement over a Caffe-based sequential software implementation, for negligible loss in accuracy.
This work proposes a number of approximations for speeding up feed-forward network computations at inference time. Unlike much of the previous work in this area which tries to compress a large network, the authors propose algorithms that decide whether to approximate computations for each particular input example. Speeding up inference is an important problem and this work takes a novel approach. The presentation is exceptionally clear, the diagrams are very beautiful, the ideas are interesting, and the experiments are good. This is a high-quality paper. I especially enjoyed the description of the different methods proposed (SPET, SDSS, SFMA) to exploit patterns in the classifier. My main concern is that the significance of this work is limited because of the additional complexity and computational costs of using these approximations. In the experiments, the DyVEDeep approach was compared to serial implementations of four large classification models --- inference in these models is orders of magnitude faster on systems that support parallelization. I assume that DyVEDeep has little-to-no performance advantage on a system that allows parallelization, and so anyone looking to speed up their inference on a serial system would want to see a comparison between this approach and the model-compression approaches. Thus, I am not sure how much of an impact this approach can have in its current state. Suggestions: - I wondered what (if any) bounds could be made on the approximation errors of the proposed methods?
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDyVEDeep: Dynamic Variable Effort Deep Neural Networks\nDeep Neural Networks (DNNs) have advanced the state-of-the-art on a variety of machine learning tasks and are deployed widely in many real-world products. However, the compute and data requirements demanded by large-scale DNNs remains a significant challenge. In this work, we address this challenge in the context of DNN inference. We propose Dynamic Variable Effort Deep Neural Networks (DyVEDeep), which exploit the heterogeneity in the characteristics of inputs to DNNs to improve their compute efficiency while maintaining the same classification accuracy. DyVEDeep equips DNNs with dynamic effort knobs, which in course of processing an input, identify how critical a group of computations are to classify the input. DyVEDeep dynamically focuses its compute effort only on the critical computations, while the skipping/approximating the rest. We propose 3 effort knobs that operate at different levels of granularity viz. neuron, feature and layer levels. We build DyVEDeep versions for 5 popular image recognition benchmarks on 3 image datasets---MNIST, CIFAR and ImageNet. Across all benchmarks, DyVEDeep achieves 2.1X-2.6X reduction in number of scalar operations, which translates to 1.9X-2.3X performance improvement over a Caffe-based sequential software implementation, for negligible loss in accuracy.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis work proposes a number of approximations for speeding up feed-forward network computations at inference time. Unlike much of the previous work in this area which tries to compress a large network, the authors propose algorithms that decide whether to approximate computations for each particular input example. Speeding up inference is an important problem and this work takes a novel approach. The presentation is exceptionally clear, the diagrams are very beautiful, the ideas are interesting, and the experiments are good. This is a high-quality paper. 
I especially enjoyed the description of the different methods proposed (SPET, SDSS, SFMA) to exploit patterns in the classifier. My main concern is that the significance of this work is limited because of the additional complexity and computational costs of using these approximations. In the experiments, the DyVEDeep approach was compared to serial implementations of four large classification models --- inference in these models is orders of magnitude faster on systems that support parallelization. I assume that DyVEDeep has little-to-no performance advantage on a system that allows parallelization, and so anyone looking to speed up their inference on a serial system would want to see a comparison between this approach and the model-compression approaches. Thus, I am not sure how much of an impact this approach can have in its current state. Suggestions: - I wondered what (if any) bounds could be made on the approximation errors of the proposed methods?", "role": "assistant" } ]
[ "This work proposes a number of approximations for speeding up feed-forward network computations at inference time.", "Unlike much of the previous work in this area which tries to compress a large network, the authors propose algorithms that decide whether to approximate computations for each particular input example.", "Speeding up inference is an important problem and this work takes a novel approach.", "The presentation is exceptionally clear, the diagrams are very beautiful, the ideas are interesting, and the experiments are good.", "This is a high-quality paper.", "I especially enjoyed the description of the different methods proposed (SPET, SDSS, SFMA) to exploit patterns in the classifer.", "My main concern is that the significance of this work is limited because of the additional complexity and computational costs of using these approximations.", "In the experiments, the DyVEDeep approach was compared to serial implementations of four large classification models --- inference in these models is order of magnitudes faster on systems that support parallelization.", "I assume that DyVEDeep has little-to-no performance advantage on a system that allows parallelization, and so anyone looking to speed up their inference on a serial system would want to see a comparison between this approach and the model-compression approaches.", "Thus, I am not sure how much of an impact this approach can have in it*s current state.", "Suggestions: -I wondered what (if any) bounds could be made on the approximation errors of the proposed methods?" ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 4, "materials_and_methods": 10, "praise": 4, "presentation_and_reporting": 1, "results_and_discussion": 6, "suggestion_and_solution": 1, "total": 11 }
0.181818
0
0.363636
0.909091
0.363636
0.090909
0.545455
0.090909
11
1
0
1
1
1
1
1
1
2.545455
2.482325
0.06313
Linear Time Complexity Deep Fourier Scattering Network and Extension to Nonlinear Invariants
In this paper we propose a scalable version of a state-of-the-art deterministic time-invariant feature extraction approach based on consecutive changes of basis and nonlinearities, namely, the scattering network. The first focus of the paper is to extend the scattering network to allow the use of higher order nonlinearities as well as extracting nonlinear and Fourier based statistics leading to the required invariants of any inherently structured input. In order to reach fast convolutions and to leverage the intrinsic structure of wavelets, we derive our complete model in the Fourier domain. In addition of providing fast computations, we are now able to exploit sparse matrices due to extremely high sparsity well localized in the Fourier domain. As a result, we are able to reach a true linear time complexity with inputs in the Fourier domain allowing fast and energy efficient solutions to machine learning tasks. Validation of the features and computational results will be presented through the use of these invariant coefficients to perform classification on audio recordings of bird songs captured in multiple different soundscapes. In the end, the applicability of the presented solutions to deep artificial neural networks is discussed.
I find the general direction of the work is promising but, in my opinion, the paper has three main drawbacks. While the motivation and overall idea seem very reasonable, the derivation is not convincing mathematically. The experiments are limited and the presentation needs significant improvement. The writing and wording are in general poorly structured to the point that it is sometimes difficult to follow the proposed ideas. The overall organization needs improvement and the connection between sections is not properly established. The paper could be significantly improved by simply re-writing it. I’m not fully convinced by the motivation for the proposed non-linearity (|c|^2), as described on page 5. The authors argue that (Waldspurger, 2016) suggests that higher order nonlinearities might be beneficial for sparsity. But unless I’m missing something, that work seems to suggest that in the general case higher order nonlinearities can be neglected. Could you please comment on this? On the other hand, adding a second order term to the descriptor seems an interesting direction, as long as stability to small variations is preserved (which should be shown experimentally). The experimental section is rather limited. The paper would be stronger with a thorough numerical evaluation. The presented results, in my opinion, do not show convincingly a clear advantage of the proposed method over a standard implementation of the scattering transform. In order to show the merits of the proposed approach, it would be really helpful to directly compare running times and compression rates. Questions: - Can you show empirically that the proposed higher order nonlinearity produces sparser representations than the complex modulus? Other minor issues: - The proof of Section 2.1 should be preceded by a clear statement in the form of a proposition - “Hadamart” -> Hadamard - “Valid set” -> Validation set - “nonzeros coefficients” -> nonzero coefficients - Figure 3 is difficult to understand. Please provide more details. - Figure 5 is supposed to show a comparison to a standard implementation of the Scattering network, but there doesn’t seem to be such a comparison in that figure. Please explain. - Please verify the references. The first reference states “MALLAT”.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLinear Time Complexity Deep Fourier Scattering Network and Extension to Nonlinear Invariants\nIn this paper we propose a scalable version of a state-of-the-art deterministic time- invariant feature extraction approach based on consecutive changes of basis and nonlinearities, namely, the scattering network. The first focus of the paper is to extend the scattering network to allow the use of higher order nonlinearities as well as extracting nonlinear and Fourier based statistics leading to the required in- variants of any inherently structured input. In order to reach fast convolutions and to leverage the intrinsic structure of wavelets, we derive our complete model in the Fourier domain. In addition of providing fast computations, we are now able to exploit sparse matrices due to extremely high sparsity well localized in the Fourier domain. As a result, we are able to reach a true linear time complexity with in- puts in the Fourier domain allowing fast and energy efficient solutions to machine learning tasks. Validation of the features and computational results will be pre- sented through the use of these invariant coefficients to perform classification on audio recordings of bird songs captured in multiple different soundscapes. In the end, the applicability of the presented solutions to deep artificial neural networks is discussed.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nI find the general direction of the work is promising but, in my opinion, the paper has three main drawback. While the motivation and overall idea seem very reasonable, the derivation is not convincing mathematically. The experiments are limited and the presentation needs significant improvement. The writing and wording are in general poorly structured to the point that it is sometimes difficult to follow the proposed ideas. The overall organization needs improvement and the connection between sections is not properly established. The paper could be significantly improved by simply re-writing it. 
I’m not fully convinced by the motivation for the proposed non-linearity (|c|^2), as described on page 5. The authors argue that (Waldspurger, 2016) suggests that higher order nonlinearities might be beneficial for sparsity. But unless I’m missing something, that work seems to suggest that in the general case higher order nonlinearities can be neglected. Could you please comment on this? On the other hand, adding a second order term to the descriptor seems an interesting direction, as long as stability to small variations is preserved (which should be shown experimentally). The experimental section is rather limited. The paper would be stronger with a thorough numerical evaluation. The presented results, in my opinion, do not show convincingly a clear advantage of the proposed method over a standard implementation of the scattering transform. In order to show the merits of the proposed approach, it would be really helpful to directly compare running times and compression rates. Questions: - Can you show empirically that the proposed higher order nonlinearity produces sparser representations than the complex modulus? Other minor issues: - The proof of Section 2.1 should be preceded by a clear statement in the form of a proposition - “Hadamart” -> Hadamard - “Valid set” -> Validation set - “nonzeros coefficients” -> nonzero coefficients - Figure 3 is difficult to understand. Please provide more details. - Figure 5 is supposed to show a comparison to a standard implementation of the Scattering network, but there doesn’t seem to be such a comparison in that figure. Please explain. - Please verify the references. The first reference states “MALLAT”.", "role": "assistant" } ]
[ "I find the general direction of the work is promising but, in my opinion, the paper has three main drawback.", "While the motivation and overall idea seem very reasonable, the derivation is not convincing mathematically.", "The experiments are limited and the presentation needs significant improvement.", "The writing and wording are in general poorly structured to the point that it is sometimes difficult to follow the proposed ideas.", "The overall organization needs improvement and the connection between sections is not properly established.", "The paper could be significantly improved by simply re-writing it.", "I*m not fully convinced by the motivation for the proposed non-linearity (|c|^2), as described on page 5.", "The authors argue that (Waldspurger, 2016) suggests that higher order nonlinearities might be beneficial for sparsity.", "But unless I*m missing something, that work seems to suggest that in the general case higher order nonlinearities can be neglected.", "Could you please comment on this?", "On the other hand, adding a second order term to the descriptor seems an interesting direction, as long as stability to small variations is preserved (which should be shown experimentally) The experimental section is rather limited.", "The paper would be stronger with a thorough numerical evaluation.", "The presented results, in my opinion, do not show convincingly a clear advantage of the proposed method over a standard implementation of the scattering transform.", "In order to show the merits of the proposed approach, it would be really helpful to directly compare running times and compression rates.", "Questions: - Can you show empirically that the proposed higher order nonlinearity produces sparser representations than the complex modulus?", "Other minor issues: - The proof of Section 2.1, should be preceded by a clear statement in the form of a proposition - *Hadamart* -> Hadamard - *Valid set* -> Validation set - *nonzeros coefficients* -> nonzero coefficients - Figure 3 is difficult to understand.", "Please provide more details.", "- Figure 5 is supposed to show a comparison to a standard implementation of the Scattering network, but it doesn*t seem to be such comparison in that figure.", "Please explain.", "- Please verify the references.", "The first reference states *MALLAT*." ]
{ "criticism": 9, "example": 3, "importance_and_relevance": 2, "materials_and_methods": 5, "praise": 4, "presentation_and_reporting": 9, "results_and_discussion": 4, "suggestion_and_solution": 9, "total": 21 }
0.428571
0.142857
0.095238
0.238095
0.190476
0.428571
0.190476
0.428571
21
1
1
1
1
1
1
1
1
2.142857
1.132782
1.010076
Boosted Residual Networks
In this paper we present a new ensemble method, called Boosted Residual Networks, which builds an ensemble of Residual Networks by growing the member network at each round of boosting. The proposed approach combines recent developements in Residual Networks - a method for creating very deep networks by including a shortcut layer between different groups of layers - with the Deep Incremental Boosting, which has been proposed as a methodology to train fast ensembles of networks of increasing depth through the use of boosting. We demonstrate that the synergy of Residual Networks and Deep Incremental Boosting has better potential than simply boosting a Residual Network of fixed structure or using the equivalent Deep Incremental Boosting without the shortcut layers.
The authors mention that they are not aiming to have SOTA results. However, the fact that an ensemble of resnets has lower performance than some single-network results indicates that further experimentation, preferably on larger datasets, is necessary. The literature review could at least mention some existing works such as wide resnets https://arxiv.org/abs/1605.07146 or the ones that use knowledge distillation for ensembles of networks for comparison on CIFAR. While the manuscript is well-written and the idea is novel, it needs to be extended with experiments.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nBoosted Residual Networks\nIn this paper we present a new ensemble method, called Boosted Residual Networks, which builds an ensemble of Residual Networks by growing the member network at each round of boosting. The proposed approach combines recent developements in Residual Networks - a method for creating very deep networks by including a shortcut layer between different groups of layers - with the Deep Incremental Boosting, which has been proposed as a methodology to train fast ensembles of networks of increasing depth through the use of boosting. We demonstrate that the synergy of Residual Networks and Deep Incremental Boosting has better potential than simply boosting a Residual Network of fixed structure or using the equivalent Deep Incremental Boosting without the shortcut layers.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors mention that they are not aiming to have SOTA results. However, that an ensemble of resnets has lower performance than some of single network results, indicates that further experimentation preferably on larger datasets is necessary. The literature review could at least mention some existing works such as wide resnets https://arxiv.org/abs/1605.07146 or the ones that use knowledge distillation for ensemble of networks for comparison on cifar. While the manuscript is well-written and the idea is novel, it needs to be extended with experiments.", "role": "assistant" } ]
[ "The authors mention that they are not aiming to have SOTA results.", "However, that an ensemble of resnets has lower performance than some of single network results, indicates that further experimentation preferably on larger datasets is necessary.", "The literature review could at least mention some existing works such as wide resnets https://arxiv.org/abs/1605.07146 or the ones that use knowledge distillation for ensemble of networks for comparison on cifar.", "While the manuscript is well-written and the idea is novel, it needs to be extended with experiments." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 3, "praise": 1, "presentation_and_reporting": 1, "results_and_discussion": 2, "suggestion_and_solution": 3, "total": 4 }
0
0
0
0.75
0.25
0.25
0.5
0.75
4
0
0
0
1
1
1
1
1
2.5
1.221623
1.278377
Boosted Residual Networks
In this paper we present a new ensemble method, called Boosted Residual Networks, which builds an ensemble of Residual Networks by growing the member network at each round of boosting. The proposed approach combines recent developments in Residual Networks - a method for creating very deep networks by including a shortcut layer between different groups of layers - with Deep Incremental Boosting, which has been proposed as a methodology to train fast ensembles of networks of increasing depth through the use of boosting. We demonstrate that the synergy of Residual Networks and Deep Incremental Boosting has better potential than simply boosting a Residual Network of fixed structure or using the equivalent Deep Incremental Boosting without the shortcut layers.
The paper under consideration proposes a set of procedures for incrementally expanding a residual network by adding layers via a boosting criterion. The main barrier to publication is the weak empirical validation. The tasks considered are quite small scale in 2016 (and MNIST with a convolutional net is basically an uninteresting test by this point). The paper doesn't compare to the literature, and CIFAR-10 results fail to improve upon rather simple, single-network published baselines (Springenberg et al., 2015, for example, obtains 92% without data augmentation), and I'm pretty sure there's a simple ResNet result somewhere that outshines these too. The CIFAR-100 results are a little bit interesting as they are better than I'm used to seeing (I haven't done a recent literature crawl), and this is unsurprising -- you'd expect ensembles to do well when there's a dearth of labeled training data, and here there are only a few hundred per label. But then it's typical on both CIFAR-10 and CIFAR-100 to use simple data augmentation schemes, which aren't employed here, and these and other forms of regularization are a simpler alternative to a complicated iterative augmentation scheme like this. It'd be easier to sell this method as an option for scarce labeled datasets where data augmentation is non-trivial (but then for most image-related applications, random crops and reflections are easy and valid), but that would necessitate different benchmarks, and comparison against simpler methods like said data augmentation, dropout (especially, due to the ensemble interpretation), and so on.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start by summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Does the work cover a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical, to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nBoosted Residual Networks\nIn this paper we present a new ensemble method, called Boosted Residual Networks, which builds an ensemble of Residual Networks by growing the member network at each round of boosting. The proposed approach combines recent developements in Residual Networks - a method for creating very deep networks by including a shortcut layer between different groups of layers - with the Deep Incremental Boosting, which has been proposed as a methodology to train fast ensembles of networks of increasing depth through the use of boosting. We demonstrate that the synergy of Residual Networks and Deep Incremental Boosting has better potential than simply boosting a Residual Network of fixed structure or using the equivalent Deep Incremental Boosting without the shortcut layers.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper under consideration proposes a set of procedures for incrementally expanding a residual network by adding layers via a boosting criterion. The main barrier to publication is the weak empirical validation. The tasks considered are quite small scale in 2016 (and MNIST with a convolutional net is basically an uninteresting test by this point). The paper doesn*t compare to the literature, and CIFAR-10 results fail to improve upon rather simple, single-network published baselines (Springenberg et al, 2015 for example, obtains 92% without data augmentation) and I*m pretty sure there*s a simple ResNet result somewhere that outshines these too. The CIFAR100 results are a little bit interesting as they are better than I*m used to seeing (I haven*t done a recent literature crawl), and this is unsurprising -- you*d expect ensembles to do well when there*s a dearth of labeled training data, and here there are only a few hundred per label. 
But then it*s typical on both CIFAR10 and CIFAR100 to use simple data augmentation schemes which aren*t employed here, and these and other forms of regularization are a simpler alternative to a complicated iterative augmentation scheme like this. It*d be easier to sell this method either as an option for scarce labeled datasets where data augmentation is non-trivial (but then for most image-related applications, random crops and reflections are easy and valid), but that would necessitate different benchmarks, and comparison against simpler methods like said data augmentation, dropout (especially, due to the ensemble interpretation), and so on.", "role": "assistant" } ]
[ "The paper under consideration proposes a set of procedures for incrementally expanding a residual network by adding layers via a boosting criterion.", "The main barrier to publication is the weak empirical validation.", "The tasks considered are quite small scale in 2016 (and MNIST with a convolutional net is basically an uninteresting test by this point).", "The paper doesn*t compare to the literature, and CIFAR-10 results fail to improve upon rather simple, single-network published baselines (Springenberg et al, 2015 for example, obtains 92% without data augmentation) and I*m pretty sure there*s a simple ResNet result somewhere that outshines these too.", "The CIFAR100 results are a little bit interesting as they are better than I*m used to seeing (I haven*t done a recent literature crawl), and this is unsurprising -- you*d expect ensembles to do well when there*s a dearth of labeled training data, and here there are only a few hundred per label.", "But then it*s typical on both CIFAR10 and CIFAR100 to use simple data augmentation schemes which aren*t employed here, and these and other forms of regularization are a simpler alternative to a complicated iterative augmentation scheme like this.", "It*d be easier to sell this method either as an option for scarce labeled datasets where data augmentation is non-trivial (but then for most image-related applications, random crops and reflections are easy and valid), but that would necessitate different benchmarks, and comparison against simpler methods like said data augmentation, dropout (especially, due to the ensemble interpretation), and so on." ]
{ "criticism": 4, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 7, "praise": 1, "presentation_and_reporting": 0, "results_and_discussion": 2, "suggestion_and_solution": 1, "total": 7 }
0.571429
0
0
1
0.142857
0
0.285714
0.142857
7
1
0
0
1
1
0
1
1
2.142857
1.57469
0.568167
Multi-task learning with deep model based reinforcement learning
In recent years, model-free methods that use deep learning have achieved great success in many different reinforcement learning environments. Most successful approaches focus on solving a single task, while multi-task reinforcement learning remains an open problem. In this paper, we present a model-based approach to deep reinforcement learning which we use to solve different tasks simultaneously. We show that our approach not only does not degrade but actually benefits from learning multiple tasks. For our model, we also present a new kind of recurrent neural network inspired by residual networks that decouples memory from computation, allowing it to model complex environments that do not require lots of memory. The code will be released before ICLR 2017.
This paper proposes a model-based reinforcement learning approach focusing on predicting future rewards given a current state and future actions. This is achieved with a 'residual recurrent neural network' that outputs the expected reward increase at various time steps in the future. To demonstrate the usefulness of this approach, experiments are conducted on Atari games, with a simple playing strategy that consists in evaluating random sequences of moves and picking the one with the highest expected reward (and a low enough chance of dying). Interestingly, out of the 3 games tested, one of them exhibits better performance when the agent is trained in a multitask setting (i.e. learning all games simultaneously), hinting that transfer learning is occurring. This submission is easy enough to read, and the reward prediction architecture looks like an original and sound idea. There are, however, several points that I believe prevent this work from reaching the ICLR bar, as detailed below. The first issue is the discrepancy between the algorithm proposed in Section 3 vs. its actual implementation in Section 4 (experiments): in Section 3 the output is supposed to be the expected accumulated reward in future time steps (as a single scalar), while in the experiments it is instead two numbers, one of which is the probability of dying and the other of which is the probability of having a higher score without dying. This might work better, but it also means the idea as presented in the main body of the paper is not actually evaluated (and I guess it would not work well, as otherwise why implement it differently?) In addition, the experimental results are quite limited: only on 3 games that were hand-picked to be easy enough, and no comparison to other RL techniques (DQN & friends). I realize that the main focus of the paper is not about exhibiting state-of-the-art results, since the policy being used is only a simple heuristic to show that the model predictions can be used to drive decisions. That being said, I think the experiments should have tried to demonstrate how to use this model to obtain better reinforcement learning algorithms: there is actually no reinforcement learning done here, since the model is a supervised algorithm, used in a manually-defined hardcoded policy. Another question that could have been addressed (but was not) in the experiments is how good these predictions are (e.g. classification error on dying probability, MSE on future rewards, ...), compared to simpler baselines. Finally, the paper's 'previous work' section is too limited, focusing only on DQN and in particular saying very little on the topic of model-based RL. I think a paper like, for instance, 'Action-Conditional Video Prediction using Deep Networks in Atari Games' should have been an obvious 'must cite'. Minor comments: - Notations are unusual, with 'a' denoting a state rather than an action; this is potentially confusing and I see no reason to stray away from standard RL notations - Using a dot for tensor concatenation is not a great choice either, since the dot usually indicates a dot product - The r_i in 3.2.2 is a residual that has nothing to do with r_i the reward - c_i is defined as 'The control that was performed at time i', but instead it seems to be the control performed at time i-1 - There is a recurrent confusion between mean and median in 3.2.2 - x should not be used in Observation 1 since the x from Fig. 3 does not go through layer normalization - The inequality in Observation 1 should be about |x_i|, not x_i - Observation 1 (with its proof) takes too much space for such a simple result - In 3.2.3 the first r_j should be r_i - The probability of dying comes out of nowhere in 3.3, since we do not know yet that it will be an output of the model - 'Our approach is not able to learn from good strategies' => did you mean 'only from good strategies'? - Please say that in Fig. 4 'fc' means 'fully connected' - It would be nice also to say how the architecture of Fig. 4 differs from the classical DQN architecture from Mnih et al. (2015) - Please clarify r_j2 as per your answer in the OpenReview comments - Table 3 says 'After one iteration' but has 'PRL Iteration 2' in it, which is confusing - 'Figure 5 shows that not only there is no degradation in Pong and Demon Attack' => to me it seems to be a bit worse, actually - 'A model that has learned only from random play is able to play at least 7 times better.' => not clear where this 7 comes from - 'Demon Attack's plot in Figure 5c shows a potential problem we mentioned earlier' => where was it mentioned?
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start by summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Does the work cover a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical, to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMulti-task learning with deep model based reinforcement learning\nIn recent years, model-free methods that use deep learning have achieved great success in many different reinforcement learning environments. Most successful approaches focus on solving a single task, while multi-task reinforcement learning remains an open problem. In this paper, we present a model based approach to deep reinforcement learning which we use to solve different tasks simultaneously. We show that our approach not only does not degrade but actually benefits from learning multiple tasks. For our model, we also present a new kind of recurrent neural network inspired by residual networks that decouples memory from computation allowing to model complex environments that do not require lots of memory. The code will be released before ICLR 2017.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a model-based reinforcement learning approach focusing on predicting future rewards given a current state and future actions. This is achieved with a *residual recurrent neural network*, that outputs the expected reward increase at various time steps in the future. To demonstrate the usefulness of this approach, experiments are conducted on Atari games, with a simple playing strategy that consists in evaluating random sequences of moves and picking the one with highest expected reward (and low enough chance of dying). Interestingly, out of the 3 games tested, one of them exhibits better performance when the agent is trained in a multitask setting (i.e. learning all games simultaneously), hinting that transfer learning is occurring. This submission is easy enough to read, and the reward prediction architecture looks like an original and sound idea. There are however several points that I believe prevent this work from reaching the ICLR bar, as detailed below. 
The first issue is the discrepancy between the algorithm proposed in Section 3 vs its actual implementation in Section 4 (experiments): in Section 3 the output is supposed to be the expected accumulated reward in future time steps (as a single scalar), while in experiments it is instead two numbers, one which is the probability of dying and another one which is the probability of having a higher score without dying. This might work better, but it also means the idea as presented in the main body of the paper is not actually evaluated (and I guess it would not work well, as otherwise why implement it differently?) In addition, the experimental results are quite limited: only on 3 games that were hand-picked to be easy enough, and no comparison to other RL techniques (DQN & friends). I realize that the main focus of the paper is not about exhibiting state-of-the-art results, since the policy being used is only a simple heuristic to show that the model predictions can ne used to drive decisions. That being said, I think experiments should have tried to demonstrate how to use this model to obtain better reinforcement learning algorithms: there is actually no reinforcement learning done here, since the model is a supervised algorithm, used in a manually-defined hardcoded policy. Another question that could have been addressed (but was not) in the experiments is how good these predictions are (e.g. classification error on dying probability, MSE on future rewards, ...), compared to simpler baselines. Finally, the paper*s *previous work* section is too limited, focusing only on DQN and in particular saying very little on the topic of model-based RL. I think a paper like for instance *Action-Conditional Video Prediction using Deep Networks in Atari Games* should have been an obvious *must cite*. Minor comments: - Notations are unusual, with *a* denoting a state rather than an action, this is potentially confusing and I see no reason to stray away from standard RL notations - Using a dot for tensor concatenation is not a great choice either, since the dot usually indicates a dot product - The r_i in 3.2.2 is a residual that has nothing to do with r_i the reward - c_i is defined as *The control that was performed at time i*, but instead it seems to be the control performed at time i-1 - There is a recurrent confusion between mean and median in 3.2.2 - x should not be used in Observation 1 since the x from Fig. 3 does not go through layer normalization - The inequality in Observation 1 should be about |x_i|, not x_i - Observation 1 (with its proof) takes too much space for such a simple result - In 3.2.3 the first r_j should be r_i - The probability of dying comes out of nowhere in 3.3, since we do not know yet it will be an output of the model - *Our approach is not able to learn from good strategies* => did you mean **only* from good strategies*? - Please say that in Fig. 4 *fc* means *fully connected* - It would be nice also to say how the architecture of Fig. 
4 differs from the classical DQN architecture from Mnih et al (2015) - Please clarify r_j2 as per your answer in OpenReview comments - Table 3 says *After one iteration* but has *PRL Iteration 2* in it, which is confusing - *Figure 5 shows that not only there is no degradation in Pong and Demon Attack*=> to me it seems to be a bit worse, actually - *A model that has learned only from random play is able to play at least 7 times better.* => not clear where this 7 comes from - *Demon Attack*s plot in Figure 5c shows a potential problem we mentioned earlier* => where was it mentioned?", "role": "assistant" } ]
[ "This paper proposes a model-based reinforcement learning approach focusing on predicting future rewards given a current state and future actions.", "This is achieved with a *residual recurrent neural network*, that outputs the expected reward increase at various time steps in the future.", "To demonstrate the usefulness of this approach, experiments are conducted on Atari games, with a simple playing strategy that consists in evaluating random sequences of moves and picking the one with highest expected reward (and low enough chance of dying).", "Interestingly, out of the 3 games tested, one of them exhibits better performance when the agent is trained in a multitask setting (i.e.", "learning all games simultaneously), hinting that transfer learning is occurring.", "This submission is easy enough to read, and the reward prediction architecture looks like an original and sound idea.", "There are however several points that I believe prevent this work from reaching the ICLR bar, as detailed below.", "The first issue is the discrepancy between the algorithm proposed in Section 3 vs its actual implementation in Section 4 (experiments): in Section 3 the output is supposed to be the expected accumulated reward in future time steps (as a single scalar), while in experiments it is instead two numbers, one which is the probability of dying and another one which is the probability of having a higher score without dying.", "This might work better, but it also means the idea as presented in the main body of the paper is not actually evaluated (and I guess it would not work well, as otherwise why implement it differently?)", "In addition, the experimental results are quite limited: only on 3 games that were hand-picked to be easy enough, and no comparison to other RL techniques (DQN & friends).", "I realize that the main focus of the paper is not about exhibiting state-of-the-art results, since the policy being used is only a simple heuristic to show that the model predictions can ne used to drive decisions.", "That being said, I think experiments should have tried to demonstrate how to use this model to obtain better reinforcement learning algorithms: there is actually no reinforcement learning done here, since the model is a supervised algorithm, used in a manually-defined hardcoded policy.", "Another question that could have been addressed (but was not) in the experiments is how good these predictions are (e.g.", "classification error on dying probability, MSE on future rewards, ...), compared to simpler baselines.", "Finally, the paper*s *previous work* section is too limited, focusing only on DQN and in particular saying very little on the topic of model-based RL.", "I think a paper like for instance *Action-Conditional Video Prediction using Deep Networks in Atari Games* should have been an obvious *must cite*.", "Minor comments: - Notations are unusual, with *a* denoting a state rather than an action, this is potentially confusing and I see no reason to stray away from standard RL notations - Using a dot for tensor concatenation is not a great choice either, since the dot usually indicates a dot product - The r_i in 3.2.2 is a residual that has nothing to do with r_i the reward - c_i is defined as *The control that was performed at time i*, but instead it seems to be the control performed at time i-1 - There is a recurrent confusion between mean and median in 3.2.2 - x should not be used in Observation 1 since the x from Fig.", "3 does not go through layer normalization - The inequality in 
Observation 1 should be about |x_i|, not x_i - Observation 1 (with its proof) takes too much space for such a simple result - In 3.2.3 the first r_j should be r_i - The probability of dying comes out of nowhere in 3.3, since we do not know yet it will be an output of the model - *Our approach is not able to learn from good strategies* => did you mean **only* from good strategies*?", "- Please say that in Fig.", "4 *fc* means *fully connected* - It would be nice also to say how the architecture of Fig.", "4 differs from the classical DQN architecture from Mnih et al (2015) - Please clarify r_j2 as per your answer in OpenReview comments - Table 3 says *After one iteration* but has *PRL Iteration 2* in it, which is confusing - *Figure 5 shows that not only there is no degradation in Pong and Demon Attack*=> to me it seems to be a bit worse, actually - *A model that has learned only from random play is able to play at least 7 times better.", "* => not clear where this 7 comes from - *Demon Attack*s plot in Figure 5c shows a potential problem we mentioned earlier* => where was it mentioned?" ]
{ "criticism": 7, "example": 4, "importance_and_relevance": 3, "materials_and_methods": 16, "praise": 3, "presentation_and_reporting": 7, "results_and_discussion": 6, "suggestion_and_solution": 8, "total": 22 }
0.318182
0.181818
0.136364
0.727273
0.136364
0.318182
0.272727
0.363636
22
1
1
1
1
1
1
1
1
2.454545
1.176169
1.278377
An Analysis of Deep Neural Network Models for Practical Applications
Since the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilisation of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraints are an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs.
The paper evaluates recent developments in competitive ILSVRC CNN architectures from the perspective of resource utilization. It is clear that a lot of work has been put into the evaluations. The findings are well presented and the topic itself is important. However, most of the results are not surprising to people working with CNNs on a regular basis. And even if they are, I am not convinced of their practical value. It is hard to tell what we actually learn from these findings when approaching new problems with computational constraints or when in production settings. In my opinion, this is mainly because the paper does not discuss realistic circumstances. Main concerns: 1) The evaluation does not tell me much about realistic scenarios, which mostly involve fine-tuning networks, as ILSVRC is just a starting point in most cases. VGG for instance really shines for fine-tuning, but it is cumbersome to train from scratch. And VGG works well for compression, too. So possibly it is a very good choice if these by now standard steps are taken into account. Such questions are of high practical relevance! 2) Compressed networks have a much higher accuracy/parameter density, so a comparison of how well models can be compressed is important, or at least a comparison to some of the most well-known and publicly available compressed networks. 3) There is no analysis of the actual topology of the networks and where the bottlenecks lie. This would be very useful to have as well. Minor concern: 1) Why did the authors choose to use batch normalization in NiN and AlexNet?
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start by summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Does the work cover a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical, to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAn Analysis of Deep Neural Network Models for Practical Applications\nSince the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilisation of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraint are an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper evaluates recent development in competitive ILSVRC CNN architectures from the perspective of resource utilization. It is clear that a lot of work has been put into the evaluations. The findings are well presented and the topic itself is important. However, most of the results are not surprising to people working with CNNs on a regular basis. And even if they are, I am not convinced about their practical value. It is hard to tell what we actually learn from these findings when approaching new problems with computational constraints or when in production settings. In my opinion, this is mainly because the paper does not discuss realistic circumstances. Main concerns: 1) The evaluation does not tell me much for realistic scenarios, that mostly involve fine-tuning networks, as ILSVRC is just a starting point in most cases. VGG for instance really shines for fine-tuning, but it is cumbersome to train from scratch. 
And VGG works well for compression, too. So possibly it is a very good choice if these by now standard steps are taken into account. Such questions are of high practical relevance! 2) Compressed networks have a much higher acc/parameter density, so comparison how well models can be compressed is important, or at least comparing to some of the most well-known and publicly available compressed networks. 3) There is no analysis on the actual topology of the networks and where the bottlenecks lie. This would be very useful to have as well. Minor concern: 1) Why did the authors choose to use batch normalization in NiN and AlexNet?", "role": "assistant" } ]
[ "The paper evaluates recent development in competitive ILSVRC CNN architectures from the perspective of resource utilization.", "It is clear that a lot of work has been put into the evaluations.", "The findings are well presented and the topic itself is important.", "However, most of the results are not surprising to people working with CNNs on a regular basis.", "And even if they are, I am not convinced about their practical value.", "It is hard to tell what we actually learn from these findings when approaching new problems with computational constraints or when in production settings.", "In my opinion, this is mainly because the paper does not discuss realistic circumstances.", "Main concerns: 1) The evaluation does not tell me much for realistic scenarios, that mostly involve fine-tuning networks, as ILSVRC is just a starting point in most cases.", "VGG for instance really shines for fine-tuning, but it is cumbersome to train from scratch.", "And VGG works well for compression, too.", "So possibly it is a very good choice if these by now standard steps are taken into account.", "Such questions are of high practical relevance!", "2) Compressed networks have a much higher acc/parameter density, so comparison how well models can be compressed is important, or at least comparing to some of the most well-known and publicly available compressed networks.", "3) There is no analysis on the actual topology of the networks and where the bottlenecks lie.", "This would be very useful to have as well.", "Minor concern: 1) Why did the authors choose to use batch normalization in NiN and AlexNet?" ]
{ "criticism": 7, "example": 0, "importance_and_relevance": 6, "materials_and_methods": 9, "praise": 6, "presentation_and_reporting": 0, "results_and_discussion": 5, "suggestion_and_solution": 1, "total": 16 }
0.4375
0
0.375
0.5625
0.375
0
0.3125
0.0625
16
1
0
1
1
1
0
1
1
2.125
1.982958
0.142042
An Analysis of Deep Neural Network Models for Practical Applications
Since the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilisation of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraints are an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs.
The authors did solid work in collecting all the reported data. However, most findings don't seem too surprising to me: - Finding #1 mainly shows that all architectures and batch sizes manage to utilize the GPU fully (or to the same percentage). - Regarding Finding #2, I agree that from a linear relationship in Figure 9 you could conclude said hyperbolic relationship. However, for this finding to be relevant, it has to hold especially for the latest generations of models. These cluster in the upper left corner of Figure 9 and on their own do not seem to show too much of a linear behaviour. Therefore I think there is not enough evidence to conclude asymptotic hyperbolic behaviour: for this, the linear behaviour would have to be the stronger the more models approach the upper left corner. - Finding #3 seems to be a simple conclusion from Finding #1: as long as slower models are better and faster models draw the same power, Finding #3 holds. - Finding #4 is again similar to Finding #1: if all architectures manage to fully utilize the GPU, inference time should be proportional to the number of operations. Maybe the most interesting finding would be that all tested models seem to use the same percentage of the computational resources available on the GPU, while one might expect that more complex models don't manage to utilize as many computational resources due to inter-dependencies. However, actual GPU utilization was not evaluated, and as the authors chose to use an older GPU, one would expect that all models manage to make use of all available computational power. Additionally, I think these findings would have to be put in relation to compression techniques or tested on actual production networks to be of more interest.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start by summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Does the work cover a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical, to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAn Analysis of Deep Neural Network Models for Practical Applications\nSince the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilisation of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraint are an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors did solid work in collecting all the reported data. However, most findings don*t seem to be too surprising to me: - Finding #1 mainly shows that all architectures and batch sizes manage to utilize the GPU fully (or to the same percentage). - Regarding Finding #2, I agree that from a linear relationship in Figure 9 you could conclude said hyperbolic relationship. However, for this finding to be relevant, it has to hold especially for the latest generations of models. These cluster in the upper left corner of Figure 9 and on their own do not seem to show too much of a linear behaviour. Therefore I think there is not enough evidence to conclude asymptotic hyperbolic behaviour: For this the linear behaviour would have to be the stronger, the more models approach the upper left corner. 
- Finding #3 seems to be a simple conclusion from finding #1: As long as slower models are better and faster models do draw the same power, finding #3 holds. - Finding #4 is again similar to finding #1: If all architectures manage to fully utilize the GPU, inference time should be proportional to the number of operations. Maybe the most interesting finding would be that all tested models seem to use the same percentage of computational resources available on the GPU, while one might expect that more complex models don*t manage to utilize as much computational resources due to inter-dependencies. However actual GPU utilization was not evaluated and as the authors choose to use an older GPU, one would expect that all models manage to make use of all available computational power. Additionally, I think these findings would have to be put in relation with compressing techniques or tested on actual production networks to be of more interest.", "role": "assistant" } ]
[ "The authors did solid work in collecting all the reported data.", "However, most findings don*t seem to be too surprising to me: - Finding #1 mainly shows that all architectures and batch sizes manage to utilize the GPU fully (or to the same percentage).", "- Regarding Finding #2, I agree that from a linear relationship in Figure 9 you could conclude said hyperbolic relationship.", "However, for this finding to be relevant, it has to hold especially for the latest generations of models.", "These cluster in the upper left corner of Figure 9 and on their own do not seem to show too much of a linear behaviour.", "Therefore I think there is not enough evidence to conclude asymptotic hyperbolic behaviour: For this the linear behaviour would have to be the stronger, the more models approach the upper left corner.", "- Finding #3 seems to be a simple conclusion from finding #1: As long as slower models are better and faster models do draw the same power, finding #3 holds.", "- Finding #4 is again similar to finding #1: If all architectures manage to fully utilize the GPU, inference time should be proportional to the number of operations.", "Maybe the most interesting finding would be that all tested models seem to use the same percentage of computational resources available on the GPU, while one might expect that more complex models don*t manage to utilize as much computational resources due to inter-dependencies.", "However actual GPU utilization was not evaluated and as the authors choose to use an older GPU, one would expect that all models manage to make use of all available computational power.", "Additionally, I think these findings would have to be put in relation with compressing techniques or tested on actual production networks to be of more interest." ]
{ "criticism": 3, "example": 1, "importance_and_relevance": 2, "materials_and_methods": 9, "praise": 1, "presentation_and_reporting": 0, "results_and_discussion": 8, "suggestion_and_solution": 5, "total": 11 }
0.272727
0.090909
0.181818
0.818182
0.090909
0
0.727273
0.454545
11
1
1
1
1
1
0
1
1
2.636364
2.573234
0.06313
An Analysis of Deep Neural Network Models for Practical Applications
Since the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilisation of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraints are an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs.
A few issues with this paper: 1- I find Finding #2 trivial and unworthy of mention, but the authors don't seem to agree with me that it is. See discussions. 2- Finding #1 relies on Fig #4, which appears very noisy and doesn't provide any error analysis. It makes me question how robust this finding is. One would have naively expected the power usage trend to mirror Fig #3, but given the level of noise, I can't convince myself whether the null hypothesis of there being no dependency between batch size and power consumption is more likely than the alternative. 3- The paper is unfriendly to colorblind readers (or those with B/W printers). Overall, this paper is a reasonable review of where we are in terms of SOTA vision architectures, but doesn't provide much new insight. I found most interesting the clear illustration that VGG models stand out as a bad tradeoff in resource-constrained environments (too many researchers are tempted to benchmark their model compression algorithm on VGG-class models because that's always where one can show 10x improvements without doing much.)
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start by summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Does the work cover a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical, to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAn Analysis of Deep Neural Network Models for Practical Applications\nSince the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilisation of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraint are an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nA few issues with this paper: 1- I find finding #2 trivial and unworthy of mention, but the author don*t seem to agree with me that it is. See discussions. 2- Finding #1 relies on Fig #4, which appears very noisy and doesn*t provide any error analysis. It makes me question how robust this finding is. One would have naively expected the power usage trend to mirror Fig #3, but given the level of noise, I can*t convince myself whether the null hypothesis of there being no dependency between batch size and power consumption is more likely than the alternative. 3- Paper is unfriendly to colorblind readers (or those with B/W printers) Overall, this paper is a reasonable review of where we are in terms of SOTA vision architectures, but doesn*t provide much new insight. 
I found most interesting the clear illustration that VGG models stand out in terms of being a bad tradeoff in resource-constrained environments (too many researchers are tempted to benchmark their model compression algorithm on VGG-class models because that's always where one can show 10x improvements without doing much.)", "role": "assistant" } ]
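The reviewer's second point is, at bottom, a hypothesis test: does power draw depend on batch size once measurement noise is accounted for? A minimal sketch of how one could run such a test is below, assuming plain NumPy; every number in it is invented for illustration (real GPU power readings would replace `power_watts`), and it is not code from the paper under review.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sweep: five repeated power readings per batch size.
batch_sizes = np.repeat([1, 2, 4, 8, 16, 32, 64], 5)
power_watts = 10.0 + rng.normal(0.0, 1.5, size=batch_sizes.size)  # null-like data

def dependence_stat(x, y):
    """Absolute Pearson correlation as a simple dependence statistic."""
    return abs(np.corrcoef(x, y)[0, 1])

observed = dependence_stat(batch_sizes, power_watts)

# Under the null hypothesis (no dependency), shuffling the power readings
# across batch sizes should often give a statistic as large as the observed one.
n_perm = 10_000
perm = np.array([dependence_stat(batch_sizes, rng.permutation(power_watts))
                 for _ in range(n_perm)])
p_value = (1 + np.sum(perm >= observed)) / (1 + n_perm)
print(f"observed |r| = {observed:.3f}, permutation p-value = {p_value:.3f}")
```

A small p-value would argue against "power consumption is independent of batch size"; with noise as large as the reviewer describes, the test will usually fail to reject.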
[ "A few issues with this paper: 1- I find finding #2 trivial and unworthy of mention, but the author don*t seem to agree with me that it is.", "See discussions.", "2- Finding #1 relies on Fig #4, which appears very noisy and doesn*t provide any error analysis.", "It makes me question how robust this finding is.", "One would have naively expected the power usage trend to mirror Fig #3, but given the level of noise, I can*t convince myself whether the null hypothesis of there being no dependency between batch size and power consumption is more likely than the alternative.", "3- Paper is unfriendly to colorblind readers (or those with B/W printers) Overall, this paper is a reasonable review of where we are in terms of SOTA vision architectures, but doesn*t provide much new insight.", "I found most interesting the clear illustration that VGG models stand out in terms of being a bad tradeoff in resource-constrained environments (too many researchers are tempted to benchmark their model compression algorithm on VGG-class models because that*s always where one can show 10x improvements without doing much.)" ]
{ "criticism": 4, "example": 2, "importance_and_relevance": 1, "materials_and_methods": 4, "praise": 1, "presentation_and_reporting": 1, "results_and_discussion": 4, "suggestion_and_solution": 0, "total": 7 }
0.571429
0.285714
0.142857
0.571429
0.142857
0.142857
0.571429
0
7
1
1
1
1
1
1
1
0
2.428571
1.860404
0.568167
Playing SNES in the Retro Learning Environment
Mastering a video game requires skill, tactics and strategy. While these attributes may be acquired naturally by human players, teaching them to a computer program is a far more challenging task. In recent years, extensive research was carried out in the field of reinforcement learning and numerous algorithms were introduced, aiming to learn how to perform human tasks such as playing video games. As a result, the Arcade Learning Environment (ALE) (Bellemare et al., 2013) has become a commonly used benchmark environment allowing algorithms to train on various Atari 2600 games. In many games the state-of-the-art algorithms outperform humans. In this paper we introduce a new learning environment, the Retro Learning Environment — RLE, that can run games on the Super Nintendo Entertainment System (SNES), Sega Genesis and several other gaming consoles. The environment is expandable, allowing for more video games and consoles to be easily added to the environment, while maintaining the same interface as ALE. Moreover, RLE is compatible with Python and Torch. SNES games pose a significant challenge to current algorithms due to their higher level of complexity and versatility.
This paper presents a valuable new collection of video game benchmarks, in an extendable framework, and establishes initial baselines on a few of them. Reward structures: for how many of the possible games have you implemented the means to extract scores and incremental reward structures? From the github repo it looks like about 10 -- do you plan to add more, and when? “rivalry” training: this is one of the weaker components of the paper, and it should probably be emphasised less. On this topic, there is a vast body of (uncited) multi-agent literature, it is a well-studied problem setup (more so than RL itself). To avoid controversy, I would recommend not claiming any novel contribution on the topic (I don’t think that you really invented “a new method to train an agent by enabling it to train against several opponents” nor “a new benchmarking technique for agents evaluation, by enabling them to compete against each other, rather than playing against the in-game AI”). Instead, just explain that you have established single-agent and multi-agent baselines for your new benchmark suite. Your definition of Q-function (“predicts the score at the end of the game given the current state and selected action”) is incorrect. It should read something like: it estimates the cumulative discounted reward that can be obtained from state s, starting with action a (and then following a certain policy). Minor: * Eq (1): the Q-net inside the max() is the target network, with different parameters theta’ * the Du et al. reference is missing the year * some of the other references should point at the corresponding published papers instead of the arxiv versions
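Written out, the two corrections the reviewer asks for are standard textbook reinforcement learning; the following LaTeX sketch states them explicitly (our notation: gamma is the discount factor, theta' the target-network parameters — none of this is quoted from the paper itself).

```latex
\begin{align}
% Q-function: expected cumulative discounted reward, not the end-of-game score.
Q^{\pi}(s,a) &= \mathbb{E}\Big[\textstyle\sum_{t \ge 0} \gamma^{t} r_{t} \,\Big|\, s_{0}=s,\ a_{0}=a,\ \pi \Big] \\
% DQN-style target for Eq. (1): the max is over the target network Q_{\theta'}.
y &= r + \gamma \max_{a'} Q_{\theta'}(s', a')
\end{align}
```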
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPlaying SNES in the Retro Learning Environment\nMastering a video game requires skill, tactics and strategy. While these attributes may be acquired naturally by human players, teaching them to a computer program is a far more challenging task. In recent years, extensive research was carriedout in the field of reinforcement learning and numerous algorithms were introduced, aiming to learn how to perform human tasks such as playing video games. As a result, the Arcade Learning Environment (ALE) (Bellemare et al., 2013) has become a commonly used benchmark environment allowing algorithms to train on various Atari 2600 games. In many games the state-of-the-art algorithms outperform humans. In this paper we introduce a new learning environment, the Retro Learning Environment — RLE, that can run games on the Super Nintendo Entertainment System (SNES), Sega Genesis and several other gaming consoles. The environment is expandable, allowing for more video games and consoles to be easily added to the environment, while maintaining the same interface as ALE. Moreover, RLE is compatible with Python and Torch. SNES games pose a significant challenge to current algorithms due to their higher level of complexity and versatility.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents a valuable new collection of video game benchmarks, in an extendable framework, and establishes initial baselines on a few of them. Reward structures: for how many of the possible games have you implemented the means to extract scores and incremental reward structures? From the github repo it looks like about 10 -- do you plan to add more, and when? “rivalry” training: this is one of the weaker components of the paper, and it should probably be emphasised less. On this topic, there is a vast body of (uncited) multi-agent literature, it is a well-studied problem setup (more so than RL itself). 
To avoid controversy, I would recommend not claiming any novel contribution on the topic (I don’t think that you really invented “a new method to train an agent by enabling it to train against several opponents” nor “a new benchmarking technique for agents evaluation, by enabling them to compete against each other, rather than playing against the in-game AI”). Instead, just explain that you have established single-agent and multi-agent baselines for your new benchmark suite. Your definition of Q-function (“predicts the score at the end of the game given the current state and selected action”) is incorrect. It should read something like: it estimates the cumulative discounted reward that can be obtained from state s, starting with action a (and then following a certain policy). Minor: * Eq (1): the Q-net inside the max() is the target network, with different parameters theta’ * the Du et al. reference is missing the year * some of the other references should point at the corresponding published papers instead of the arxiv versions", "role": "assistant" } ]
[ "This paper presents a valuable new collection of video game benchmarks, in an extendable framework, and establishes initial baselines on a few of them.", "Reward structures: for how many of the possible games have you implemented the means to extract scores and incremental reward structures?", "From the github repo it looks like about 10 -- do you plan to add more, and when?", "“rivalry” training: this is one of the weaker components of the paper, and it should probably be emphasised less.", "On this topic, there is a vast body of (uncited) multi-agent literature, it is a well-studied problem setup (more so than RL itself).", "To avoid controversy, I would recommend not claiming any novel contribution on the topic (I don’t think that you really invented “a new method to train an agent by enabling it to train against several opponents” nor “a new benchmarking technique for agents evaluation, by enabling them to compete against each other, rather than playing against the in-game AI”).", "Instead, just explain that you have established single-agent and multi-agent baselines for your new benchmark suite.", "Your definition of Q-function (“predicts the score at the end of the game given the current state and selected action”) is incorrect.", "It should read something like: it estimates the cumulative discounted reward that can be obtained from state s, starting with action a (and then following a certain policy).", "Minor: * Eq (1): the Q-net inside the max() is the target network, with different parameters theta’ * the Du et al.", "reference is missing the year * some of the other references should point at the corresponding published papers instead of the arxiv versions" ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 8, "praise": 2, "presentation_and_reporting": 3, "results_and_discussion": 1, "suggestion_and_solution": 6, "total": 11 }
0.181818
0
0.272727
0.727273
0.181818
0.272727
0.090909
0.545455
11
1
0
1
1
1
1
1
1
2.272727
2.209598
0.06313
LipNet: End-to-End Sentence-level Lipreading
Lipreading is the task of decoding text from the movement of a speaker's mouth. Traditional approaches separated the problem into two stages: designing or learning visual features, and prediction. More recent deep lipreading approaches are end-to-end trainable (Wand et al., 2016; Chung & Zisserman, 2016a). However, existing work on models trained end-to-end performs only word classification, rather than sentence-level sequence prediction. Studies have shown that human lipreading performance increases for longer words (Easton & Basala, 1982), indicating the importance of features capturing temporal context in an ambiguous communication channel. Motivated by this observation, we present LipNet, a model that maps a variable-length sequence of video frames to text, making use of spatiotemporal convolutions, a recurrent network, and the connectionist temporal classification loss, trained entirely end-to-end. To the best of our knowledge, LipNet is the first end-to-end sentence-level lipreading model that simultaneously learns spatiotemporal visual features and a sequence model. On the GRID corpus, LipNet achieves 95.2% accuracy on the sentence-level, overlapped speaker split task, outperforming experienced human lipreaders and the previous 86.4% word-level state-of-the-art accuracy (Gergen et al., 2016).
The authors present a well thought out and constructed system for performing lipreading. The primary novelty is the end-to-end nature of the system for lipreading, with the sentence-level prediction also differentiating it from prior work. The described neural network architecture contains convolutional and recurrent layers with a CTC sequence loss at the end, and beam search decoding with an LM is done to obtain best results. Performance is evaluated on the GRID dataset, with some saliency map and confusion matrix analysis provided as well. Overall, the work seems of high quality and clearly written with detailed explanations. The final results and analysis appear good as well. One gripe is that the novelty lies in the choice of application domain as opposed to the methods. Lack of word-level comparisons also makes it difficult to determine the importance of using sentence-level information vs. choices in model architecture/decoding, and finally, the GRID dataset itself appears limited with the grammar and use of an n-gram dictionary. Clearly the system is well engineered and final results impress, though it's unclear how much broader insight the results yield.
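For readers unfamiliar with the pipeline the review summarizes (spatiotemporal convolutions, a recurrent layer, CTC loss), here is a minimal runnable sketch, assuming PyTorch. All layer sizes, the 50×100 mouth crop, and the 28-symbol vocabulary are illustrative guesses of ours, not the paper's architecture, and beam-search decoding with a language model is omitted.

```python
import torch
import torch.nn as nn

class LipReaderSketch(nn.Module):
    def __init__(self, vocab_size=28, hidden=256):
        super().__init__()
        # 3D convolution over (time, height, width) of the frame sequence.
        self.stcnn = nn.Sequential(
            nn.Conv3d(3, 32, (3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d((1, 4, 4)),             # keep time resolution for CTC
        )
        self.gru = nn.GRU(input_size=32 * 12 * 25, hidden_size=hidden,
                          bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, vocab_size)  # vocab incl. CTC blank

    def forward(self, video):                    # video: (B, 3, T, 50, 100)
        feats = self.stcnn(video)                # (B, 32, T, 12, 25)
        b, c, t, h, w = feats.shape
        feats = feats.permute(0, 2, 1, 3, 4).reshape(b, t, c * h * w)
        out, _ = self.gru(feats)                 # (B, T, 2*hidden)
        return self.proj(out).log_softmax(-1)    # per-frame log-probs

model = LipReaderSketch()
video = torch.randn(2, 3, 20, 50, 100)           # two clips of 20 frames
log_probs = model(video).permute(1, 0, 2)        # CTCLoss expects (T, B, V)
targets = torch.randint(1, 28, (2, 10))          # character ids, 0 = blank
loss = nn.CTCLoss(blank=0)(
    log_probs, targets,
    input_lengths=torch.full((2,), 20, dtype=torch.long),
    target_lengths=torch.full((2,), 10, dtype=torch.long))
```

CTC is what makes the model sentence-level: it aligns the T per-frame distributions to a shorter character sequence without requiring frame-wise labels.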
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLipNet: End-to-End Sentence-level Lipreading\nLipreading is the task of decoding text from the movement of a speaker*s mouth. Traditional approaches separated the problem into two stages: designing or learning visual features, and prediction. More recent deep lipreading approaches are end-to-end trainable (Wand et al., 2016; Chung & Zisserman, 2016a). However, existing work on models trained end-to-end perform only word classification, rather than sentence-level sequence prediction. Studies have shown that human lipreading performance increases for longer words (Easton & Basala, 1982), indicating the importance of features capturing temporal context in an ambiguous communication channel. Motivated by this observation, we present LipNet, a model that maps a variable-length sequence of video frames to text, making use of spatiotemporal convolutions, a recurrent network, and the connectionist temporal classification loss, trained entirely end-to-end. To the best of our knowledge, LipNet is the first end-to-end sentence-level lipreading model that simultaneously learns spatiotemporal visual features and a sequence model. On the GRID corpus, LipNet achieves 95.2% accuracy in sentence-level, overlapped speaker split task, outperforming experienced human lipreaders and the previous 86.4% word-level state-of-the-art accuracy (Gergen et al., 2016).", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors present a well thought out and constructed system for performing lipreading. The primary novelty is the end-to-end nature of the system for lipreading, with the sentence-level prediction also differentiating this with prior work. The described neural network architecture contains convolutional and recurrent layers with a CTC sequence loss at the end, and beam search decoding with an LM is done to obtain best results. Performance is evaluated on the GRID dataset, with some saliency map and confusion matrix analysis provided as well. 
Overall, the work seems of high quality and clearly written with detailed explanations. The final results and analysis appear good as well. One gripe is that the novelty lies in the choice of application domain as opposed to the methods. Lack of word-level comparisons also makes it difficult to determine the importance of using sentence-level information vs. choices in model architecture/decoding, and finally, the GRID dataset itself appears limited with the grammar and use of an n-gram dictionary. Clearly the system is well engineered and final results impress, though it's unclear how much broader insight the results yield.", "role": "assistant" } ]
[ "The authors present a well thought out and constructed system for performing lipreading.", "The primary novelty is the end-to-end nature of the system for lipreading, with the sentence-level prediction also differentiating this with prior work.", "The described neural network architecture contains convolutional and recurrent layers with a CTC sequence loss at the end, and beam search decoding with an LM is done to obtain best results.", "Performance is evaluated on the GRID dataset, with some saliency map and confusion matrix analysis provided as well.", "Overall, the work seems of high quality and clearly written with detailed explanations.", "The final results and analysis appear good as well.", "One gripe is that that the novelty lies in the choice of application domain as opposed to the methods.", "Lack of word-level comparisons also makes it difficult to determine the importance of using sentence-level information vs. choices in model architecture/decoding, and finally, the GRID dataset itself appears limited with the grammar and use of a n-gram dictionary.", "Clearly the system is well engineered and final results impress, though it*s unclear how much broader insight the results yield." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 8, "praise": 4, "presentation_and_reporting": 2, "results_and_discussion": 3, "suggestion_and_solution": 0, "total": 9 }
0.222222
0
0.111111
0.888889
0.444444
0.222222
0.333333
0
9
1
0
1
1
1
1
1
0
2.222222
1.969703
0.252519
A Simple yet Effective Method to Prune Dense Layers of Neural Networks
Neural networks are usually over-parameterized, with significant redundancy in the number of required neurons, which results in unnecessary computation and memory usage at inference time. One common approach to address this issue is to prune these big networks by removing extra neurons and parameters while maintaining the accuracy. In this paper, we propose NoiseOut, a fully automated pruning algorithm based on the correlation between activations of neurons in the hidden layers. We prove that adding additional output neurons with entirely random targets results in a higher correlation between neurons, which makes pruning by NoiseOut even more efficient. Finally, we test our method on various networks and datasets. These experiments exhibit high pruning rates while maintaining the accuracy of the original network.
This paper proposes and tests two ideas. (1) a method of pruning networks by identifying highly correlated neuron pairs, pruning one of the pair, and then modifying downstream weights to compensate for the removal (which works well if the removed neurons were highly correlated). (2) a method, dubbed NoiseOut, for increasing neuron correlation by adding auxiliary noise target outputs to the network during training. The first idea (1) is fairly straightforward, and it is not clear if it has been tried before. It does seem to work. The second idea (2) is of unclear value; it seems to this reviewer that it may merely add a regularizing effect. Comments in this direction: - In Fig 4 (right), the constant and Gaussian treatments seem to produce the same effect in both networks, right? And the Binomial effect seems the same as No_Noise. If this is true, can we conclude that the NoiseOut targets are simply serving to regularize the network, that is, to reduce its capacity slightly? - To show whether this effect is true, one would need to compare to other methods of reducing the network capacity, for example: by reducing the number of neurons, by applying L2 regularization of various values, or by applying Dropout of various strengths. Fig 7 makes an attempt at this direction, but critically misses several comparison treatments: “Pruned without any regularization”, “Pruned with only L2”, and “Pruned with only DropOut”. Have these experiments been run? Can their results be included and used to produce plots like Fig 5 and Fig 7? Without these comparisons, it seems impossible to conclude that NoiseOut does anything but provide similar regularization to DropOut or L2. The combined ideas (1) + (2) DO produce a considerable reduction in parameters, but sadly the experiments and exposition are somewhat too lacking to really understand what is going on. With a little more work the paper could be quite interesting, but as is it should probably not be accepted. Additional comments: - Section 4 states: “In all of these experiments, the only stop criteria is the accuracy decay of the model. We set the threshold for this criteria to match the original accuracy; therefore all the compressed network have the same accuracy as the original network.” Is this accuracy the train accuracy or test accuracy? If train, then test accuracy needs to be shown (how much test performance is lost when pruning?). If test, then this would typically be referred to as “cheating” and so the choice needs to be very clearly stated and then defended. - Lowercase rho is used to indicate correlation but this is never actually specified, which is confusing. Just state once that it indicates correlation. - How do these results compare to other pruning methods? No numerical comparison is attempted.
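Idea (1), as the review summarizes it, is concrete enough to sketch: find the most correlated pair of neurons in a dense layer, drop one, and fold a linear fit of it into its partner so that downstream pre-activations are approximately preserved. The sketch below assumes plain NumPy; the names and the least-squares compensation rule are ours, not necessarily the paper's exact procedure.

```python
import numpy as np

def prune_most_correlated(acts, W_out, b_out):
    """acts: (n_samples, n) hidden activations; W_out: (n, n_next); b_out: (n_next,)."""
    rho = np.corrcoef(acts, rowvar=False)
    np.fill_diagonal(rho, 0.0)
    i, j = np.unravel_index(np.argmax(np.abs(rho)), rho.shape)  # closest pair

    # Fit acts[:, j] ~ alpha * acts[:, i] + beta, then absorb neuron j:
    # h_j @ W_out[j]  ~  alpha * h_i @ W_out[j] + beta * W_out[j].
    alpha, beta = np.polyfit(acts[:, i], acts[:, j], deg=1)
    W_new = np.delete(W_out, j, axis=0)
    b_new = b_out + beta * W_out[j]              # constant part of neuron j
    keep = [k for k in range(W_out.shape[0]) if k != j]
    W_new[keep.index(i)] += alpha * W_out[j]     # reroute j's weights via i
    return W_new, b_new, j

# Toy check with one nearly-duplicated neuron.
rng = np.random.default_rng(0)
h = rng.normal(size=(1000, 8))
h[:, 5] = 2.0 * h[:, 3] + 0.1 + 0.01 * rng.normal(size=1000)
W, b = rng.normal(size=(8, 4)), np.zeros(4)
W2, b2, removed = prune_most_correlated(h, W, b)
approx = np.delete(h, removed, axis=1) @ W2 + b2
print("removed neuron", removed, "max pre-activation error:",
      np.abs(h @ W + b - approx).max())
```

The compensation is exact only when the removed neuron is an affine function of its partner, which is why high correlation is the stated precondition.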
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nA Simple yet Effective Method to Prune Dense Layers of Neural Networks\nNeural networks are usually over-parameterized with significant redundancy in the number of required neurons which results in unnecessary computation and memory usage at inference time. One common approach to address this issue is to prune these big networks by removing extra neurons and parameters while maintaining the accuracy. In this paper, we propose NoiseOut, a fully automated pruning algorithm based on the correlation between activations of neurons in the hidden layers. We prove that adding additional output neurons with entirely random targets results into a higher correlation between neurons which makes pruning by NoiseOut even more efficient. Finally, we test our method on various networks and datasets. These experiments exhibit high pruning rates while maintaining the accuracy of the original network.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes and tests two ideas. (1) a method of pruning networks by identifying highly correlated neuron pairs, pruning one of the pair, and then modifying downstream weights to compensate for the removal (which works well if the removed neurons were highly correlated). (2) a method, dubbed NoiseOut, for increasing neuron correlation by adding auxiliary noise target outputs to the network during training. The first idea (1) is fairly straightforward, and it is not clear if it has been tried before. It does seem to work. The second idea (2) is of unclear value and seems to this reviewer that it may merely add a regularizing effect. Comments in this direction: - In Fig 4 (right), the constant and Gaussian treatments seem to produce the same effect in both networks, right? And the Binomial effect seems the same as No_Noise. If this is true, can we conclude that the NoiseOut targets are simply serving to regularize the network, that is, to reduce its capacity slightly? 
- To show whether this effect is true, one would need to compare to other methods of reducing the network capacity, for example: by reducing the number of neurons, by applying L2 regularization of various values, or by applying Dropout of various strengths. Fig 7 makes an attempt at this direction, but critically misses several comparison treatments: “Pruned without any regularization”, “Pruned with only L2”, and “Pruned with only DropOut”. Have these experiments been run? Can their results be included and used to produce plots like Fig 5 and Fig 7? Without these comparisons, it seems impossible to conclude that NoiseOut does anything but provide similar regularization to DropOut or L2. The combined ideas (1) + (2) DO produce a considerable reduction in parameters, but sadly the experiments and exposition are somewhat too lacking to really understand what is going on. With a little more work the paper could be quite interesting, but as is it should probably not be accepted. Additional comments: - Section 4 states: “In all of these experiments, the only stop criteria is the accuracy decay of the model. We set the threshold for this criteria to match the original accuracy; therefore all the compressed network have the same accuracy as the original network.” Is this accuracy the train accuracy or test accuracy? If train, then test accuracy needs to be shown (how much test performance is lost when pruning?). If test, then this would typically be referred to as “cheating” and so the choice needs to be very clearly stated and then defended. - Lowercase rho is used to indicate correlation but this is never actually specified, which is confusing for. Just state once that it indicates correlation. - How do these results compare to other pruning methods? No numerical comparison is attempted.", "role": "assistant" } ]
[ "This paper proposes and tests two ideas.", "(1) a method of pruning networks by identifying highly correlated neuron pairs, pruning one of the pair, and then modifying downstream weights to compensate for the removal (which works well if the removed neurons were highly correlated).", "(2) a method, dubbed NoiseOut, for increasing neuron correlation by adding auxiliary noise target outputs to the network during training.", "The first idea (1) is fairly straightforward, and it is not clear if it has been tried before.", "It does seem to work.", "The second idea (2) is of unclear value and seems to this reviewer that it may merely add a regularizing effect.", "Comments in this direction: - In Fig 4 (right), the constant and Gaussian treatments seem to produce the same effect in both networks, right?", "And the Binomial effect seems the same as No_Noise.", "If this is true, can we conclude that the NoiseOut targets are simply serving to regularize the network, that is, to reduce its capacity slightly?", "- To show whether this effect is true, one would need to compare to other methods of reducing the network capacity, for example: by reducing the number of neurons, by applying L2 regularization of various values, or by applying Dropout of various strengths.", "Fig 7 makes an attempt at this direction, but critically misses several comparison treatments: “Pruned without any regularization”, “Pruned with only L2”, and “Pruned with only DropOut”.", "Have these experiments been run?", "Can their results be included and used to produce plots like Fig 5 and Fig 7?", "Without these comparisons, it seems impossible to conclude that NoiseOut does anything but provide similar regularization to DropOut or L2.", "The combined ideas (1) + (2) DO produce a considerable reduction in parameters, but sadly the experiments and exposition are somewhat too lacking to really understand what is going on.", "With a little more work the paper could be quite interesting, but as is it should probably not be accepted.", "Additional comments: - Section 4 states: “In all of these experiments, the only stop criteria is the accuracy decay of the model.", "We set the threshold for this criteria to match the original accuracy; therefore all the compressed network have the same accuracy as the original network.” Is this accuracy the train accuracy or test accuracy?", "If train, then test accuracy needs to be shown (how much test performance is lost when pruning?).", "If test, then this would typically be referred to as “cheating” and so the choice needs to be very clearly stated and then defended.", "- Lowercase rho is used to indicate correlation but this is never actually specified, which is confusing for.", "Just state once that it indicates correlation.", "- How do these results compare to other pruning methods?", "No numerical comparison is attempted." ]
{ "criticism": 7, "example": 5, "importance_and_relevance": 1, "materials_and_methods": 14, "praise": 1, "presentation_and_reporting": 5, "results_and_discussion": 12, "suggestion_and_solution": 7, "total": 24 }
0.291667
0.208333
0.041667
0.583333
0.041667
0.208333
0.5
0.291667
24
1
1
1
1
1
1
1
1
2.166667
0.256993
1.909674
Emergent Predication Structure in Vector Representations of Neural Readers
Reading comprehension is a question answering task where the answer is to be found in a given passage about entities and events not mentioned in general knowledge sources. A significant number of neural architectures for this task (neural readers) have recently been developed and evaluated on large cloze-style datasets. We present experiments supporting the emergence of “predication structure” in the hidden state vectors of a class of neural readers including the Attentive Reader and Stanford Reader. We posit that the hidden state vectors can be viewed as (a representation of) a concatenation [P, c] of a “predicate vector” P and a “constant symbol vector” c and that the hidden state represents the atomic formula P(c). This predication structure plays a conceptual role in relating “aggregation readers” such as the Attentive Reader and the Stanford Reader to “explicit reference readers” such as the Attention-Sum Reader, the Gated-Attention Reader and the Attention-over-Attention Reader. In an independent contribution, we show that the addition of linguistic features to the input to existing neural readers significantly boosts performance, yielding the best results to date on the Who-did-What dataset.
The paper aims to consolidate some recent literature in simple types of “reading comprehension” tasks involving matching questions to answers to be found in a passage, and then to explore the types of structure learned by these models and propose modifications. These reading comprehension datasets such as CNN/Daily Mail are on the simpler side because they do not generally involve chains of reasoning over multiple pieces of supporting evidence as can be found in datasets like MCTest. Many models have been proposed for this task, and the paper breaks down these models into “aggregation readers” and “explicit reference readers.” The authors show that the aggregation readers organize their hidden states into a predicate structure which allows them to mimic the explicit reference readers. The authors then experiment with adding linguistic features, including reference features, to the existing models to improve performance. I appreciate the re-naming and re-writing of the paper to make it more clear that the aggregation readers are specifically learning a predicate structure, as well as the inclusion of results about dimensionality of the symbol space. Further, I think the effort to organize and categorize several different reading comprehension models into broader classes is useful, as the field has been producing many such models and the landscape is unclear. The concerns with this paper are that the predicate structure demonstrated is fairly simple, and it is not clear that it provides insight towards the development of better models in the future, since the “explicit reference readers” need not learn it, and the CNN/Daily Mail dataset has very little headroom left as demonstrated by Chen et al. 2016. The desire for “dramatic improvements in performance” mentioned in the discussion section probably cannot be achieved on these datasets. More complex datasets would probably involve multi-hop inference which this paper does not discuss. Further, the message of the paper is a bit scattered and hard to parse, and could benefit from a bit more focus. I think that with the explosion of various competing neural network models for NLP tasks, contributions like this one which attempt to organize and analyze the landscape are valuable, but that this paper might be better suited for an NLP conference or journal such as TACL.
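For concreteness, the predication claim the review discusses can be written in two lines. This is our paraphrase of the abstract's [P, c] decomposition, not the paper's exact notation:

```latex
\begin{align}
h_t &\approx [\, p_t \,;\, e(c_t) \,]
  && \text{hidden state $=$ predicate part $p_t$ plus constant-symbol part $e(c_t)$,} \\
\big\langle h_t,\ [\, 0 \,;\, e(c) \,] \big\rangle &= \big\langle e(c_t),\ e(c) \big\rangle
  && \text{so an inner-product readout against entity $c$ fires when $c_t = c$.}
\end{align}
```

With (roughly) orthogonal entity embeddings, that readout behaves like the explicit pointer of a reference reader, which is the mimicry described above.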
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEmergent Predication Structure in Vector Representations of Neural Readers\nReading comprehension is a question answering task where the answer is to be found in a given passage about entities and events not mentioned in general knowledge sources. A significant number of neural architectures for this task (neural readers) have recently been developed and evaluated on large cloze-style datasets. We present experiments supporting the emergence of “predication structure” in the hidden state vectors of a class of neural readers including the Attentive Reader and Stanford Reader. We posits that the hidden state vectors can be viewed as (a representation of) a concatenation [P, c] of a “predicate vector” P and a “constant symbol vector” c and that the hidden state represents the atomic formula P(c). This predication structure plays a conceptual role in relating “aggregation readers” such as the Attentive Reader and the Stanford Reader to “explicit reference readers” such as the Attention-Sum Reader, the Gated-Attention Reader and the Attention-over-Attention Reader. In an independent contribution, we show that the addition of linguistics features to the input to existing neural readers significantly boosts performance yielding the best results to date on the Who-did-What dataset.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper aims to consolidate some recent literature in simple types of *reading comprehension* tasks involving matching questions to answers to be found in a passage, and then to explore the types of structure learned by these models and propose modifications. These reading comprehension datasets such as CNN/Daily Mail are on the simpler side because they do not generally involve chains of reasoning over multiple pieces of supporting evidence as can be found in datasets like MCTest. 
Many models have been proposed for this task, and the paper breaks down these models into *aggregation readers* and *explicit reference readers.* The authors show that the aggregation readers organize their hidden states into a predicate structure which allows them to mimic the explicit reference readers. The authors then experiment with adding linguistic features, including reference features, to the existing models to improve performance. I appreciate the re-naming and re-writing of the paper to make it more clear that the aggregation readers are specifically learning a predicate structure, as well as the inclusion of results about dimensionality of the symbol space. Further, I think the effort to organize and categorize several different reading comprehension models into broader classes is useful, as the field has been producing many such models and the landscape is unclear. The concerns with this paper are that the predicate structure demonstrated is fairly simple, and it is not clear that it provides insight towards the development of better models in the future, since the *explicit reference readers* need not learn it, and the CNN/Daily Mail dataset has very little headroom left as demonstrated by Chen et al. 2016. The desire for *dramatic improvements in performance* mentioned in the discussion section probably cannot be achieved on these datasets. More complex datasets would probably involve multi-hop inference which this paper does not discuss. Further, the message of the paper is a bit scattered and hard to parse, and could benefit from a bit more focus. I think that with the explosion of various competing neural network models for NLP tasks, contributions like this one which attempt to organize and analyze the landscape are valuable, but that this paper might be better suited for an NLP conference or journal such as TACL.", "role": "assistant" } ]
[ "The paper aims to consolidate some recent literature in simple types of *reading comprehension* tasks involving matching questions to answers to be found in a passage, and then to explore the types of structure learned by these models and propose modifications.", "These reading comprehension datasets such as CNN/Daily Mail are on the simpler side because they do not generally involve chains of reasoning over multiple pieces of supporting evidence as can be found in datasets like MCTest.", "Many models have been proposed for this task, and the paper breaks down these models into *aggregation readers* and *explicit reference readers.", "* The authors show that the aggregation readers organize their hidden states into a predicate structure which allows them to mimic the explicit reference readers.", "The authors then experiment with adding linguistic features, including reference features, to the existing models to improve performance.", "I appreciate the re-naming and re-writing of the paper to make it more clear that the aggregation readers are specifically learning a predicate structure, as well as the inclusion of results about dimensionality of the symbol space.", "Further, I think the effort to organize and categorize several different reading comprehension models into broader classes is useful, as the field has been producing many such models and the landscape is unclear.", "The concerns with this paper are that the predicate structure demonstrated is fairly simple, and it is not clear that it provides insight towards the development of better models in the future, since the *explicit reference readers* need not learn it, and the CNN/Daily Mail dataset has very little headroom left as demonstrated by Chen et al.", "2016.", "The desire for *dramatic improvements in performance* mentioned in the discussion section probably cannot be achieved on these datasets.", "More complex datasets would probably involve multi-hop inference which this paper does not discuss.", "Further, the message of the paper is a bit scattered and hard to parse, and could benefit from a bit more focus.", "I think that with the explosion of various competing neural network models for NLP tasks, contributions like this one which attempt to organize and analyze the landscape are valuable, but that this paper might be better suited for an NLP conference or journal such as TACL." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 9, "praise": 3, "presentation_and_reporting": 4, "results_and_discussion": 3, "suggestion_and_solution": 5, "total": 13 }
0.153846
0
0.230769
0.692308
0.230769
0.307692
0.230769
0.384615
13
1
0
1
1
1
1
1
1
2.230769
2.230769
0
Emergent Predication Structure in Vector Representations of Neural Readers
Reading comprehension is a question answering task where the answer is to be found in a given passage about entities and events not mentioned in general knowledge sources. A significant number of neural architectures for this task (neural readers) have recently been developed and evaluated on large cloze-style datasets. We present experiments supporting the emergence of “predication structure” in the hidden state vectors of a class of neural readers including the Attentive Reader and Stanford Reader. We posit that the hidden state vectors can be viewed as (a representation of) a concatenation [P, c] of a “predicate vector” P and a “constant symbol vector” c and that the hidden state represents the atomic formula P(c). This predication structure plays a conceptual role in relating “aggregation readers” such as the Attentive Reader and the Stanford Reader to “explicit reference readers” such as the Attention-Sum Reader, the Gated-Attention Reader and the Attention-over-Attention Reader. In an independent contribution, we show that the addition of linguistic features to the input to existing neural readers significantly boosts performance, yielding the best results to date on the Who-did-What dataset.
This paper aims to provide an insightful and analytic survey over the recent literature on reading comprehension with the distinct goal of investigating whether logical structure (or predication, as the authors rephrased in their response) arises in many of the recent models. I really like the spirit of the paper and appreciate the efforts to organize rather chaotic recent literature into two unified themes: “aggregation readers” and “explicit reference models”. Overall the quality of writing is great and section 3 was especially nice to read. I’m also happy with the proposed rewording from “logical structure” to “predication”, and the clarification by the authors was detailed and helpful. I think I still have slight mixed feelings about the contribution of the work. First, I wonder whether the choice of the dataset was ideal in the first place to accomplish the desired goal of the paper. There have been concerns about CNN/DailyMail dataset (Chen et al. ACL’16) and it is not clear to me whether the dataset supports investigation on logical structure of interesting kinds. Maybe it is bound to be rather about lack of logical structure. Second, I wish the discussion on predication sheds more practical insights into dataset design or model design to better tackle reading comprehension challenges. In that sense, it may have been more helpful if the authors could make more precise analysis on different types of reading comprehension challenges, what types of logical structure are lacking in various existing models and datasets, and point to specific directions where the community needs to focus more.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEmergent Predication Structure in Vector Representations of Neural Readers\nReading comprehension is a question answering task where the answer is to be found in a given passage about entities and events not mentioned in general knowledge sources. A significant number of neural architectures for this task (neural readers) have recently been developed and evaluated on large cloze-style datasets. We present experiments supporting the emergence of “predication structure” in the hidden state vectors of a class of neural readers including the Attentive Reader and Stanford Reader. We posits that the hidden state vectors can be viewed as (a representation of) a concatenation [P, c] of a “predicate vector” P and a “constant symbol vector” c and that the hidden state represents the atomic formula P(c). This predication structure plays a conceptual role in relating “aggregation readers” such as the Attentive Reader and the Stanford Reader to “explicit reference readers” such as the Attention-Sum Reader, the Gated-Attention Reader and the Attention-over-Attention Reader. In an independent contribution, we show that the addition of linguistics features to the input to existing neural readers significantly boosts performance yielding the best results to date on the Who-did-What dataset.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper aims to provide an insightful and analytic survey over the recent literature on reading comprehension with the distinct goal of investigating whether logical structure (or predication, as the authors rephrased in their response) arises in many of the recent models. I really like the spirit of the paper and appreciate the efforts to organize rather chaotic recent literature into two unified themes: *aggregation readers* and *explicit reference models”. Overall the quality of writing is great and section 3 was especially nice to read. 
I’m also happy with the proposed rewording from “logical structure” to “predication”, and the clarification by the authors was detailed and helpful. I think I still have slight mixed feelings about the contribution of the work. First, I wonder whether the choice of the dataset was ideal in the first place to accomplish the desired goal of the paper. There have been concerns about CNN/DailyMail dataset (Chen et al. ACL’16) and it is not clear to me whether the dataset supports investigation on logical structure of interesting kinds. Maybe it is bound to be rather about lack of logical structure. Second, I wish the discussion on predication sheds more practical insights into dataset design or model design to better tackle reading comprehension challenges. In that sense, it may have been more helpful if the authors could make more precise analysis on different types of reading comprehension challenges, what types of logical structure are lacking in various existing models and datasets, and point to specific directions where the community needs to focus more.", "role": "assistant" } ]
[ "This paper aims to provide an insightful and analytic survey over the recent literature on reading comprehension with the distinct goal of investigating whether logical structure (or predication, as the authors rephrased in their response) arises in many of the recent models.", "I really like the spirit of the paper and appreciate the efforts to organize rather chaotic recent literature into two unified themes: *aggregation readers* and *explicit reference models”.", "Overall the quality of writing is great and section 3 was especially nice to read.", "I’m also happy with the proposed rewording from *logical structure* to “predication*, and the clarification by the authors was detailed and helpful.", "I think I still have slight mixed feelings about the contribution of the work.", "First, I wonder whether the choice of the dataset was ideal in the first place to accomplish the desired goal of the paper.", "There have been concerns about CNN/DailyMail dataset (Chen et al.", "ACL’16) and it is not clear to me whether the dataset supports investigation on logical structure of interesting kinds.", "Maybe it is bound to be rather about lack of logical structure.", "Second, I wish the discussion on predication sheds more practical insights into dataset design or model design to better tackle reading comprehension challenges.", "In that sense, it may have been more helpful if the authors could make more precise analysis on different types of reading comprehension challenges, what types of logical structure are lacking in various existing models and datasets, and point to specific directions where the community needs to focus more." ]
{ "criticism": 3, "example": 1, "importance_and_relevance": 5, "materials_and_methods": 7, "praise": 4, "presentation_and_reporting": 3, "results_and_discussion": 1, "suggestion_and_solution": 2, "total": 11 }
0.272727
0.090909
0.454545
0.636364
0.363636
0.272727
0.090909
0.181818
11
1
1
1
1
1
1
1
1
2.363636
2.300507
0.06313
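The numeric columns in each record follow a simple arithmetic pattern that can be verified directly against the counts above: each per-category score is the category count divided by `total`, each `has_*` flag is 1 when the count is nonzero, `reward_value` is the sum of the eight per-category scores, and the length-adjusted reward is `reward_value` minus `length_penalty`. Below is a minimal sketch of that derivation; the dump does not include the actual pipeline code, so the function name `derive_columns` is hypothetical and the formula for `length_penalty` itself is not recoverable from these records.

```python
# Sketch (assumed reconstruction, not the dataset's own code) of how the numeric
# columns relate. Checked against the record above: counts 3,1,5,7,4,3,1,2 with
# total=11 give reward_value = 26/11 = 2.363636.
CATEGORIES = [
    "criticism", "example", "importance_and_relevance", "materials_and_methods",
    "praise", "presentation_and_reporting", "results_and_discussion",
    "suggestion_and_solution",
]

def derive_columns(criteria_count: dict, length_penalty: float) -> dict:
    total = criteria_count["total"]
    # Per-category score: count normalized by the number of review sentences.
    scores = {c: criteria_count[c] / total for c in CATEGORIES}  # e.g. 3/11 = 0.272727
    reward_value = sum(scores.values())
    return {
        **{c: round(s, 6) for c, s in scores.items()},
        **{f"has_{c}": int(criteria_count[c] > 0) for c in CATEGORIES},
        "total": total,
        "reward_value": round(reward_value, 6),
        # length_penalty is taken as given; the stored columns are rounded
        # independently, so the last digit can differ by one.
        "reward_value_length_adjusted": round(reward_value - length_penalty, 6),
    }
```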
Riemannian Optimization for Skip-Gram Negative Sampling
The Skip-Gram Negative Sampling (SGNS) word embedding model, well known by its implementation in *word2vec* software, is usually optimized by stochastic gradient descent. It can be shown that optimizing for the SGNS objective can be viewed as an optimization problem of searching for a good matrix with the low-rank constraint. The most standard way to solve this type of problem is to apply the Riemannian optimization framework to optimize the SGNS objective over the manifold of required low-rank matrices. In this paper, we propose an algorithm that optimizes the SGNS objective using Riemannian optimization and demonstrate its superiority over popular competitors, such as the original method to train SGNS and SVD over the SPPMI matrix.
Dear authors, The authors’ response clarified some of my confusion. But I still have the following question: -- The response said a first contribution is a different formulation: you divide the word embedding learning into two steps, step 1 looks for a low-rank X (by Riemannian optimization), step 2 factorizes X into two matrices (W, C). You are claiming that your model outperforms previous approaches that directly optimize over (W, C). But since the end result (the factors) is the same, can the authors provide some intuition and justification why the proposed method works better? As far as I can see, though parameterized differently, the first step of your method and previous methods (SGD) are both optimizing over low-rank matrices. Admittedly, Riemannian optimization avoids the rotational degree of freedom (the invertible matrix S you are mentioning in sec 2.3), but I am not 100% certain at this point this is the source of your gain; learning curves of objectives would help to see if Riemannian optimization is indeed more effective. -- Another detail I could not easily find is the following. You said a disadvantage of other approaches is that their factors W and C do not directly reflect similarity. Did you try to multiply the factors W and C from other optimizers and then factorize the product using the method in section 2.3, and use the new W for your downstream tasks? I am not sure if this would cause much difference in the performance. Overall, I think it is always interesting to apply advanced optimization techniques to machine learning problems. The current paper would be stronger from the machine learning perspective, if more thorough comparison and discussion (as mentioned above) are provided. On the other hand, my expertise is not in NLP and I leave it to other reviewers to decide the significance in experimental results.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentences or how to make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRiemannian Optimization for Skip-Gram Negative Sampling\nSkip-Gram Negative Sampling (SGNS) word embedding model, well known by its implementation in *word2vec* software, is usually optimized by stochastic gradient descent. It can be shown that optimizing for SGNS objective can be viewed as an optimization problem of searching for a good matrix with the low-rank constraint. The most standard way to solve this type of problems is to apply Riemannian optimization framework to optimize the SGNS objective over the manifold of required low-rank matrices. In this paper, we propose an algorithm that optimizes SGNS objective using Riemannian optimization and demonstrates its superiority over popular competitors, such as the original method to train SGNS and SVD over SPPMI matrix.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nDear authors, The authors* response clarified some of my confusion. But I still have the following question: -- The response said a first contribution is a different formulation: you divide the word embedding learning into two steps, step 1 looks for a low-rank X (by Riemannian optimization), step 2 factorizes X into two matrices (W, C). You are claiming that your model outperforms previous approaches that directly optimizes over (W, C). But since the end result (the factors) is the same, can the authors provide some intuition and justification why the proposed method works better? As far as I can see, though parameterized differently, the first step of your method and previous methods (SGD) are both optimizing over low-rank matrices. Admittedly, Riemannian optimization avoids the rotational degree of freedom (the invertible matrix S you are mentioning in sec 2.3), but I am not 100% certain at this point this is the source of your gain; learning curves of objectives would help to see if Riemannian optimization is indeed more effective. -- Another detail I could not easily find is the following. 
You said a disadvantage of other approaches is that their factors W and C do not directly reflect similarity. Did you try to multiply the factors W and C from other optimizers and then factorize the product using the method in section 2.3, and use the new W for your downstream tasks? I am not sure if this would cause much difference in the performance. Overall, I think it is always interesting to apply advanced optimization techniques to machine learning problems. The current paper would be stronger from the machine learning perspective, if more thorough comparison and discussion (as mentioned above) are provided. On the other hand, my expertise is not in NLP and I leave it to other reviewers to decide the significance in experimental results.", "role": "assistant" } ]
[ "Dear authors, The authors* response clarified some of my confusion.", "But I still have the following question: -- The response said a first contribution is a different formulation: you divide the word embedding learning into two steps, step 1 looks for a low-rank X (by Riemannian optimization), step 2 factorizes X into two matrices (W, C).", "You are claiming that your model outperforms previous approaches that directly optimizes over (W, C).", "But since the end result (the factors) is the same, can the authors provide some intuition and justification why the proposed method works better?", "As far as I can see, though parameterized differently, the first step of your method and previous methods (SGD) are both optimizing over low-rank matrices.", "Admittedly, Riemannian optimization avoids the rotational degree of freedom (the invertible matrix S you are mentioning in sec 2.3), but I am not 100% certain at this point this is the source of your gain; learning curves of objectives would help to see if Riemannian optimization is indeed more effective.", "-- Another detail I could not easily find is the following.", "You said a disadvantage of other approaches is that their factors W and C do not directly reflect similarity.", "Did you try to multiply the factors W and C from other optimizers and then factorize the product using the method in section 2.3, and use the new W for your downstream tasks?", "I am not sure if this would cause much difference in the performance.", "Overall, I think it is always interesting to apply advanced optimization techniques to machine learning problems.", "The current paper would be stronger from the machine learning perspective, if more thorough comparison and discussion (as mentioned above) are provided.", "On the other hand, my expertise is not in NLP and I leave it to other reviewers to decide the significance in experimental results." ]
{ "criticism": 5, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 9, "praise": 1, "presentation_and_reporting": 2, "results_and_discussion": 7, "suggestion_and_solution": 2, "total": 13 }
0.384615
0
0.153846
0.692308
0.076923
0.153846
0.538462
0.153846
13
1
0
1
1
1
1
1
1
2.153846
2.153846
0
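As a worked check of the same relationship on the 13-sentence record above (an illustration of the assumed formula, not something the dump itself states):

$$\text{reward\_value} = \frac{5 + 0 + 2 + 9 + 1 + 2 + 7 + 2}{13} = \frac{28}{13} \approx 2.153846,$$

and since this record's length penalty is 0, the length-adjusted reward is identical.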
Riemannian Optimization for Skip-Gram Negative Sampling
The Skip-Gram Negative Sampling (SGNS) word embedding model, well known by its implementation in *word2vec* software, is usually optimized by stochastic gradient descent. It can be shown that optimizing for the SGNS objective can be viewed as an optimization problem of searching for a good matrix with the low-rank constraint. The most standard way to solve this type of problem is to apply the Riemannian optimization framework to optimize the SGNS objective over the manifold of required low-rank matrices. In this paper, we propose an algorithm that optimizes the SGNS objective using Riemannian optimization and demonstrate its superiority over popular competitors, such as the original method to train SGNS and SVD over the SPPMI matrix.
This paper presents a principled optimization method for SGNS (word2vec). While the proposed method is elegant from a theoretical perspective, I am not sure what the tangible benefits of this approach are. For example, does using Riemannian optimization allow the model to converge faster than the alternatives? The evaluation doesn’t show a dramatic advantage to RO-SGNS; the 1% difference on the word similarity benchmarks is within the range of hyperparameter effects (see *Improving Distributional Similarity with Lessons Learned from Word Embeddings*, (Levy et al., 2015)). The theoretical connection to Riemannian optimization is nice though, and it might be useful for understanding related methods in the future.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentences or how to make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRiemannian Optimization for Skip-Gram Negative Sampling\nSkip-Gram Negative Sampling (SGNS) word embedding model, well known by its implementation in *word2vec* software, is usually optimized by stochastic gradient descent. It can be shown that optimizing for SGNS objective can be viewed as an optimization problem of searching for a good matrix with the low-rank constraint. The most standard way to solve this type of problems is to apply Riemannian optimization framework to optimize the SGNS objective over the manifold of required low-rank matrices. In this paper, we propose an algorithm that optimizes SGNS objective using Riemannian optimization and demonstrates its superiority over popular competitors, such as the original method to train SGNS and SVD over SPPMI matrix.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents a principled optimization method for SGNS (word2vec). While the proposed method is elegant from a theoretical perspective, I am not sure what the tangible benefits of this approach are. For example, does using Riemannian optimization allow the model to converge faster than the alternatives? The evaluation doesn*t show a dramatic advantage to RO-SGNS; the 1% difference on the word similarity benchmarks is within the range of hyperparameter effects (see *Improving Distributional Similarity with Lessons Learned from Word Embeddings*, (Levy et al., 2015)). The theoretical connection to Riemannian optimization is nice though, and it might be useful for understanding related methods in the future.", "role": "assistant" } ]
[ "This paper presents a principled optimization method for SGNS (word2vec).", "While the proposed method is elegant from a theoretical perspective, I am not sure what the tangible benefits of this approach are.", "For example, does using Riemannian optimization allow the model to converge faster than the alternatives?", "The evaluation doesn*t show a dramatic advantage to RO-SGNS; the 1% difference on the word similarity benchmarks is within the range of hyperparameter effects (see *Improving Distributional Similarity with Lessons Learned from Word Embeddings*, (Levy et al., 2015)).", "The theoretical connection to Riemannian optimization is nice though, and it might be useful for understanding related methods in the future." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 5, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 1, "suggestion_and_solution": 1, "total": 5 }
0.4
0
0.4
1
0.4
0.2
0.2
0.2
5
1
0
1
1
1
1
1
1
2.8
1.789924
1.010076
RL^2: Fast Reinforcement Learning via Slow Reinforcement Learning
Deep reinforcement learning (deep RL) has been successful in learning sophisticated behaviors automatically; however, the learning process requires a huge number of trials. In contrast, animals can learn new tasks in just a few trials, benefiting from their prior knowledge about the world. This paper seeks to bridge this gap. Rather than designing a “fast” reinforcement learning algorithm, we propose to represent it as a recurrent neural network (RNN) and learn it from data. In our proposed method, RL^2, the algorithm is encoded in the weights of the RNN, which are learned slowly through a general-purpose (“slow”) RL algorithm. The RNN receives all information a typical RL algorithm would receive, including observations, actions, rewards, and termination flags; and it retains its state across episodes in a given Markov Decision Process (MDP). The activations of the RNN store the state of the “fast” RL algorithm on the current (previously unseen) MDP. We evaluate RL^2 experimentally on both small-scale and large-scale problems. On the small-scale side, we train it to solve randomly generated multi-arm bandit problems and finite MDPs. After RL^2 is trained, its performance on new MDPs is close to human-designed algorithms with optimality guarantees. On the large-scale side, we test RL^2 on a vision-based navigation task and show that it scales up to high-dimensional problems.
The authors try to address the issue of data efficiency in deep reinforcement learning by meta-learning a reinforcement learning algorithm using a hand-designed reinforcement learning algorithm (TRPO in this case). The experiments suggest comparable performance to models with prior knowledge of the distribution over environments for bandit tasks, and experiments on random maze navigation from vision are shown as well, though the random maze experiments would benefit from a clearer explanation. It was not obvious from the text how their experiments supported the thesis of the paper that the learned RL algorithm was effectively performing one-shot learning. The subject of the paper is also strikingly similar to the recently-posted paper Learning to Reinforcement Learn (https://arxiv.org/pdf/1611.05763.pdf), and while this paper was posted after the ICLR deadline, the authors should probably update the text to reflect the state of this rapidly-advancing field.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentences or how to make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRL^2: Fast Reinforcement Learning via Slow Reinforcement Learning\nDeep reinforcement learning (deep RL) has been successful in learning sophisticated behaviors automatically; however, the learning process requires a huge number of trials. In contrast, animals can learn new tasks in just a few trials, benefiting from their prior knowledge about the world. This paper seeks to bridge this gap. Rather than designing a “fast” reinforcement learning algorithm, we propose to represent it as a recurrent neural network (RNN) and learn it from data. In our proposed method, RL^2, the algorithm is encoded in the weights of the RNN, which are learned slowly through a general-purpose (“slow”) RL algorithm. The RNN receives all information a typical RL algorithm would receive, including observations, actions, rewards, and termination flags; and it retains its state across episodes in a given Markov Decision Process (MDP). The activations of the RNN store the state of the “fast” RL algorithm on the current (previously unseen) MDP. We evaluate RL^2 experimentally on both small-scale and large-scale problems. On the small-scale side, we train it to solve randomly generated multi-arm bandit problems and finite MDPs. After RL^2 is trained, its performance on new MDPs is close to human-designed algorithms with optimality guarantees. On the large-scale side, we test RL^2 on a vision-based navigation task and show that it scales up to high-dimensional problems.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors try to address the issue of data efficiency in deep reinforcement learning by meta-learning a reinforcement learning algorithm using a hand-designed reinforcement learning algorithm (TRPO in this case). The experiments suggest comparable performance to models with prior knowledge of the distribution over environments for bandit tasks, and experiments on random maze navigation from vision is shown as well, though the random maze experiments would benefit from a clearer explanation. 
It was not obvious from the text how their experiments supported the thesis of the paper that the learned RL algorithm was effectively performing one-shot learning. The subject of the paper is also strikingly similar to the recently-posted paper Learning to Reinforcement Learn (https://arxiv.org/pdf/1611.05763.pdf), and while this paper was posted after the ICLR deadline, the authors should probably update the text to reflect the state of this rapidly-advancing field.", "role": "assistant" } ]
[ "The authors try to address the issue of data efficiency in deep reinforcement learning by meta-learning a reinforcement learning algorithm using a hand-designed reinforcement learning algorithm (TRPO in this case).", "The experiments suggest comparable performance to models with prior knowledge of the distribution over environments for bandit tasks, and experiments on random maze navigation from vision is shown as well, though the random maze experiments would benefit from a clearer explanation.", "It was not obvious from the text how their experiments supported the thesis of the paper that the learned RL algorithm was effectively performing one-shot learning.", "The subject of the paper is also strikingly similar to the recently-posted paper Learning to Reinforcement Learn (https://arxiv.org/pdf/1611.05763.pdf), and while this paper was posted after the ICLR deadline, the authors should probably update the text to reflect the state of this rapidly-advancing field." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 3, "praise": 1, "presentation_and_reporting": 1, "results_and_discussion": 1, "suggestion_and_solution": 2, "total": 4 }
0.25
0
0
0.75
0.25
0.25
0.25
0.5
4
1
0
0
1
1
1
1
1
2.25
0.971623
1.278377
Beyond Fine Tuning: A Modular Approach to Learning on Small Data
In this paper we present a technique to train neural network models on small amounts of data. Current methods for training neural networks on small amounts of rich data typically rely on strategies such as fine-tuning a pre-trained neural network or the use of domain-specific hand-engineered features. Here we take the approach of treating network layers, or entire networks, as modules and combine pre-trained modules with untrained modules, to learn the shift in distributions between data sets. The central impact of using a modular approach comes from adding new representations to a network, as opposed to replacing representations via fine-tuning. Using this technique, we are able to surpass results using standard fine-tuning transfer learning approaches, and we are also able to significantly increase performance over such approaches when using smaller amounts of data.
This paper proposes a method of augmenting pre-trained networks for one task with an additional inference path specific to an additional task, as a replacement for the standard “fine-tuning” approach. Pros: -The method is simple and clearly explained. -Standard fine-tuning is used widely, so improvements to and analysis of it should be of general interest. -Experiments are performed in multiple domains -- vision and NLP. Cons: -The additional modules incur a rather large cost, resulting in 2x the parameters and roughly 3x the computation of the original network (for the “stitched” network). These costs are not addressed in the paper text, and make the method significantly less practical for real-world use where performance is very often important. -Given these large additional costs, the core of the idea is not sufficiently validated, to me. In order to verify that the improved performance is actually coming from some unique aspects of the proposed technique, rather than simply the fact that a higher-capacity network is being used, some additional baselines are needed: (1) Allowing the original network weights to be learned for the target task, as well as the additional module. Outperforming this baseline on the validation set would verify that freezing the original weights provides an interesting form of regularization for the network. (2) Training the full module/stitched network from scratch on the *source* task, then fine-tuning it for the target task. Outperforming this baseline would verify that having a set of weights which never “sees” the source dataset is useful. -The method is not evaluated on ImageNet, which is far and away the most common domain in which pre-trained networks are used and fine-tuned for other tasks. I’ve never seen networks pre-trained on CIFAR deployed anywhere, and it’s hard to know whether the method will be practically useful for computer vision applications based on CIFAR results -- often improved performance on CIFAR does not translate to ImageNet. (In other contexts, such as more theoretical contributions, having results only on small datasets is acceptable to me, but network fine-tuning is far enough on the “practical” end of the spectrum that claiming an improvement to it should necessitate an ImageNet evaluation.) Overall I think the proposed idea is interesting and potentially promising, but in its current form is not sufficiently evaluated to convince me that the performance boosts don’t simply come from the use of a larger network, and the lack of ImageNet evaluation calls into question its real-world application. =============== Edit (1/23/17): I had indeed missed the fact that the Stanford Cars does do transfer learning from ImageNet -- thanks for the correction. However, the experiment in this case is only showing late fusion ensembling, which is a conventional approach compared with the *stitched network* idea which is the real novelty of the paper. Furthermore the results in this case are particularly weak, showing only that an ensemble of ResNet+VGG outperforms VGG alone, which is completely expected given that ResNet alone is a stronger base network than VGG (*ResNet+VGG > ResNet* would be a stronger result, but still not surprising). Demonstrating the stitched network idea on ImageNet, comparing with the corresponding VGG-only or ResNet-only finetuning, could be enough to push this paper over the bar for me, but the current version of the experiments here don’t sufficiently validate the stitched network idea, in my opinion.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentences or how to make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nBeyond Fine Tuning: A Modular Approach to Learning on Small Data\nIn this paper we present a technique to train neural network models on small amounts of data. Current methods for training neural networks on small amounts of rich data typically rely on strategies such as fine-tuning a pre-trained neural network or the use of domain-specific hand-engineered features. Here we take the approach of treating network layers, or entire networks, as modules and combine pre-trained modules with untrained modules, to learn the shift in distributions between data sets. The central impact of using a modular approach comes from adding new representations to a network, as opposed to replacing representations via fine-tuning. Using this technique, we are able surpass results using standard fine-tuning transfer learning approaches, and we are also able to significantly increase performance over such approaches when using smaller amounts of data.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a method of augmenting pre-trained networks for one task with an additional inference path specific to an additional task, as a replacement for the standard “fine-tuning” approach. Pros: -The method is simple and clearly explained. -Standard fine-tuning is used widely, so improvements to and analysis of it should be of general interest. -Experiments are performed in multiple domains -- vision and NLP. Cons: -The additional modules incur a rather large cost, resulting in 2x the parameters and roughly 3x the computation of the original network (for the “stiched” network). These costs are not addressed in the paper text, and make the method significantly less practical for real-world use where performance is very often important. -Given these large additional costs, the core of the idea is not sufficiently validated, to me. 
In order to verify that the improved performance is actually coming from some unique aspects of the proposed technique, rather than simply the fact that a higher-capacity network is being used, some additional baselines are needed: (1) Allowing the original network weights to be learned for the target task, as well as the additional module. Outperforming this baseline on the validation set would verify that freezing the original weights provides an interesting form of regularization for the network. (2) Training the full module/stitched network from scratch on the *source* task, then fine-tuning it for the target task. Outperforming this baseline would verify that having a set of weights which never “sees” the source dataset is useful. -The method is not evaluated on ImageNet, which is far and away the most common domain in which pre-trained networks are used and fine-tuned for other tasks. I’ve never seen networks pre-trained on CIFAR deployed anywhere, and it’s hard to know whether the method will be practically useful for computer vision applications based on CIFAR results -- often improved performance on CIFAR does not translate to ImageNet. (In other contexts, such as more theoretical contributions, having results only on small datasets is acceptable to me, but network fine-tuning is far enough on the “practical” end of the spectrum that claiming an improvement to it should necessitate an ImageNet evaluation.) Overall I think the proposed idea is interesting and potentially promising, but in its current form is not sufficiently evaluated to convince me that the performance boosts don’t simply come from the use of a larger network, and the lack of ImageNet evaluation calls into question its real-world application. =============== Edit (1/23/17): I had indeed missed the fact that the Stanford Cars does do transfer learning from ImageNet -- thanks for the correction. However, the experiment in this case is only showing late fusion ensembling, which is a conventional approach compared with the *stitched network* idea which is the real novelty of the paper. Furthermore the results in this case are particularly weak, showing only that an ensemble of ResNet+VGG outperforms VGG alone, which is completely expected given that ResNet alone is a stronger base network than VGG (*ResNet+VGG > ResNet* would be a stronger result, but still not surprising). Demonstrating the stitched network idea on ImageNet, comparing with the corresponding VGG-only or ResNet-only finetuning, could be enough to push this paper over the bar for me, but the current version of the experiments here don’t sufficiently validate the stitched network idea, in my opinion.", "role": "assistant" } ]
[ "This paper proposes a method of augmenting pre-trained networks for one task with an additional inference path specific to an additional task, as a replacement for the standard “fine-tuning” approach.", "Pros: -The method is simple and clearly explained.", "-Standard fine-tuning is used widely, so improvements to and analysis of it should be of general interest.", "-Experiments are performed in multiple domains -- vision and NLP.", "Cons: -The additional modules incur a rather large cost, resulting in 2x the parameters and roughly 3x the computation of the original network (for the “stiched” network).", "These costs are not addressed in the paper text, and make the method significantly less practical for real-world use where performance is very often important.", "-Given these large additional costs, the core of the idea is not sufficiently validated, to me.", "In order to verify that the improved performance is actually coming from some unique aspects of the proposed technique, rather than simply the fact that a higher-capacity network is being used, some additional baselines are needed: (1) Allowing the original network weights to be learned for the target task, as well as the additional module.", "Outperforming this baseline on the validation set would verify that freezing the original weights provides an interesting form of regularization for the network.", "(2) Training the full module/stitched network from scratch on the *source* task, then fine-tuning it for the target task.", "Outperforming this baseline would verify that having a set of weights which never “sees” the source dataset is useful.", "-The method is not evaluated on ImageNet, which is far and away the most common domain in which pre-trained networks are used and fine-tuned for other tasks.", "I’ve never seen networks pre-trained on CIFAR deployed anywhere, and it’s hard to know whether the method will be practically useful for computer vision applications based on CIFAR results -- often improved performance on CIFAR does not translate to ImageNet.", "(In other contexts, such as more theoretical contributions, having results only on small datasets is acceptable to me, but network fine-tuning is far enough on the “practical” end of the spectrum that claiming an improvement to it should necessitate an ImageNet evaluation.)", "Overall I think the proposed idea is interesting and potentially promising, but in its current form is not sufficiently evaluated to convince me that the performance boosts don’t simply come from the use of a larger network, and the lack of ImageNet evaluation calls into question its real-world application.", "=============== Edit (1/23/17): I had indeed missed the fact that the Stanford Cars does do transfer learning from ImageNet -- thanks for the correction.", "However, the experiment in this case is only showing late fusion ensembling, which is a conventional approach compared with the *stitched network* idea which is the real novelty of the paper.", "Furthermore the results in this case are particularly weak, showing only that an ensemble of ResNet+VGG outperforms VGG alone, which is completely expected given that ResNet alone is a stronger base network than VGG (*ResNet+VGG > ResNet* would be a stronger result, but still not surprising).", "Demonstrating the stitched network idea on ImageNet, comparing with the corresponding VGG-only or ResNet-only finetuning, could be enough to push this paper over the bar for me, but the current version of the experiments here don*t sufficiently validate 
the stitched network idea, in my opinion." ]
{ "criticism": 6, "example": 0, "importance_and_relevance": 6, "materials_and_methods": 17, "praise": 4, "presentation_and_reporting": 2, "results_and_discussion": 4, "suggestion_and_solution": 5, "total": 19 }
0.315789
0
0.315789
0.894737
0.210526
0.105263
0.210526
0.263158
19
1
0
1
1
1
1
1
1
2.315789
1.747622
0.568167
Beyond Fine Tuning: A Modular Approach to Learning on Small Data
In this paper we present a technique to train neural network models on small amounts of data. Current methods for training neural networks on small amounts of rich data typically rely on strategies such as fine-tuning a pre-trained neural network or the use of domain-specific hand-engineered features. Here we take the approach of treating network layers, or entire networks, as modules and combine pre-trained modules with untrained modules, to learn the shift in distributions between data sets. The central impact of using a modular approach comes from adding new representations to a network, as opposed to replacing representations via fine-tuning. Using this technique, we are able to surpass results using standard fine-tuning transfer learning approaches, and we are also able to significantly increase performance over such approaches when using smaller amounts of data.
This paper proposed to perform finetuning in an augmentation fashion by freezing the original network and adding a new model alongside it. The idea itself is interesting and complements existing training and finetuning approaches, although I think there are a few baseline approaches that can be compared against, such as: (1) Ensemble: in principle, the idea is similar to an ensembling approach where multiple networks are ensembled together to get a final prediction. The approach in Figure 1 should be compared with such ensemble baselines - taking multiple source domain predictors, possibly with the same modular setting as the proposed method, and compare the performance. (2) comparison with late fusion: what if we combine the pretrained network and a network finetuned from the pretrained one, and do a late fusion? Basically, I think it is a valuable argument in section 3.2 (and Figure 4) that finetuning with a small amount of data may hurt the performance in general. This builds the ground for freezing a pretrained network and only augmenting it, not changing it. I agree with the authors on this argument, although currently other than Figure 4 there seems to be little empirical study that justifies it. It is worth noting that Figure 3 seems to suggest that some of the module filters are either not converging or are learning unuseful features - like the first two filters in 3(a). Overall I think it is an interesting idea and I would love to see it better developed, thus I am giving a weak accept recommendation, but with a low confidence as the experiments section is not very convincing.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentences or how to make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nBeyond Fine Tuning: A Modular Approach to Learning on Small Data\nIn this paper we present a technique to train neural network models on small amounts of data. Current methods for training neural networks on small amounts of rich data typically rely on strategies such as fine-tuning a pre-trained neural network or the use of domain-specific hand-engineered features. Here we take the approach of treating network layers, or entire networks, as modules and combine pre-trained modules with untrained modules, to learn the shift in distributions between data sets. The central impact of using a modular approach comes from adding new representations to a network, as opposed to replacing representations via fine-tuning. Using this technique, we are able surpass results using standard fine-tuning transfer learning approaches, and we are also able to significantly increase performance over such approaches when using smaller amounts of data.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposed to perform finetuning in an augmentation fashion by freezing the original network and adding a new model aside it. The idea itself is interesting and complements existing training and finetuning approaches, although I think there are a few baseline approaches that can be compared against, such as: (1) Ensemble: in principle, the idea is similar to an ensembling approach where multiple networks are ensembled together to get a final prediction. The approach in Figure 1 should be compared with such ensemble baselines - taking multiple source domain predictors, possibly with the same modular setting as the proposed method, and compare the performance. (2) comparison with late fusion: if we combine the pretrained network and a network finetuned from the pretrained one, and do a late fusion? Basically, I think it is a valuable argument in section 3.2 (and Figure 4) that finetuning with a small amount of data may hurt the performance in general. 
This builds the ground for freezing a pretrained network and only augmenting it, not changing it. I agree with the authors on this argument, although currently other than Figure 4 there seems to be little empirical study that justifies it. It is worth noting that Figure 3 seems to suggest that some of the module filters are either not converging or are learning unuseful features - like the first two filters in 3(a). Overall I think it is an interesting idea and I would love to see it better developed, thus I am giving a weak accept recommendation, but with a low confidence as the experiments section is not very convincing.", "role": "assistant" } ]
[ "This paper proposed to perform finetuning in an augmentation fashion by freezing the original network and adding a new model aside it.", "The idea itself is interesting and complements existing training and finetuning approaches, although I think there are a few baseline approaches that can be compared against, such as: (1) Ensemble: in principle, the idea is similar to an ensembling approach where multiple networks are ensembled together to get a final prediction.", "The approach in Figure 1 should be compared with such ensemble baselines - taking multiple source domain predictors, possibly with the same modular setting as the proposed method, and compare the performance.", "(2) comparison with late fusion: if we combine the pretrained network and a network finetuned from the pretrained one, and do a late fusion?", "Basically, I think it is a valuable argument in section 3.2 (and Figure 4) that finetuning with a small amount of data may hurt the performance in general.", "This builds the ground for freezing a pretrained network and only augmenting it, not changing it.", "I agree with the authors on this argument, although currently other than Figure 4 there seem to be little empirical study that justifies it.", "It is worth noting that Figure 3 seems to suggest that some of the module filters are either not converging or are learning unuseful features - like the first two filters in 3(a).", "Overall I think it is an interesting idea and I would love to see it better developed, thus I am giving a weak accept recommendation, but with a low confidence as the experiments section is not very convincing." ]
{ "criticism": 3, "example": 2, "importance_and_relevance": 4, "materials_and_methods": 7, "praise": 3, "presentation_and_reporting": 0, "results_and_discussion": 5, "suggestion_and_solution": 2, "total": 9 }
0.333333
0.222222
0.444444
0.777778
0.333333
0
0.555556
0.222222
9
1
1
1
1
1
0
1
1
2.888889
2.63637
0.252519
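Each record stores the same review three times (the raw review field, the assistant turn inside the conversations column, and the review_sentences list), which invites a quick consistency check. Below is a small illustrative sketch, assuming a row has already been parsed into a dict keyed by the column names shown in these records; the function name `check_record` is hypothetical and not part of the original dataset tooling.

```python
def check_record(record: dict) -> list:
    """Sanity-check one parsed row of this dump; returns problem descriptions."""
    problems = []
    sentences = record["review_sentences"]
    # `total` in criteria_count should equal the number of annotated sentences.
    if record["criteria_count"]["total"] != len(sentences):
        problems.append("criteria_count['total'] != len(review_sentences)")
    # Every annotated sentence should appear verbatim in the raw review text.
    for s in sentences:
        if s not in record["review"]:
            problems.append("sentence missing from review: %r" % s[:60])
    return problems
```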
Learning Word-Like Units from Joint Audio-Visual Analysis
Given a collection of images and spoken audio captions, we present a method for discovering word-like acoustic units in the continuous speech signal and grounding them to semantically relevant image regions. For example, our model is able to detect spoken instances of the words “lighthouse” within an utterance and associate them with image regions containing lighthouses. We do not use any form of conventional automatic speech recognition, nor do we use any text transcriptions or conventional linguistic annotations. Our model effectively implements a form of spoken language acquisition, in which the computer learns not only to recognize word categories by sound, but also to enrich the words it learns with semantics by grounding them in images.
This paper is a follow-up on the NIPS 2016 paper *Unsupervised learning of spoken language with visual context*, and does exactly what that paper proposes in its future work section: *to perform acoustic segmentation and clustering, effectively learning a lexicon of word-like units* using the embeddings that their system learns. The analysis is very interesting and I really like where the authors are going with this. My main concern is novelty. It feels like this work is a rather trivial follow-up on an existing model, which is fine, but then the analysis should be more satisfying: currently, it feels like the authors are just illustrating some of the things that the NIPS model (with some minor improvements) learns. For a more interesting analysis, I would have liked things like a comparison of different segmentation approaches (both in audio and in images), i.e., suppose we have access to the perfect segmentation in both modalities, what happens? It would also be interesting to look at what is learned with the grounded representation, and evaluate e.g. on multi-modal semantics tasks. Apart from that, the paper is well written and I really like this research direction. It is very important to analyze what models learn, and this is a good example of the types of questions one should ask. I am afraid, however, that the model is not novel enough, nor the questions deep enough, to make this paper better than borderline for ICLR.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentences or how to make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning Word-Like Units from Joint Audio-Visual Analysis\nGiven a collection of images and spoken audio captions, we present a method for discovering word-like acoustic units in the continuous speech signal and grounding them to semantically relevant image regions. For example, our model is able to detect spoken instances of the word ``lighthouse'' within an utterance and associate them with image regions containing lighthouses. We do not use any form of conventional automatic speech recognition, nor do we use any text transcriptions or conventional linguistic annotations. Our model effectively implements a form of spoken language acquisition, in which the computer learns not only to recognize word categories by sound, but also to enrich the words it learns with semantics by grounding them in images.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper is a follow-up on the NIPS 2016 paper *Unsupervised learning of spoken language with visual context*, and does exactly what that paper proposes in its future work section: *to perform acoustic segmentation and clustering, effectively learning a lexicon of word-like units* using the embeddings that their system learns. The analysis is very interesting and I really like where the authors are going with this. My main concern is novelty. It feels like this work is a rather trivial follow-up on an existing model, which is fine, but then the analysis should be more satisfying: currently, it feels like the authors are just illustrating some of the things that the NIPS model (with some minor improvements) learns. For a more interesting analysis, I would have liked things like a comparison of different segmentation approaches (both in audio and in images), i.e., suppose we have access to the perfect segmentation in both modalities, what happens? It would also be interesting to look at what is learned with the grounded representation, and evaluate e.g. on multi-modal semantics tasks. 
Apart from that, the paper is well written and I really like this research direction. It is very important to analyze what models learn, and this is a good example of the types of questions one should ask. I am afraid, however, that the model is not novel enough, nor the questions deep enough, to make this paper better than borderline for ICLR.", "role": "assistant" } ]
[ "This paper is a follow-up on the NIPS 2016 paper *Unsupervised learning of spoken language with visual context*, and does exactly what that paper proposes in its future work section: *to perform acoustic segmentation and clustering, effectively learning a lexicon of word-like units* using the embeddings that their system learns.", "The analysis is very interesting and I really like where the authors are going with this.", "My main concern is novelty.", "It feels like this work is a rather trivial follow-up on an existing model, which is fine, but then the analysis should be more satisfying: currently, it feels like the authors are just illustrating some of the things that the NIPS model (with some minor improvements) learns.", "For a more interesting analysis, I would have liked things like a comparison of different segmentation approaches (both in audio and in images), i.e., suppose we have access to the perfect segmentation in both modalities, what happens?", "It would also be interesting to look at what is learned with the grounded representation, and evaluate e.g.", "on multi-modal semantics tasks.", "Apart from that, the paper is well written and I really like this research direction.", "It is very important to analyze what models learn, and this is a good example of the types of questions one should ask.", "I am afraid, however, that the model is not novel enough, nor the questions deep enough, to make this paper better than borderline for ICLR." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 5, "materials_and_methods": 6, "praise": 4, "presentation_and_reporting": 2, "results_and_discussion": 0, "suggestion_and_solution": 4, "total": 10 }
0.2
0
0.5
0.6
0.4
0.2
0
0.4
10
1
0
1
1
1
1
0
1
2.3
2.157958
0.142042
Learning Word-Like Units from Joint Audio-Visual Analysis
Given a collection of images and spoken audio captions, we present a method for discovering word-like acoustic units in the continuous speech signal and grounding them to semantically relevant image regions. For example, our model is able to detect spoken instances of the word ``lighthouse'' within an utterance and associate them with image regions containing lighthouses. We do not use any form of conventional automatic speech recognition, nor do we use any text transcriptions or conventional linguistic annotations. Our model effectively implements a form of spoken language acquisition, in which the computer learns not only to recognize word categories by sound, but also to enrich the words it learns with semantics by grounding them in images.
This work proposes a joint classification of images and audio captions for the task of discovering word-like acoustic units that correlate with semantically relevant visual objects. In general, this is a very interesting direction of research, as it allows for a richer representation of data: regularizing the visual signal with audio and vice versa. This allows for training of visual models from video, etc. A major concern is the amount of novelty between this work and the authors' previous publication at NIPS 2016. The authors claim a more sophisticated architecture and indeed show an improvement in recall. However, the improvements are marginal, and the added complexity of the architecture is a bit ad hoc. The clustering and grouping in Section 4 is hacky. Instead of gridding the image, the authors could use an object detector (SSD, YOLO, Faster R-CNN, etc.) to estimate accurate object proposals; rather than using k-means, a spectral clustering approach would alleviate the Gaussian assumption on the distributions. In assigning visual hypotheses to acoustic segments, some form of bipartite matching should be used. Overall, I really like this direction of research, and encourage the authors to continue developing algorithms that can train from such multimodal datasets. However, the work isn't quite novel enough relative to NIPS 2016.
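The review's suggested alternatives (spectral clustering instead of k-means, bipartite matching between modalities) can be sketched in a few lines. This is a toy illustration under assumed random data, not the paper's pipeline:

```python
# Sketch of the alternatives the review suggests, on hypothetical embeddings:
# spectral clustering in each modality, then Hungarian (bipartite) matching
# between the visual and acoustic cluster centroids.
import numpy as np
from sklearn.cluster import SpectralClustering
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
visual = rng.normal(size=(200, 64))    # hypothetical region embeddings
acoustic = rng.normal(size=(180, 64))  # hypothetical segment embeddings

k = 10
vis_labels = SpectralClustering(n_clusters=k, affinity='nearest_neighbors',
                                random_state=0).fit_predict(visual)
ac_labels = SpectralClustering(n_clusters=k, affinity='nearest_neighbors',
                               random_state=0).fit_predict(acoustic)

vis_centroids = np.stack([visual[vis_labels == i].mean(0) for i in range(k)])
ac_centroids = np.stack([acoustic[ac_labels == i].mean(0) for i in range(k)])

# One-to-one assignment of visual clusters to acoustic clusters.
cost = cdist(vis_centroids, ac_centroids)
rows, cols = linear_sum_assignment(cost)
pairs = list(zip(rows, cols))
```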
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning Word-Like Units from Joint Audio-Visual Analysis\nGiven a collection of images and spoken audio captions, we present a method for discovering word-like acoustic units in the continuous speech signal and grounding them to semantically relevant image regions. For example, our model is able to detect spoken instances of the word ``lighthouse'' within an utterance and associate them with image regions containing lighthouses. We do not use any form of conventional automatic speech recognition, nor do we use any text transcriptions or conventional linguistic annotations. Our model effectively implements a form of spoken language acquisition, in which the computer learns not only to recognize word categories by sound, but also to enrich the words it learns with semantics by grounding them in images.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis work proposes a joint classification of images and audio captions for the task of discovering word-like acoustic units that correlate with semantically relevant visual objects. In general, this is a very interesting direction of research, as it allows for a richer representation of data: regularizing the visual signal with audio and vice versa. This allows for training of visual models from video, etc. A major concern is the amount of novelty between this work and the authors' previous publication at NIPS 2016. The authors claim a more sophisticated architecture and indeed show an improvement in recall. However, the improvements are marginal, and the added complexity of the architecture is a bit ad hoc. The clustering and grouping in Section 4 is hacky. Instead of gridding the image, the authors could use an object detector (SSD, YOLO, Faster R-CNN, etc.) to estimate accurate object proposals; rather than using k-means, a spectral clustering approach would alleviate the Gaussian assumption on the distributions. In assigning visual hypotheses to acoustic segments, some form of bipartite matching should be used. 
Overall, I really like this direction of research, and encourage the authors to continue developing algorithms that can train from such multimodal datasets. However, the work isn't quite novel enough relative to NIPS 2016.", "role": "assistant" } ]
[ "This work proposes a joint classification of images and audio captions for the task of discovering word-like acoustic units that correlate with semantically relevant visual objects.", "In general, this is a very interesting direction of research, as it allows for a richer representation of data: regularizing the visual signal with audio and vice versa.", "This allows for training of visual models from video, etc.", "A major concern is the amount of novelty between this work and the authors' previous publication at NIPS 2016.", "The authors claim a more sophisticated architecture and indeed show an improvement in recall.", "However, the improvements are marginal, and the added complexity of the architecture is a bit ad hoc.", "The clustering and grouping in Section 4 is hacky.", "Instead of gridding the image, the authors could use an object detector (SSD, YOLO, Faster R-CNN, etc.)", "to estimate accurate object proposals; rather than using k-means, a spectral clustering approach would alleviate the Gaussian assumption on the distributions.", "In assigning visual hypotheses to acoustic segments, some form of bipartite matching should be used.", "Overall, I really like this direction of research, and encourage the authors to continue developing algorithms that can train from such multimodal datasets.", "However, the work isn't quite novel enough relative to NIPS 2016." ]
{ "criticism": 4, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 8, "praise": 3, "presentation_and_reporting": 3, "results_and_discussion": 1, "suggestion_and_solution": 4, "total": 12 }
0.333333
0
0.25
0.666667
0.25
0.25
0.083333
0.333333
12
1
0
1
1
1
1
1
1
2.166667
2.150884
0.015782
Multi-modal Variational Encoder-Decoders
Recent advances in neural variational inference have facilitated efficient training of powerful directed graphical models with continuous latent variables, such as variational autoencoders. However, these models usually assume simple, uni-modal priors — such as the multivariate Gaussian distribution — yet many real-world data distributions are highly complex and multi-modal. Examples of complex and multi-modal distributions range from topics in newswire text to conversational dialogue responses. When such latent variable models are applied to these domains, the restriction of the simple, uni-modal prior hinders the overall expressivity of the learned model as it cannot possibly capture more complex aspects of the data distribution. To overcome this critical restriction, we propose a flexible, simple prior distribution which can be learned efficiently and potentially capture an exponential number of modes of a target distribution. We develop the multi-modal variational encoder-decoder framework and investigate the effectiveness of the proposed prior in several natural language processing modeling tasks, including document modeling and dialogue modeling.
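The abstract's "exponential number of modes" claim is easiest to see with a factorized prior: k mixture components per latent dimension across d independent dimensions give k^d joint modes. A minimal numeric sketch, with assumed means and noise scale (the paper's piecewise-constant construction differs in its details):

```python
# Minimal sketch of exponentially many prior modes from a factorized prior.
# Purely illustrative; component means and noise scale are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 3                        # 3**8 = 6561 joint modes from few parameters
means = np.linspace(-2.0, 2.0, k)  # per-dimension component means (assumed)

def sample_prior(n):
    comps = rng.integers(0, k, size=(n, d))               # one component per dim
    return means[comps] + 0.1 * rng.normal(size=(n, d))   # within-mode noise

z = sample_prior(1000)
print(z.shape, k ** d)  # (1000, 8) 6561
```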
The authors introduce some new prior and approximate posterior families for variational autoencoders, which are compatible with the reparameterization trick, as well as being capable of expressing multiple modes. They also introduce a gating mechanism between prior and posterior. They show improvements on bag-of-words document modeling and dialogue response generation. The original abstract is overly strong in its assertion that a unimodal latent prior p(z) cannot fit a multimodal marginal ∫_z p(x|z)p(z)dz with a DNN response model p(x|z) ('it cannot possibly capture more complex aspects of the data distribution', 'critical restriction', etc.). While the assertion that a unimodal latent prior cannot model multimodal observations is false, there are sensible motivations for the piecewise constant prior and posterior. For example, if we think of a VAE as a sort of regularized autoencoder where codes are constrained to 'fill up' parts of the prior latent space, then there is a sphere-packing argument to be made that filling a Gaussian prior with Gaussian posteriors is a bad use of code space. Although the authors don't explore this much, a hypercube-based tiling of latent code space is a sensible idea. As stated, I found the message of the paper to be quite sloppy with respect to the concept of 'multi-modality.' There are 3 types of multimodality at play here: multimodality in the observed marginal distribution p(x), which can be captured by any deep latent Gaussian model; multimodality in the prior p(z), which makes sense in some situations (e.g. a model of MNIST digits could have 10 prior modes corresponding to latent codes for each digit class); and multimodality in the posterior over z for a given observation x_i, q(z_i|x_i). The final type of multimodality is harder to argue for, except in so far as it allows the expression of flexibly shaped distributions without highly separated modes. I believe flexible posterior approximations are important to enable fine-grained and efficient tiling of latent space, but I don't think these need to have multiple strong modes. I would be interested to see experiments demonstrating otherwise for real world data. I think this paper should be more clear about the different types of multi-modality and which parts of their analysis demonstrate which ones. I also found it unsatisfactory that the piecewise variable analysis did not show different components of the multi-modal prior corresponding to different words, but rather just a separation between the Gaussian and the piecewise variables. As I mention in my earlier questions, I found it surprising that the learned variance and mean for the Gaussian prior help so dramatically with G-NVDM likelihood when the powerful networks transforming to and from latent space should make it scale-invariant. Explicitly separating out the contributions of a reimplemented base model, prior-posterior interpolation and the learned prior parameters would strengthen these experiments. Overall, the very strong improvements on the text modeling task over NVDM seem hard to understand, and I would like to see an ablation analysis of all the differences between that model and the proposed one. The fact that adding more constant components helps for document modeling is interesting, and it would be nice to see more qualitative analysis of what the prior modes represent. I also would be surprised if posterior modes were highly separated, and if they were it would be interesting to explore if they corresponded to e.g.
ambiguous word-senses. The experiments on dialog modeling are mostly negative results, quantitatively. The observation that the piecewise constant variables encode time-related words and the Gaussian variables encode sentiment is interesting, especially since it occurs in both sets of experiments. This is actually quite interesting, and I would be interested in seeing analysis of why this is the case. As above, I would like to see an analysis of the sorts of words that are encoded in the different prior modes and whether they correspond to e.g. groups of similar holidays or days. In conclusion, I think the piecewise constant variational family is a good idea, although it is not well-motivated by the paper. The experimental results are very good for document modeling, but without ablation analysis against the baseline it is hard to see why they should follow from such a small modification to G-NVDM. The fact that H-NVDM performs better is interesting, though. This paper should better motivate the need for different types of multi-modality, and demonstrate that those sorts of things are actually being captured by the model. As it is, the paper introduces an interesting variational family and shows that it performs better for some tasks, but the motivation and analysis are not clearly focused. To demonstrate that this is a broadly applicable family, it would also be good to do experiments on more standard datasets like MNIST. Even without an absolute log-likelihood improvement, if the method yielded interpretable multiple modes this would be a valuable contribution.
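One practical point behind the review's discussion: with a non-Gaussian (e.g. mixture or piecewise) prior, the ELBO's KL term generally loses its closed form, but it can still be estimated with reparameterized samples. A one-dimensional sketch, with assumed parameters and a two-component mixture standing in for a multi-modal prior:

```python
# Sketch (assumptions throughout): Monte Carlo estimate of KL(q||p) with a
# reparameterized Gaussian posterior q and a mixture prior p:
# KL(q||p) ~= mean[log q(z) - log p(z)] for z = mu + sigma * eps.
import numpy as np

rng = np.random.default_rng(0)
mu, log_sigma = 0.5, -1.0
sigma = np.exp(log_sigma)

def log_q(z):  # diagonal Gaussian posterior, 1-d for brevity
    return -0.5 * ((z - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)

def log_p(z):  # two-component mixture prior standing in for a multi-modal prior
    comp = lambda m: np.exp(-0.5 * (z - m) ** 2) / np.sqrt(2 * np.pi)
    return np.log(0.5 * comp(-1.0) + 0.5 * comp(1.0))

eps = rng.normal(size=100_000)
z = mu + sigma * eps               # reparameterization trick
kl_estimate = np.mean(log_q(z) - log_p(z))
print(kl_estimate)
```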
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMulti-modal Variational Encoder-Decoders\nRecent advances in neural variational inference have facilitated efficient training of powerful directed graphical models with continuous latent variables, such as variational autoencoders. However, these models usually assume simple, uni-modal priors — such as the multivariate Gaussian distribution — yet many real-world data distributions are highly complex and multi-modal. Examples of complex and multi-modal distributions range from topics in newswire text to conversational dialogue responses. When such latent variable models are applied to these domains, the restriction of the simple, uni-modal prior hinders the overall expressivity of the learned model as it cannot possibly capture more complex aspects of the data distribution. To overcome this critical restriction, we propose a flexible, simple prior distribution which can be learned efficiently and potentially capture an exponential number of modes of a target distribution. We develop the multi-modal variational encoder-decoder framework and investigate the effectiveness of the proposed prior in several natural language processing modeling tasks, including document modeling and dialogue modeling.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors introduce some new prior and approximate posterior families for variational autoencoders, which are compatible with the reparameterization trick, as well as being capable of expressing multiple modes. They also introduce a gating mechanism between prior and posterior. They show improvements on bag-of-words document modeling and dialogue response generation. The original abstract is overly strong in its assertion that a unimodal latent prior p(z) cannot fit a multimodal marginal ∫_z p(x|z)p(z)dz with a DNN response model p(x|z) ('it cannot possibly capture more complex aspects of the data distribution', 'critical restriction', etc.). 
While the assertion that a unimodal latent prior cannot model multimodal observations is false, there are sensible motivations for the piecewise constant prior and posterior. For example, if we think of a VAE as a sort of regularized autoencoder where codes are constrained to 'fill up' parts of the prior latent space, then there is a sphere-packing argument to be made that filling a Gaussian prior with Gaussian posteriors is a bad use of code space. Although the authors don't explore this much, a hypercube-based tiling of latent code space is a sensible idea. As stated, I found the message of the paper to be quite sloppy with respect to the concept of 'multi-modality.' There are 3 types of multimodality at play here: multimodality in the observed marginal distribution p(x), which can be captured by any deep latent Gaussian model; multimodality in the prior p(z), which makes sense in some situations (e.g. a model of MNIST digits could have 10 prior modes corresponding to latent codes for each digit class); and multimodality in the posterior over z for a given observation x_i, q(z_i|x_i). The final type of multimodality is harder to argue for, except in so far as it allows the expression of flexibly shaped distributions without highly separated modes. I believe flexible posterior approximations are important to enable fine-grained and efficient tiling of latent space, but I don't think these need to have multiple strong modes. I would be interested to see experiments demonstrating otherwise for real world data. I think this paper should be more clear about the different types of multi-modality and which parts of their analysis demonstrate which ones. I also found it unsatisfactory that the piecewise variable analysis did not show different components of the multi-modal prior corresponding to different words, but rather just a separation between the Gaussian and the piecewise variables. As I mention in my earlier questions, I found it surprising that the learned variance and mean for the Gaussian prior help so dramatically with G-NVDM likelihood when the powerful networks transforming to and from latent space should make it scale-invariant. Explicitly separating out the contributions of a reimplemented base model, prior-posterior interpolation and the learned prior parameters would strengthen these experiments. Overall, the very strong improvements on the text modeling task over NVDM seem hard to understand, and I would like to see an ablation analysis of all the differences between that model and the proposed one. The fact that adding more constant components helps for document modeling is interesting, and it would be nice to see more qualitative analysis of what the prior modes represent. I also would be surprised if posterior modes were highly separated, and if they were it would be interesting to explore if they corresponded to e.g. ambiguous word-senses. The experiments on dialog modeling are mostly negative results, quantitatively. The observation that the piecewise constant variables encode time-related words and the Gaussian variables encode sentiment is interesting, especially since it occurs in both sets of experiments. This is actually quite interesting, and I would be interested in seeing analysis of why this is the case. As above, I would like to see an analysis of the sorts of words that are encoded in the different prior modes and whether they correspond to e.g. groups of similar holidays or days. 
In conclusion, I think the piecewise constant variational family is a good idea, although it is not well-motivated by the paper. The experimental results are very good for document modeling, but without ablation analysis against the baseline it is hard to see why they should follow from such a small modification to G-NVDM. The fact that H-NVDM performs better is interesting, though. This paper should better motivate the need for different types of multi-modality, and demonstrate that those sorts of things are actually being captured by the model. As it is, the paper introduces an interesting variational family and shows that it performs better for some tasks, but the motivation and analysis are not clearly focused. To demonstrate that this is a broadly applicable family, it would also be good to do experiments on more standard datasets like MNIST. Even without an absolute log-likelihood improvement, if the method yielded interpretable multiple modes this would be a valuable contribution.", "role": "assistant" } ]
[ "The authors introduce some new prior and approximate posterior families for variational autoencoders, which are compatible with the reparameterization trick, as well as being capable of expressing multiple modes.", "They also introduce a gating mechanism between prior and posterior.", "They show improvements on bag-of-words document modeling and dialogue response generation.", "The original abstract is overly strong in its assertion that a unimodal latent prior p(z) cannot fit a multimodal marginal ∫_z p(x|z)p(z)dz with a DNN response model p(x|z) ('it cannot possibly capture more complex aspects of the data distribution', 'critical restriction', etc.).", "While the assertion that a unimodal latent prior cannot model multimodal observations is false, there are sensible motivations for the piecewise constant prior and posterior.", "For example, if we think of a VAE as a sort of regularized autoencoder where codes are constrained to 'fill up' parts of the prior latent space, then there is a sphere-packing argument to be made that filling a Gaussian prior with Gaussian posteriors is a bad use of code space.", "Although the authors don't explore this much, a hypercube-based tiling of latent code space is a sensible idea.", "As stated, I found the message of the paper to be quite sloppy with respect to the concept of 'multi-modality.'", "There are 3 types of multimodality at play here: multimodality in the observed marginal distribution p(x), which can be captured by any deep latent Gaussian model; multimodality in the prior p(z), which makes sense in some situations (e.g.", "a model of MNIST digits could have 10 prior modes corresponding to latent codes for each digit class); and multimodality in the posterior over z for a given observation x_i, q(z_i|x_i).", "The final type of multimodality is harder to argue for, except in so far as it allows the expression of flexibly shaped distributions without highly separated modes.", "I believe flexible posterior approximations are important to enable fine-grained and efficient tiling of latent space, but I don't think these need to have multiple strong modes.", "I would be interested to see experiments demonstrating otherwise for real world data.", "I think this paper should be more clear about the different types of multi-modality and which parts of their analysis demonstrate which ones.", "I also found it unsatisfactory that the piecewise variable analysis did not show different components of the multi-modal prior corresponding to different words, but rather just a separation between the Gaussian and the piecewise variables.", "As I mention in my earlier questions, I found it surprising that the learned variance and mean for the Gaussian prior help so dramatically with G-NVDM likelihood when the powerful networks transforming to and from latent space should make it scale-invariant.", "Explicitly separating out the contributions of a reimplemented base model, prior-posterior interpolation and the learned prior parameters would strengthen these experiments.", "Overall, the very strong improvements on the text modeling task over NVDM seem hard to understand, and I would like to see an ablation analysis of all the differences between that model and the proposed one.", "The fact that adding more constant components helps for document modeling is interesting, and it would be nice to see more qualitative analysis of what the prior modes represent.", "I also would be surprised if posterior modes were highly separated, and if they were it would be interesting to explore if they corresponded to e.g.", "ambiguous word-senses.", "The experiments on dialog modeling are mostly negative results, quantitatively.", "The observation that the piecewise constant variables encode time-related words and the Gaussian variables encode sentiment is interesting, especially since it occurs in both sets of experiments.", "This is actually quite interesting, and I would be interested in seeing analysis of why this is the case.", "As above, I would like to see an analysis of the sorts of words that are encoded in the different prior modes and whether they correspond to e.g.", "groups of similar holidays or days.", "In conclusion, I think the piecewise constant variational family is a good idea, although it is not well-motivated by the paper.", "The experimental results are very good for document modeling, but without ablation analysis against the baseline it is hard to see why they should follow from such a small modification to G-NVDM.", "The fact that H-NVDM performs better is interesting, though.", "This paper should better motivate the need for different types of multi-modality, and demonstrate that those sorts of things are actually being captured by the model.", "As it is, the paper introduces an interesting variational family and shows that it performs better for some tasks, but the motivation and analysis are not clearly focused.", "To demonstrate that this is a broadly applicable family, it would also be good to do experiments on more standard datasets like MNIST.", "Even without an absolute log-likelihood improvement, if the method yielded interpretable multiple modes this would be a valuable contribution." ]
{ "criticism": 9, "example": 1, "importance_and_relevance": 7, "materials_and_methods": 25, "praise": 8, "presentation_and_reporting": 3, "results_and_discussion": 8, "suggestion_and_solution": 11, "total": 33 }
0.272727
0.030303
0.212121
0.757576
0.242424
0.090909
0.242424
0.333333
33
1
1
1
1
1
1
1
1
2.181818
-4.131154
6.312972
Reference-Aware Language Models
We propose a general class of language models that treat reference as an explicit stochastic latent variable. This architecture allows models to create mentions of entities and their attributes by accessing external databases (required by, e.g., dialogue generation and recipe generation) and internal state (required by, e.g., language models which are aware of coreference). This facilitates the incorporation of information that can be accessed in predictable locations in databases or discourse context, even when the targets of the reference may be rare words. Experiments on three tasks show our model variants outperform models based on deterministic attention.
This paper presents a new type of language model that treats entity references as latent variables. The paper is structured as three specialized models for three applications: dialog generation with references to database entries, recipe generation with references to ingredients, and text generation with coreference mentions. Despite some opaqueness in details that I will discuss later, the paper does a great job making the main idea come through, which I think is quite interesting and definitely worth pursuing further. But it seems the paper was rushed for the deadline, as there are a few major weaknesses. The first major weakness is that the claimed latent variables are hardly latent in the actual empirical evaluation. As clarified by the authors via pre-review QAs, all mentions were assumed to be given to all model variants, and so it would seem like an over-claim to call these variables latent when they are in fact treated as observed variables. Is it because the models with latent variables were too difficult to train right? A related problem is the use of perplexity as an evaluation measure when comparing reference-aware language models to vanilla language models. Essentially the authors are comparing two language models defined over different event spaces, which is not a fair comparison. Because mentions were assumed to be given for the reference-aware language models, and because the mention generators are designed similarly to a pointer network, the probability scores over mentions will naturally be higher, compared to the regular language model that needs to consider a much bigger vocabulary set. The effect is analogous to comparing a language model with aggressive UNK (and a small vocabulary set) to a language model with no UNK (and a much larger vocabulary set). To mitigate this problem, the authors need to perform one of the following additional evaluations: either assuming no mention boundaries and marginalizing over all possibilities (treating latent variables as truly latent), or showing other types of evaluation beyond perplexity, for example BLEU, METEOR, or human evaluation on the corresponding generation task. The other major weakness is the writing, in terms of technical accuracy and completeness. I found many details opaque and confusing even after QAs. I wonder if the main challenge that hinders the quality of writing has something to do with having three very specialized models in one paper, each having a lot of details to be worked out, which may not have been extremely important for the main story of the paper, but are nonetheless not negligible for understanding what is going on in the paper. Perhaps the authors can restructure the paper so that the most important details are clearly worked out in the main body of the paper, especially in terms of latent variable handling — how to make mention detection and coreference resolution truly latent, and if and when the entity update helps, which in the current version is not elaborated at all, as it is mentioned only very briefly for the third application (coreference resolution) without any empirical comparisons to motivate the update operation.
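The reviewer's event-space point can be made concrete with toy numbers: a pointer-style mention generator distributes probability over a handful of candidates, while a vanilla language model distributes it over the full vocabulary, so raw per-token perplexities are not comparable. A deliberately crude sketch (uniform toy distributions, assumed sizes):

```python
# Sketch of the event-space issue with assumed toy numbers: the pointer
# model's smaller event space inflates per-token probabilities, independent
# of modeling quality.
import numpy as np

vocab_size, n_candidates = 50_000, 8

p_vanilla = 1.0 / vocab_size    # vanilla LM, prob of the correct rare word
p_pointer = 1.0 / n_candidates  # pointer over database/mention candidates

print(np.log2(1 / p_vanilla))   # ~15.6 bits for such a token
print(np.log2(1 / p_pointer))   # ~3 bits: smaller event space, not a better model
```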
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nReference-Aware Language Models\nWe propose a general class of language models that treat reference as an explicit stochastic latent variable. This architecture allows models to create mentions of entities and their attributes by accessing external databases (required by, e.g., dialogue generation and recipe generation) and internal state (required by, e.g., language models which are aware of coreference). This facilitates the incorporation of information that can be accessed in predictable locations in databases or discourse context, even when the targets of the reference may be rare words. Experiments on three tasks show our model variants outperform models based on deterministic attention.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents a new type of language model that treats entity references as latent variables. The paper is structured as three specialized models for three applications: dialog generation with references to database entries, recipe generation with references to ingredients, and text generation with coreference mentions. Despite some opaqueness in details that I will discuss later, the paper does a great job making the main idea come through, which I think is quite interesting and definitely worth pursuing further. But it seems the paper was rushed for the deadline, as there are a few major weaknesses. The first major weakness is that the claimed latent variables are hardly latent in the actual empirical evaluation. As clarified by the authors via pre-review QAs, all mentions were assumed to be given to all model variants, and so it would seem like an over-claim to call these variables latent when they are in fact treated as observed variables. Is it because the models with latent variables were too difficult to train right? A related problem is the use of perplexity as an evaluation measure when comparing reference-aware language models to vanilla language models. 
Essentially the authors are comparing two language models defined over different event spaces, which is not a fair comparison. Because mentions were assumed to be given for the reference-aware language models, and because the mention generators are designed similarly to a pointer network, the probability scores over mentions will naturally be higher, compared to the regular language model that needs to consider a much bigger vocabulary set. The effect is analogous to comparing a language model with aggressive UNK (and a small vocabulary set) to a language model with no UNK (and a much larger vocabulary set). To mitigate this problem, the authors need to perform one of the following additional evaluations: either assuming no mention boundaries and marginalizing over all possibilities (treating latent variables as truly latent), or showing other types of evaluation beyond perplexity, for example BLEU, METEOR, or human evaluation on the corresponding generation task. The other major weakness is the writing, in terms of technical accuracy and completeness. I found many details opaque and confusing even after QAs. I wonder if the main challenge that hinders the quality of writing has something to do with having three very specialized models in one paper, each having a lot of details to be worked out, which may not have been extremely important for the main story of the paper, but are nonetheless not negligible for understanding what is going on in the paper. Perhaps the authors can restructure the paper so that the most important details are clearly worked out in the main body of the paper, especially in terms of latent variable handling — how to make mention detection and coreference resolution truly latent, and if and when the entity update helps, which in the current version is not elaborated at all, as it is mentioned only very briefly for the third application (coreference resolution) without any empirical comparisons to motivate the update operation.", "role": "assistant" } ]
[ "This paper presents a new type of language model that treats entity references as latent variables.", "The paper is structured as three specialized models for three applications: dialog generation with references to database entries, recipe generation with references to ingredients, and text generation with coreference mentions.", "Despite some opaqueness in details that I will discuss later, the paper does a great job making the main idea come through, which I think is quite interesting and definitely worth pursuing further.", "But it seems the paper was rushed for the deadline, as there are a few major weaknesses.", "The first major weakness is that the claimed latent variables are hardly latent in the actual empirical evaluation.", "As clarified by the authors via pre-review QAs, all mentions were assumed to be given to all model variants, and so it would seem like an over-claim to call these variables latent when they are in fact treated as observed variables.", "Is it because the models with latent variables were too difficult to train right?", "A related problem is the use of perplexity as an evaluation measure when comparing reference-aware language models to vanilla language models.", "Essentially the authors are comparing two language models defined over different event spaces, which is not a fair comparison.", "Because mentions were assumed to be given for the reference-aware language models, and because the mention generators are designed similarly to a pointer network, the probability scores over mentions will naturally be higher, compared to the regular language model that needs to consider a much bigger vocabulary set.", "The effect is analogous to comparing a language model with aggressive UNK (and a small vocabulary set) to a language model with no UNK (and a much larger vocabulary set).", "To mitigate this problem, the authors need to perform one of the following additional evaluations: either assuming no mention boundaries and marginalizing over all possibilities (treating latent variables as truly latent), or showing other types of evaluation beyond perplexity, for example BLEU, METEOR, or human evaluation on the corresponding generation task.", "The other major weakness is the writing, in terms of technical accuracy and completeness.", "I found many details opaque and confusing even after QAs.", "I wonder if the main challenge that hinders the quality of writing has something to do with having three very specialized models in one paper, each having a lot of details to be worked out, which may not have been extremely important for the main story of the paper, but are nonetheless not negligible for understanding what is going on in the paper.", "Perhaps the authors can restructure the paper so that the most important details are clearly worked out in the main body of the paper, especially in terms of latent variable handling — how to make mention detection and coreference resolution truly latent, and if and when the entity update helps, which in the current version is not elaborated at all, as it is mentioned only very briefly for the third application (coreference resolution) without any empirical comparisons to motivate the update operation." ]
{ "criticism": 7, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 12, "praise": 1, "presentation_and_reporting": 8, "results_and_discussion": 4, "suggestion_and_solution": 3, "total": 16 }
0.4375
0
0.0625
0.75
0.0625
0.5
0.25
0.1875
16
1
0
1
1
1
1
1
1
2.25
2.107958
0.142042
Unsupervised Pretraining for Sequence to Sequence Learning
This work presents a general unsupervised learning method to improve the accuracy of sequence to sequence (seq2seq) models. In our method, the weights of the encoder and decoder of a seq2seq model are initialized with the pretrained weights of two language models and then fine-tuned with labeled data. We apply this method to challenging benchmarks in machine translation and abstractive summarization and find that it significantly improves the subsequent supervised models. Our main result is that the pretraining accelerates training and improves generalization of seq2seq models, achieving state-of-the-art results on the WMT English->German task, surpassing a range of methods using both phrase-based machine translation and neural machine translation. Our method achieves an improvement of 1.3 BLEU over the previous best models on both WMT'14 and WMT'15 English->German. On summarization, our method beats the supervised learning baseline.
The authors propose the use of layer-wise, language-model-like pretraining for encoder-decoder models. This allows them to leverage separate source and target corpora (in an unsupervised manner) without requiring large amounts of parallel training data. The idea is in principle fairly simple, and relies on initially optimising both the encoder and the decoder as LSTM language models. The ideas are not new, and the paper is more like a successful compilation of several approaches that have been around for some time. The experimental validation, though, offers some interesting insights into the importance of initialization and the effectiveness of different initialisation approaches in the enc-dec setting. The regulariser you propose on page 3 looks like a typical multi-task objective function, especially as it is used in an alternating manner; it would be interesting to see whether similar performance might have been obtained starting with this objective from random initialisation. You should probably give credit to the encoder-decoder-like RNN models published in the 1990s. Minors: Pg. 2, Sec 2.1, 2nd paragraph: can be different sizes -> can be of different sizes
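The pretraining recipe itself is simple to sketch: train two language models, copy their weights into the encoder and decoder, then fine-tune. The following uses hypothetical parameter names and shapes purely for illustration; real frameworks differ:

```python
# Minimal sketch of the pretraining recipe as described (hypothetical
# parameter names and shapes): copy pretrained LM weights into the seq2seq
# encoder/decoder, then fine-tune with an added LM loss.
import numpy as np

def init_lstm(vocab, hidden):
    rng = np.random.default_rng(0)
    return {'embed': rng.normal(scale=0.1, size=(vocab, hidden)),
            'lstm':  rng.normal(scale=0.1, size=(4 * hidden, 2 * hidden)),
            'proj':  rng.normal(scale=0.1, size=(hidden, vocab))}

src_lm = init_lstm(vocab=30_000, hidden=512)   # stands in for a pretrained source LM
tgt_lm = init_lstm(vocab=30_000, hidden=512)   # stands in for a pretrained target LM

seq2seq = {'encoder': {k: v.copy() for k, v in src_lm.items()},
           'decoder': {k: v.copy() for k, v in tgt_lm.items()}}

# During fine-tuning, a multi-task regulariser of the kind the review notes
# would alternate (or mix) the translation loss with the LM losses, e.g.:
# loss = xent_translation + lam * (xent_src_lm + xent_tgt_lm)   # lam assumed
```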
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nUnsupervised Pretraining for Sequence to Sequence Learning\nThis work presents a general unsupervised learning method to improve the accuracy of sequence to sequence (seq2seq) models. In our method, the weights of the encoder and decoder of a seq2seq model are initialized with the pretrained weights of two language models and then fine-tuned with labeled data. We apply this method to challenging benchmarks in machine translation and abstractive summarization and find that it significantly improves the subsequent supervised models. Our main result is that the pretraining accelerates training and improves generalization of seq2seq models, achieving state-of-the-art results on the WMT English->German task, surpassing a range of methods using both phrase-based machine translation and neural machine translation. Our method achieves an improvement of 1.3 BLEU from the previous best models on both WMT*14 and WMT*15 English->German. On summarization, our method beats the supervised learning baseline.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nAuthors propose the use of layer-wise language model-like pretraining for encoder-decoder models. This allows to leverage separate source and target corpora (in unsupervised manner) without necessity of large amounts of parallel training corpora. The idea is in principle fairly simple, and rely on initial optimising both encoder and decoder with LSTM tasked to perform language modelling. The ideas are not new, and the paper is more like a successful compilation of several approaches that have been around for some time. The experimental validation, though, offers some interesting insights into importance of initialization, and the effectiveness of different initialisations approaches in enc-dec setting. 
The regulariser you propose to use on page 3 looks like a typical multi-task objective function, especially as it is used in an alternating manner; it would be interesting to see whether similar performance might have been obtained starting with this objective, from random initialisation. You should probably give credit to encoder-decoder-like RNN models published in the 1990s. Minors: Pg. 2, Sec 2.1 2nd paragraph: can be different sizes -> can be of different sizes", "role": "assistant" } ]
[ "Authors propose the use of layer-wise language model-like pretraining for encoder-decoder models.", "This allows to leverage separate source and target corpora (in unsupervised manner) without necessity of large amounts of parallel training corpora.", "The idea is in principle fairly simple, and rely on initial optimising both encoder and decoder with LSTM tasked to perform language modelling.", "The ideas are not new, and the paper is more like a successful compilation of several approaches that have been around for some time.", "The experimental validation, though, offers some interesting insights into importance of initialization, and the effectiveness of different initialisations approaches in enc-dec setting.", "The regulariser you propose to use on page 3, looks like typical multi-task objective function, especially it is used in an alternating manner would be interesting to see whether similar performance might have been obtained starting with this objective, from random initialisation.", "You should probably give credit for encoder-decoder like-RNN models published in 1990s.", "Minors: Pg.", "2, Sec 2.1 2nd paragraph: can be different sizes -> can be of different sizes" ]
{ "criticism": 1, "example": 2, "importance_and_relevance": 2, "materials_and_methods": 7, "praise": 1, "presentation_and_reporting": 3, "results_and_discussion": 1, "suggestion_and_solution": 3, "total": 9 }
0.111111
0.222222
0.222222
0.777778
0.111111
0.333333
0.111111
0.333333
9
1
1
1
1
1
1
1
1
2.222222
1.969703
0.252519
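Each record closes with a block of bare numbers. These appear to be derivable from the record's criteria_count and length_penalty: the eight fractions are the per-category sentence counts divided by total, the 0/1 flags indicate whether a category occurs at all, reward_value is the sum of the fractions (equivalently, the summed counts divided by total), and reward_value_length_adjusted is reward_value minus length_penalty. The formula behind length_penalty itself is not recoverable from these records. A minimal Python sketch, with the relationships inferred from the printed values rather than from any stated schema:

```python
# A minimal sketch of how the per-record numeric block appears to be derived.
# Field names follow the record layout in this dump; the relationships below
# are inferred from the printed values, not from any documented schema.

CATEGORIES = [
    "criticism", "example", "importance_and_relevance", "materials_and_methods",
    "praise", "presentation_and_reporting", "results_and_discussion",
    "suggestion_and_solution",
]

def derived_fields(criteria_count: dict, length_penalty: float) -> dict:
    """Recompute the numeric fields that follow each record's criteria_count."""
    total = criteria_count["total"]  # number of review sentences
    counts = {c: criteria_count[c] for c in CATEGORIES}
    # Sentences can carry several labels, so the counts may sum to more than total.
    fractions = {c: round(n / total, 6) for c, n in counts.items()}
    flags = {f"has_{c}": int(n > 0) for c, n in counts.items()}
    reward = round(sum(counts.values()) / total, 6)  # sum of the unrounded fractions
    return {
        **fractions,
        "total": total,
        **flags,
        "reward_value": reward,
        "reward_value_length_adjusted": round(reward - length_penalty, 6),
        "length_penalty": length_penalty,
    }

# Checked against the first record above: counts (1, 2, 2, 7, 1, 3, 1, 3) over
# 9 sentences give reward_value 20/9 = 2.222222 and, with length_penalty
# 0.252519, a length-adjusted value of 1.969703.
first_record = {
    "criticism": 1, "example": 2, "importance_and_relevance": 2,
    "materials_and_methods": 7, "praise": 1, "presentation_and_reporting": 3,
    "results_and_discussion": 1, "suggestion_and_solution": 3, "total": 9,
}
print(derived_fields(first_record, length_penalty=0.252519))
```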
BIOACOUSTIC SEGMENTATION BY HIERARCHICAL DIRICHLET PROCESS HIDDEN MARKOV MODEL
Understanding the communication between different animals by analysing their acoustic signals is an important topic in bioacoustics. It can be a powerful tool for the preservation of ecological diversity. We investigate probabilistic models to analyse signals issued from real-world bioacoustic sound scenes. We study Bayesian non-parametric sequential models based on Hierarchical Dirichlet Process Hidden Markov Models (HDP-HMM). The model is able to infer hidden states, referred to here as song units. However, using such a model raises one main issue: defining the number of hidden states the model has to learn. In bioacoustic problems we often do not know the number of song units (unlike in human speech recognition). Hence, we work with the Hierarchical Dirichlet Process (HDP)-HMM, which is a Bayesian non-parametric (BNP) model that offers a way to tackle this challenging problem. We focus our work on unsupervised learning from bioacoustic data. It consists in simultaneously finding the structure of hidden song units and automatically inferring the unknown number of hidden states to represent the data. Two real bioacoustic sound scene applications are investigated in this work: whale and multi-species bird segmentation. The learning of these models proceeds by Markov-Chain Monte Carlo (MCMC) sampling on Mel Frequency Cepstral Coefficients (MFCC) of audio signals. The results show an interesting song unit segmentation of the bioacoustic signals and open new insights for the unsupervised analysis of such signals. This paper illustrates the potential of chunking non-human animal signals into structured parts. This can yield a new species representation and help experts better understand the behaviour of such species, as Kershenbaum et al. (2014) wanted.
This paper applies HDP-HMM to challenging bioacoustics segmentation problems, including humpback whale sound and bird sound segmentation. Although the technique itself is not novel, the application of this data-driven method to bioacoustics segmentation is quite challenging and may yield some scientific findings, and this is a valuable contribution to the bioacoustics field. My concern with this paper is that it does not include a fair comparison against other, simpler methods such as BIC and AIC, and it would be better to provide such comparisons. Especially since, as the authors pointed out, the computational cost of HDP-HMM is a big issue, and these simpler methods may avoid it.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nBIOACOUSTIC SEGMENTATION BY HIERARCHICAL DIRICHLET PROCESS HIDDEN MARKOV MODEL\nUnderstanding the communication between different animals by analysing their acoustic signals is an important topic in bioacoustics. It can be a powerful tool for the preservation of ecological diversity. We investigate probabilistic models to analyse signals issued from real-world bioacoustic sound scenes. We study a Bayesian non-parametric sequential models based on Hierarchical Dirichlet Process Hidden Markov Models (HDP-HMM). The model is able to infer hidden states, that are referred here as song units. However, using such a model raise one main issue: defining the number of hidden states the model has to learn. In bioacoustic problems we often do not know the number of song units (unlike in human speech recognition). Hence, we work with the Hierarchical Dirichlet Process (HDP)-HMM, which is a Bayesian non-parametric (BNP) model that offers a way to tackle this challenging problem. We focus our work on unsupervised learning from bioacoustic data. It consists in simultaneously finding the structure of hidden song units and automatically infer the unknown number of the hidden states to represent the data. Two real bioacoustic sound scene applications are investigated in this work: on whale and multi-species birds segmentation. The learning of these models is proceeded by using Markov-Chain Monte Carlo (MCMC) sampling techniques on Mel Frequency Cepstral Coefficients (MFCC) of audio signals. The results show an interesting song unit segmentation of the bioacoustic signals and open new insights for unsupervised analysis of such signals. This paper illustrates the potential of chunking non-human animal signals into structured parts. This can yield to a new species representation and help experts to better understand the behaviour of such species as Kershenbaum et al. (2014) wanted.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper applies HDP-HMM to challenging bioacoustics segmentation problems, including humpback whale sound and bird sound segmentation. Although the technique itself is not novel, the application of this data-driven method to bioacoustics segmentation is quite challenging and may yield some scientific findings, and this is a valuable contribution to the bioacoustics field. My concern with this paper is that it does not include a fair comparison against other, simpler methods such as BIC and AIC, and it would be better to provide such comparisons. Especially since, as the authors pointed out, the computational cost of HDP-HMM is a big issue, and these simpler methods may avoid it.", "role": "assistant" } ]
[ "This paper applies HDP-HMM to challenging bioacoustics segmentation problems including humpback whole sound and bird sound segmentation.", "Although the technique itself is not novel, the application of this data-driven method to bioacoustics segmentation is quite challenging, and may yield some scientific findings, and this is a valuable contribution to the bioacoustics field.", "My concern for this paper is that it does not have fair comparison of the other simple methods including BIC and AIC, and it is better to provide such comparisons.", "Especially, as the authors pointed out, the computational cost of HDP-HMM is a big issue, and the other simple methods may solve this issue." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 3, "praise": 1, "presentation_and_reporting": 0, "results_and_discussion": 1, "suggestion_and_solution": 1, "total": 4 }
0.5
0
0.5
0.75
0.25
0
0.25
0.25
4
1
0
1
1
1
0
1
1
2.5
1.221623
1.278377
Learning Recurrent Span Representations for Extractive Question Answering
The reading comprehension task, which asks questions about a given evidence document, is a central problem in natural language understanding. Recent formulations of this task have typically focused on answer selection from a set of candidates pre-defined manually or through the use of an external NLP pipeline. However, Rajpurkar et al. (2016) recently released the SQUAD dataset in which the answers can be arbitrary strings from the supplied text. In this paper, we focus on this answer extraction task, presenting a novel model architecture that efficiently builds fixed length representations of all spans in the evidence document with a recurrent network. We show that scoring explicit span representations significantly improves performance over other approaches that factor the prediction into separate predictions about words or start and end markers. Our approach improves upon the best published results of Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.’s baseline by > 50%.
This paper proposes RaSoR, a method for efficiently representing and scoring all possible spans in an extractive QA task. While the test set results on SQuAD have not been released, it looks likely that they are not going to be state-of-the-art; with that said, the idea of enumerating all possible spans proposed in this paper could potentially improve many architectures. The paper is very well-written and the analysis/ablations in the final sections are mostly interesting (especially Figure 2, which confirms what we would intuitively believe). Based on its potential to positively impact other researchers working on SQuAD, I recommend that the paper be accepted.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning Recurrent Span Representations for Extractive Question Answering\nThe reading comprehension task, that asks questions about a given evidence document, is a central problem in natural language understanding. Recent formulations of this task have typically focused on answer selection from a set of candidates pre-defined manually or through the use of an external NLP pipeline. However, Rajpurkar et al. (2016) recently released the SQUAD dataset in which the answers can be arbitrary strings from the supplied text. In this paper, we focus on this answer extraction task, presenting a novel model architecture that efficiently builds fixed length representations of all spans in the evidence document with a recurrent network. We show that scoring explicit span representations significantly improves performance over other approaches that factor the prediction into separate predictions about words or start and end markers. Our approach improves upon the best published results of Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.’s baseline by > 50%.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes RaSoR, a method to efficiently representing and scoring all possible spans in an extractive QA task. While the test set results on SQuAD have not been released, it looks likely that they are not going to be state-of-the-art; with that said, the idea of enumerating all possible spans proposed in this paper could potentially improve many architectures. The paper is very well-written and the analysis/ablations in the final sections are mostly interesting (especially Figure 2, which confirms what we would intuitively believe). Based on its potential to positively impact other researchers working on SQuAD, I recommend that the paper is accepted.", "role": "assistant" } ]
[ "This paper proposes RaSoR, a method to efficiently representing and scoring all possible spans in an extractive QA task.", "While the test set results on SQuAD have not been released, it looks likely that they are not going to be state-of-the-art; with that said, the idea of enumerating all possible spans proposed in this paper could potentially improve many architectures.", "The paper is very well-written and the analysis/ablations in the final sections are mostly interesting (especially Figure 2, which confirms what we would intuitively believe).", "Based on its potential to positively impact other researchers working on SQuAD, I recommend that the paper is accepted." ]
{ "criticism": 0, "example": 1, "importance_and_relevance": 2, "materials_and_methods": 3, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 1, "suggestion_and_solution": 2, "total": 4 }
0
0.25
0.5
0.75
0.5
0.25
0.25
0.5
4
0
1
1
1
1
1
1
1
3
1.721623
1.278377
A Neural Stochastic Volatility Model
In this paper, we show that the recent integration of statistical models with recurrent neural networks provides a new way of formulating volatility models that have been popular in time series analysis and prediction. The model comprises a pair of complementary stochastic recurrent neural networks: the generative network models the joint distribution of the stochastic volatility process; the inference network approximates the conditional distribution of the latent variables given the observable ones. Our focus in this paper is on the formulation of temporal dynamics of volatility over time under a stochastic recurrent neural network framework. Our derivations show that some popular volatility models are a special case of our proposed neural stochastic volatility model. Experiments demonstrate that the proposed model generates a smoother volatility estimation, and largely outperforms a widely used GARCH model on several metrics about the fitness of the volatility modelling and the accuracy of the prediction.
The authors propose a recurrent variational neural network approach to modelling volatility in financial time series. This model consists of an application of Chung et al.’s (2015) VRNN model to volatility forecasting, wherein a Variational Autoencoder (VAE) structure is repeated at each time step of the series. The paper is well written and easy to follow (although this reviewer suggests applying a spell checker, since the paper contains a number of harmless typos). The paper’s main technical contribution is to stack two levels of recurrence, one for the latent process and one for the observables. This appears to be a novel, if minor, contribution. The larger contribution is methodological, in areas of time series modelling that are both of great practical importance and have hitherto been dominated by rigid functional forms. The demonstration of the applicability and usefulness of general-purpose non-linear models for volatility forecasting would be extremely impactful. I have a few comments and reservations with the paper: 1) Although not mentioned explicitly, the authors’ framework is couched in terms of carrying out one-timestep-ahead forecasts of volatility. However, many applications of volatility models, for instance for derivative pricing, require longer-horizon forecasts. It would be interesting to discuss how this model could be extended to forecast at longer horizons. 2) In Section 4.4, there’s a mention that a GARCH(1,1) is conditionally deterministic. This is true only when forecasting 1 time-step in the future. At longer horizons, the GARCH(1,1) volatility forecast is not deterministic. 3) I was initially unhappy with the limitations of the experimental validation, limited to comparison with a baseline GARCH model. However, the authors provided more comparisons in the revision, which adds to the quality of the results, although the models compared against cannot be considered state of the art. The authors would be well advised to look into R packages such as `stochvol’ and ‘fGarch’ to get implementations of a variety of models that can serve as useful baselines, and provide convincing evidence that the modelled volatility is indeed substantially better than approaches currently entertained by the finance literature. 4) In Section 5.2, more details should be given on the network, e.g. number of hidden units, as well as the embedding dimension D_E (Section 5.4). 5) In Section 5.3, more details should be given on the data generating process for the synthetic data experiments. 6) Some results in the appendix are very puzzling: around jumps in the price series, which are places where the volatility should spike, the model reacts instead with huge drops in the volatility (Figure 4(b) and (c), respectively around time steps 1300 and 1600). This should be explained and discussed. All in all, I think that the paper provides a nice contribution to the art of volatility modelling. In spite of some flaws, it provides a starting point for the broader impact of neural time series processing in the financial community.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nA Neural Stochastic Volatility Model\nIn this paper, we show that the recent integration of statistical models with recurrent neural networks provides a new way of formulating volatility models that have been popular in time series analysis and prediction. The model comprises a pair of complementary stochastic recurrent neural networks: the generative network models the joint distribution of the stochastic volatility process; the inference network approximates the conditional distribution of the latent variables given the observable ones. Our focus in this paper is on the formulation of temporal dynamics of volatility over time under a stochastic recurrent neural network framework. Our derivations show that some popular volatility models are a special case of our proposed neural stochastic volatility model. Experiments demonstrate that the proposed model generates a smoother volatility estimation, and largely outperforms a widely used GARCH model on several metrics about the fitness of the volatility modelling and the accuracy of the prediction.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors propose a recurrent variational neural network approach to modelling volatility in financial time series. This model consists of an application of Chung et al.’s (2015) VRNN model to volatility forecasting, wherein a Variational Autoencoder (VAE) structure is repeated at each time step of the series. The paper is well written and easy to follow (although this reviewer suggests applying a spelling checking, since the paper contains a number of harmless typos). The paper’s main technical contribution is to stack two levels of recurrence, one for the latent process and one for the observables. This appears to be a novel, if minor contribution. The larger contribution is methodological, in areas of time series modelling that are both of great practical importance and have hitherto been dominated by rigid functional forms. 
The demonstration of the applicability and usefulness of general-purpose non-linear models for volatility forecasting would be extremely impactful. I have a few comments and reservations with the paper: 1) Although not mentioned explicitly, the authors’ framework are couched in terms of carrying out one-timestep-ahead forecasts of volatility. However, many applications of volatility models, for instance for derivative pricing, require longer-horizon forecasts. It would be interesting to discuss how this model could be extended to forecast at longer horizons. 2) In Section 4.4, there’s a mention that a GARCH(1,1) is conditionally deterministic. This is true only when forecasting 1 time-step in the future. At longer horizons, the GARCH(1,1) volatility forecast is not deterministic. 3) I was initially unhappy with the limitations of the experimental validation, limited to comparison with a baseline GARCH model. However, the authors provided more comparisons in the revision, which adds to the quality of the results, although the models compared against cannot be considered state of the art. It would be well advised to look into R packages such as `stochvol’ and ‘fGarch’ to get implementations of a variety of models that can serve as useful baselines, and provide convincing evidence that the modelled volatility is indeed substantially better than approaches currently entertained by the finance literature. 4) In Section 5.2, more details should be given on the network, e.g. number of hidden units, as well as the embedding dimension D_E (section 5.4) 5) In Section 5.3, more details should be given on the data generating process for the synthetic data experiments. 6) Some results in the appendix are very puzzling: around jumps in the price series, which are places where the volatility should spike, the model reacts instead by huge drops in the volatility (Figure 4(b) and (c), respectively around time steps 1300 and 1600). This should be explained and discussed. All in all, I think that the paper provides a nice contribution to the art of volatility modelling. In spite of some flaws, it provides a starting point for the broader impact of neural time series processing in the financial community.", "role": "assistant" } ]
[ "The authors propose a recurrent variational neural network approach to modelling volatility in financial time series.", "This model consists of an application of Chung et al.’s (2015) VRNN model to volatility forecasting, wherein a Variational Autoencoder (VAE) structure is repeated at each time step of the series.", "The paper is well written and easy to follow (although this reviewer suggests applying a spelling checking, since the paper contains a number of harmless typos).", "The paper’s main technical contribution is to stack two levels of recurrence, one for the latent process and one for the observables.", "This appears to be a novel, if minor contribution.", "The larger contribution is methodological, in areas of time series modelling that are both of great practical importance and have hitherto been dominated by rigid functional forms.", "The demonstration of the applicability and usefulness of general-purpose non-linear models for volatility forecasting would be extremely impactful.", "I have a few comments and reservations with the paper: 1) Although not mentioned explicitly, the authors’ framework are couched in terms of carrying out one-timestep-ahead forecasts of volatility.", "However, many applications of volatility models, for instance for derivative pricing, require longer-horizon forecasts.", "It would be interesting to discuss how this model could be extended to forecast at longer horizons.", "2) In Section 4.4, there’s a mention that a GARCH(1,1) is conditionally deterministic.", "This is true only when forecasting 1 time-step in the future.", "At longer horizons, the GARCH(1,1) volatility forecast is not deterministic.", "3) I was initially unhappy with the limitations of the experimental validation, limited to comparison with a baseline GARCH model.", "However, the authors provided more comparisons in the revision, which adds to the quality of the results, although the models compared against cannot be considered state of the art.", "It would be well advised to look into R packages such as `stochvol’ and ‘fGarch’ to get implementations of a variety of models that can serve as useful baselines, and provide convincing evidence that the modelled volatility is indeed substantially better than approaches currently entertained by the finance literature.", "4) In Section 5.2, more details should be given on the network, e.g.", "number of hidden units, as well as the embedding dimension D_E (section 5.4) 5) In Section 5.3, more details should be given on the data generating process for the synthetic data experiments.", "6) Some results in the appendix are very puzzling: around jumps in the price series, which are places where the volatility should spike, the model reacts instead by huge drops in the volatility (Figure 4(b) and (c), respectively around time steps 1300 and 1600).", "This should be explained and discussed.", "All in all, I think that the paper provides a nice contribution to the art of volatility modelling.", "In spite of some flaws, it provides a starting point for the broader impact of neural time series processing in the financial community." ]
{ "criticism": 4, "example": 2, "importance_and_relevance": 7, "materials_and_methods": 16, "praise": 5, "presentation_and_reporting": 4, "results_and_discussion": 3, "suggestion_and_solution": 7, "total": 22 }
0.181818
0.090909
0.318182
0.727273
0.227273
0.181818
0.136364
0.318182
22
1
1
1
1
1
1
1
1
2.181818
0.903441
1.278377
Parametric Exponential Linear Unit for Deep Convolutional Neural Networks
The activation function is an important component in Convolutional Neural Networks (CNNs). For instance, recent breakthroughs in Deep Learning can be attributed to the Rectified Linear Unit (ReLU). Another recently proposed activation function, the Exponential Linear Unit (ELU), has the supplementary property of reducing bias shift without explicitly centering the values at zero. In this paper, we show that learning a parameterization of ELU improves its performance. We analyzed our proposed Parametric ELU (PELU) in the context of vanishing gradients and provide a gradient-based optimization framework. We conducted several experiments on CIFAR-10/100 and ImageNet with different network architectures, such as NiN, Overfeat, All-CNN and ResNet. Our results show that our PELU has relative error improvements over ELU of 4.45% and 5.68% on CIFAR-10 and 100, and as much as 7.28% with only 0.0003% parameter increase on ImageNet. We also observed that Vgg using PELU tended to prefer activations saturating closer to zero, as in ReLU, except at the last layer, which saturated near -2. Finally, other presented results suggest that varying the shape of the activations during training along with the other parameters helps controlling vanishing gradients and bias shift, thus facilitating learning.
The authors present a parameterized variant of ELU and show that the proposed function helps to deal with vanishing gradients in deep networks better than existing non-linearities. They present both a theoretical analysis and a practical validation of the presented approach. Interesting observations on the statistics of the PELU parameters are reported. Perhaps an explanation of the observed evolution of the parameters could help better understand the non-linearity. It is hard to evaluate the experimental validation presented, given the difference in the number of parameters compared to other approaches.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nParametric Exponential Linear Unit for Deep Convolutional Neural Networks\nThe activation function is an important component in Convolutional Neural Networks (CNNs). For instance, recent breakthroughs in Deep Learning can be attributed to the Rectified Linear Unit (ReLU). Another recently proposed activation function, the Exponential Linear Unit (ELU), has the supplementary property of reducing bias shift without explicitly centering the values at zero. In this paper, we show that learning a parameterization of ELU improves its performance. We analyzed our proposed Parametric ELU (PELU) in the context of vanishing gradients and provide a gradient-based optimization framework. We conducted several experiments on CIFAR-10/100 and ImageNet with different network architectures, such as NiN, Overfeat, All-CNN and ResNet. Our results show that our PELU has relative error improvements over ELU of 4.45% and 5.68% on CIFAR-10 and 100, and as much as 7.28% with only 0.0003% parameter increase on ImageNet. We also observed that Vgg using PELU tended to prefer activations saturating closer to zero, as in ReLU, except at the last layer, which saturated near -2. Finally, other presented results suggest that varying the shape of the activations during training along with the other parameters helps controlling vanishing gradients and bias shift, thus facilitating learning.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nAuthors present a parameterized variant of ELU and show that the proposed function helps to deal with vanishing gradients in deep networks in a way better than existing non-linearities. They present both a theoretical analysis and practical validation for presented approach. Interesting observations on statistics of the PELU parameters are reported. Perhaps explanation for the observed evolution of parameters can help better understand the non-linearity. 
It is hard to evaluate the experimental validation presented, given the difference in the number of parameters compared to other approaches.", "role": "assistant" } ]
[ "Authors present a parameterized variant of ELU and show that the proposed function helps to deal with vanishing gradients in deep networks in a way better than existing non-linearities.", "They present both a theoretical analysis and practical validation for presented approach.", "Interesting observations on statistics of the PELU parameters are reported.", "Perhaps explanation for the observed evolution of parameters can help better understand the non-linearity.", "It is hard to evaluate the experimental validation presented given the difference in number of parameters compared to other approaches." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 5, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 3, "suggestion_and_solution": 1, "total": 5 }
0.2
0
0
1
0.4
0.2
0.6
0.2
5
1
0
0
1
1
1
1
1
2.6
1.589924
1.010076
Multiagent System for Layer Free Network
We propose a multiagent system that has feedforward networks as a subset while being free from the layer structure of the matrix-vector scheme. Deep networks are often compared to the brain neocortex or the visual perception system. One of the largest differences from the human brain is the use of matrix-vector multiplication based on a layer architecture. It would help us understand the way the human brain works if we managed to develop a good deep network model without the layer architecture while preserving its performance. The brain neocortex works as an aggregation of local-level interactions between neurons, which is more similar to a multiagent system consisting of autonomous, partially observing agents than to units aligned in column vectors and manipulated by a global-level algorithm. Therefore we suppose that replacing units with multiple agents is an effective approach for developing a more biologically plausible model while preserving compatibility with deep networks. Our method also has advantages in scalability and memory efficiency. We reimplemented the Stacked Denoising Autoencoder (SDAE) as a concrete instance of our multiagent system and verified its equivalence with the standard SDAE from both theoretical and empirical perspectives. Additionally, we also propose a variant of our multiagent SDAE named *Sparse Connect SDAE*, and show its computational advantage on the MNIST dataset.
The paper reframes feed forward neural networks as a multi-agent system. It seems to start from the wrong premise that multi-layer neural networks were created expressed as full matrix multiplications. This ignores the decades-long history of development of artificial neural networks, inspired by biological neurons, which thus started from units with arbitrarily sparse connectivity envisioned as computing in parallel. The matrix formulation is primarily a notational convenience; note also that when working with sparse matrix operations (or convolutions) zeros are neither stored nor multiplied by. Besides the change in terminology, essentially renaming neurons as agents, I find the paper brings nothing new and interesting to the table. Pulling in useful insights from a different community such as multi-agent systems would be most welcome. But for this to be compelling, it would have to be largely unheard-of elements in neural net research, with clear supporting empirical evidence that they significantly improve accuracy or efficiency. This is not achieved in the present paper.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMultiagent System for Layer Free Network\nWe propose a multiagent system that have feedforward networks as its subset while free from layer structure with matrix-vector scheme. Deep networks are often compared to the brain neocortex or visual perception system. One of the largest difference from human brain is the use of matrix-vector multiplication based on layer architecture. It would help understanding the way human brain works if we manage to develop good deep network model without the layer architecture while preserving their performance. The brain neocortex works as an aggregation of the local level interactions between neurons, which is rather similar to multiagent system consists of autonomous partially observing agents than units aligned in column vectors and manipulated by global level algorithm. Therefore we suppose that it is an effective approach for developing more biologically plausible model while preserving compatibility with deep networks to alternate units with multiple agents. Our method also has advantage in scalability and memory efficiency. We reimplemented Stacked Denoising Autoencoder(SDAE) as a concrete instance with our multiagent system and verified its equivalence with the standard SDAE from both theoritical and empirical perspectives. Additionary, we also proposed a variant of our multiagent SDAE named *Sparse Connect SDAE*, and showed its computational advantage with the MNIST dataset.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper reframes feed forward neural networks as a multi-agent system. It seems to start from the wrong premise that multi-layer neural networks were created expressed as full matrix multiplications. This ignores the decades-long history of development of artificial neural networks, inspired by biological neurons, which thus started from units with arbitrarily sparse connectivity envisioned as computing in parallel. 
The matrix formulation is primarily a notational convenience; note also that when working with sparse matrix operations (or convolutions) zeros are neither stored nor multiplied by. Besides the change in terminology, essentially renaming neurons as agents, I find the paper brings nothing new and interesting to the table. Pulling in useful insights from a different community such as multi-agent systems would be most welcome. But for this to be compelling, it would have to be largely unheard-of elements in neural net research, with clear supporting empirical evidence that they significantly improve accuracy or efficiency. This is not achieved in the present paper.", "role": "assistant" } ]
[ "The paper reframes feed forward neural networks as a multi-agent system.", "It seems to start from the wrong premise that multi-layer neural networks were created expressed as full matrix multiplications.", "This ignores the decades-long history of development of artificial neural networks, inspired by biological neurons, which thus started from units with arbitrarily sparse connectivity envisioned as computing in parallel.", "The matrix formulation is primarily a notational convenience; note also that when working with sparse matrix operations (or convolutions) zeros are neither stored not multiplied by.", "Besides the change in terminology, essentially renaming neurons agents, I find the paper brings nothing new and interesting to the table.", "Pulling in useful insights from a different communitiy such as multi-agent systems would be most welcome.", "But for this to be compelling, it would have to be largely unheard-of elements in neural net research, with clear supporting empirical evidence that they significantly improve accuracy or efficiency.", "This is not achieved in the present paper." ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 4, "materials_and_methods": 4, "praise": 1, "presentation_and_reporting": 2, "results_and_discussion": 1, "suggestion_and_solution": 2, "total": 8 }
0.375
0
0.5
0.5
0.125
0.25
0.125
0.25
8
1
0
1
1
1
1
1
1
2.125
1.730439
0.394561
Forum
KISSME was a promising metric learning approach that got lost in the deep learning flood. This is a shame since it had very good results at the time, but it was hindered by the quirks of the PCA pre-processing stage. In a nutshell, this paper reimagines this approach using deep convnets, gets good results on modern benchmark datasets and does not require meticulous preprocessing tricks. It therefore deserves to be published, since it rescues a promising approach from obscurity and does a competent job of illustrating its potential in a modern context, with all the necessary discussion of related work on old (LMNN, MMC, PCCA) and new (deep, triplet-loss based) metric learning approaches.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nForum\n", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nKISSME was a promising metric learning approach that got lost in the deep learning flood. This is a shame since it had very good results at the time, but it was hindered by the quirks of the PCA pre-processing stage. In a nutshell, this paper reimagines this approach using deep convnets, gets good results on modern benchmark datasets and does not require meticulous preprocessing tricks. It therefore deserves to be published, since it rescues a promising approach from obscurity and does a competent job of illustrating its potential in a modern context, with all the necessary discussion of related work on old (LMNN, MMC, PCCA) and new (deep, triplet-loss based) metric learning approaches.", "role": "assistant" } ]
[ "KISSME was a promising metric learning approach that got lost in the deep learning flood.", "This is a shame since it had very good results at the time, but it was hindered by the quirks of the PCA pre-processing stage.", "In a nutshell, this paper reimagines this approach using deep convnets, gets good results on modern benchmark datasets and does not require meticulous preprocessing tricks.", "It therefore deserves to be published, since it rescues a promising approach from obscurity and does a competent job of illustrating its potential in a modern context, with all the necessary discussion of related work on old (LMNN, MMC, PCCA) and new (deep, triplet-loss based) metric learning approaches." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 4, "praise": 4, "presentation_and_reporting": 0, "results_and_discussion": 3, "suggestion_and_solution": 1, "total": 4 }
0.25
0
0.5
1
1
0
0.75
0.25
4
1
0
1
1
1
0
1
1
3.75
2.471623
1.278377
Cortical-Inspired Open-Bigram Representation for Handwritten Word Recognition
Recent research in the cognitive process of reading hypothesized that we do not read words by sequentially recognizing letters, but rather by identifying open-bigrams, i.e. couples of letters that are not necessarily next to each other. In this paper, we evaluate a handwritten word recognition method based on an original open-bigram representation. We trained Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) to predict open-bigrams rather than characters, and we show that such models are able to learn the long-range, complicated and intertwined dependencies in the input signal that are necessary for the prediction. For decoding, we decomposed each word of a large vocabulary into the set of constituent bigrams, and applied a simple cosine similarity measure between this representation and the bagged RNN prediction to retrieve the vocabulary word. We compare this method to standard word recognition techniques based on sequential character recognition. Experiments are carried out on two public databases of handwritten words (Rimes and IAM), and the results with our bigram decoder are comparable to more conventional decoding methods based on sequences of letters.
This submission investigates the usability of cortical-inspired distant bigram representations for handwritten word recognition. Instead of generating neural network based posterior features for characters (optionally in local context), sets of posteriors for character bigrams of different lengths are used to represent words. The aim here is to investigate the viability of this approach and to compare it to the standard approach. Overall, the submission is well written, although information is missing w.r.t. the comparison between the proposed approach and the standard approach, see below. It would be desirable to see the model complexity of all the different models used here, i.e. the number of parameters used. Language models are not used here. Since the different models utilize different levels of context, language models can be expected to have a different effect on the different approaches. Therefore I suggest to include the use of language models into the evaluation. For your comparative experiments you use only 70% of the data by choosing longer words only. On the other hand, it is well known that the shorter words are more prone to result in misrecognitions. The question remains if this choice is advantageous for one of the tasks, or not - corresponding quantitative results should be provided to be able to better evaluate the effect of using this constrained corpus. Without clarification of this I would not readily agree that the error rates are competitive or better than the standard approach, as stated at the end of Sec. 5. I do see the motivation for introducing open-bigrams in an unordered way due to the corresponding evidence from cognitive research. However, decision-theoretically I wonder why the order should be given up, if the underlying sequential classification problem clearly is of a monotonous nature. It would be interesting to see an experiment where only the use of the order is varied, to differentiate the effect of the order from the effect of other aspects of the approach. End of page 1: “whole language method” - please explain what is meant by this. Page 6: define your notation for rnn_d(x,t). The number of targets for the RNNs modeling order 0 (unigrams effectively) and the RNNs modeling order 1 and larger are very much different. Therefore the precision and recall numbers in Table 2 do not seem to be readily comparable between order 0 and orders >=1. At least, the column for order 0 should be visually separated to highlight this. Minor comments: a spell check is recommended; p. 2: state-of-art -> state-of-the-art; p. 2: predict character sequence -> predict a character sequence; p. 3, top: Their approach include -> Their approach includes; p. 3, top: an handwritten -> a handwritten; p. 3, bottom: consituent -> constituent; p. 4, top: in classical approach -> in the classical approach; p. 4, top: transformed in a vector -> transformed into a vector; p. 5: were build -> were built; References: first author's name written wrongly: Thodore Bluche -> Theodore Bluche
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nCortical-Inspired Open-Bigram Representation for Handwritten Word Recognition\nRecent research in the cognitive process of reading hypothesized that we do not read words by sequentially recognizing letters, but rather by identifing open-bigrams, i.e. couple of letters that are not necessarily next to each other. In this paper, we evaluate an handwritten word recognition method based on original open-bigrams representation. We trained Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) to predict open-bigrams rather than characters, and we show that such models are able to learn the long-range, complicated and intertwined dependencies in the input signal, necessary to the prediction. For decoding, we decomposed each word of a large vocabulary into the set of constituent bigrams, and apply a simple cosine similarity measure between this representation and the bagged RNN prediction to retrieve the vocabulary word. We compare this method to standard word recognition techniques based on sequential character recognition. Experiments are carried out on two public databases of handwritten words (Rimes and IAM), an the results with our bigram decoder are comparable to more conventional decoding methods based on sequences of letters.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis submission investigates the usability of cortical-inspired distant bigram representations for handwritten word recognition. Instead of generating neural network based posterior features for character (optionally in local context), sets posterior for character bigrams of different length are used to represent words. The aim here is to investigate the viability of this approach and to compare to the standard approach. Overall, the submission is well written, although information is missing w.r.t. to the comparison between the proposed approach and the standard approach, see below. It would be desirable to see the model complexity of all the different models used here, i.e. the number of parameters used. 
Language models are not used here. Since the different models utilize different levels of context, language models can be expected to have a different effect on the different approaches. Therefore I suggest to include the use of language models into the evaluation. For your comparative experiments you use only 70% of the data by choosing longer words only. On the other hand, it is well known that the shorter words are more prone to result in misrecognitions. The question remains if this choice is advantageous for one of the tasks, or not - corresponding quantitative results should be provided to be able to better evaluate the effect of using this constrained corpus. Without clarification of this I would not readily agree that the error rates are competitive or better than the standard approach, as stated at the end of Sec. 5. I do see the motivation for introducing open-bigrams in an unordered way due to the corresponding evidence from cognitive research. However, decision-theoretically I wonder why the order should be given up, if the underlying sequential classification problem clearly is of a monotonous nature. It would be interesting to see an experiment where only the use of the order is varied, to differentiate the effect of the order from the effect of other aspects of the approach. End of page 1: “whole language method” - please explain what is meant by this. Page 6: define your notation for rnn_d(x,t). The number of targets for the RNNs modeling order 0 (unigrams effectively) and the RNNs modeling order 1 and larger are very much different. Therefore the precision and recall numbers in Table 2 do not seem to be readily comparable between order 0 and orders >=1. At least, the column for order 0 should be visually separated to highlight this. Minor comments: a spell check is recommended; p. 2: state-of-art -> state-of-the-art; p. 2: predict character sequence -> predict a character sequence; p. 3, top: Their approach include -> Their approach includes; p. 3, top: an handwritten -> a handwritten; p. 3, bottom: consituent -> constituent; p. 4, top: in classical approach -> in the classical approach; p. 4, top: transformed in a vector -> transformed into a vector; p. 5: were build -> were built; References: first author's name written wrongly: Thodore Bluche -> Theodore Bluche", "role": "assistant" } ]
[ "This submission investigates the usability of cortical-inspired distant bigram representations for handwritten word recognition.", "Instead of generating neural network based posterior features for character (optionally in local context), sets posterior for character bigrams of different length are used to represent words.", "The aim here is to investigate the viability of this approach and to compare to the standard approach.", "Overall, the submission is well written, although information is missing w.r.t.", "to the comparison between the proposed approach and the standard approach, see below.", "It would be desirable to see the model complexity of all the different models used here, i.e.", "the number of parameters used.", "Language models are not used here.", "Since the different models utilize different levels of context, language models can be expected to have a different effect on the different approaches.", "Therefore I suggest to include the use of language models into the evaluation.", "For your comparative experiments you use only 70% of the data by choosing longer words only.", "On the other hand, it is well known that the shorter words are more prone to result in misrecognitions.", "The question remains, if this choice is advantageous for one of the tasks, or not - corresponding quantitative results should be provided to be able to better evaluate the effect of using this constrained corpus.", "Without clarification of this I would not readily agree that the error rates are competitive or better than the standard approach, as stated at the end of Sec.", "5.", "I do see the motivation for introducing open-bigrams in an unordered way due to the corresponding evidence from cognitive research.", "However, decision theoretically I wonder, why the order should be given up, if the underlying sequential classification problem clearly is of a monotonous nature.", "It would be interesting to see an experiment, where only the use of the order is varied, to differentiate the effect of the order from the effect of other aspects of the approach.", "End of page 1: *whole language method* - please explain what is meant by this.", "Page 6: define your notation for rnn_d(x,t).", "The number of target for the RNNs modeling order 0 (unigrams effectively) and the RNNs modeling order 1 and larger are very much different.", "Therefore the precision and recall numbers in Table 2 do not seem to be readily comparable between order 0 and orders >=1.", "At least, the column for order 0 should be visually separated to highlight this.", "Minor comments: a spell check is recommended p. 2: state-of-art -> state-of-the-art p. 2: predict character sequence -> predict a character sequence p. 3, top: Their approach include -> Their approach includes p. 3, top: an handwritten -> a handwritten p. 3, bottom: consituent -> constituent p. 4, top: in classical approach -> in the classical approach p. 4, top: transformed in a vector -> transformed into a vector p. 5: were build -> were built References: first authors name written wrongly: Thodore Bluche -> Theodore Bluche" ]
{ "criticism": 4, "example": 4, "importance_and_relevance": 1, "materials_and_methods": 17, "praise": 1, "presentation_and_reporting": 13, "results_and_discussion": 3, "suggestion_and_solution": 8, "total": 24 }
0.166667
0.166667
0.041667
0.708333
0.041667
0.541667
0.125
0.333333
24
1
1
1
1
1
1
1
1
2.125
0.215326
1.909674
Representing inferential uncertainty in deep neural networks through sampling
As deep neural networks (DNNs) are applied to increasingly challenging problems, they will need to be able to represent their own uncertainty. Modelling uncertainty is one of the key features of Bayesian methods. Bayesian DNNs that use dropout-based variational distributions and scale to complex tasks have recently been proposed. We evaluate Bayesian DNNs trained with Bernoulli or Gaussian multiplicative masking of either the units (dropout) or the weights (dropconnect). We compare these Bayesian DNNs ability to represent their uncertainty about their outputs through sampling during inference. We tested the calibration of these Bayesian fully connected and convolutional DNNs on two visual inference tasks (MNIST and CIFAR-10). By adding different levels of Gaussian noise to the test images, we assessed how these DNNs represented their uncertainty about regions of input space not covered by the training set. These Bayesian DNNs represented their own uncertainty more accurately than traditional DNNs with a softmax output. We find that sampling of weights, whether Gaussian or Bernoulli, led to more accurate representation of uncertainty compared to sampling of units. However, sampling units using either Gaussian or Bernoulli dropout led to increased convolutional neural network (CNN) classification accuracy. Based on these findings we use both Bernoulli dropout and Gaussian dropconnect concurrently, which approximates the use of a spike-and-slab variational distribution. We find that networks with spike-and-slab sampling combine the advantages of the other methods: they classify with high accuracy and robustly represent the uncertainty of their classifications for all tested architectures.
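The test-time sampling this abstract evaluates can be pictured in a few lines of PyTorch: leave the dropout masks stochastic at inference and average the softmax outputs over several passes. The toy architecture, the sample count, and the entropy summary below are placeholder assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(256, 10))

def mc_predict(model, x, n_samples=50):
    """Average the softmax over stochastic forward passes (dropout left on)."""
    model.train()  # keeps nn.Dropout sampling masks at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean = probs.mean(0)                                      # predictive mean
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(-1)   # one possible uncertainty summary
    return mean, entropy
```

Note that `model.train()` would also switch any batch-norm layers to training mode, so a fuller implementation should enable only the dropout modules.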
This paper investigates whether the variational inference interpretation of dropout, as introduced in [Gal & Ghahramani (2016), and Kingma et al (2015)], can lead to good estimates of model uncertainty outside of the training distribution. This is an area of research that indeed warrants more experimental investigation. One very interesting finding is that MC integration leads to much better calibration, and thus probably much better out-of-sample prediction, than the more usual approach. Critique: - As explained in Kingma et al (2015), when using continuous posterior distributions over the weights, the dropout rate can be optimized, leading to better regularization. While the paper is cited in the introduction, this adaptive form of dropout is missing from the experiments, without clarification. - Only the dropout rate p=0.5 was used across experiments, while the optimal rate is problem dependent, as found by earlier published work. - No new ideas are presented, and the analysis in the paper is quite limited. As it stands, this would be more appropriate for a workshop.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRepresenting inferential uncertainty in deep neural networks through sampling\nAs deep neural networks (DNNs) are applied to increasingly challenging problems, they will need to be able to represent their own uncertainty. Modelling uncertainty is one of the key features of Bayesian methods. Bayesian DNNs that use dropout-based variational distributions and scale to complex tasks have recently been proposed. We evaluate Bayesian DNNs trained with Bernoulli or Gaussian multiplicative masking of either the units (dropout) or the weights (dropconnect). We compare these Bayesian DNNs ability to represent their uncertainty about their outputs through sampling during inference. We tested the calibration of these Bayesian fully connected and convolutional DNNs on two visual inference tasks (MNIST and CIFAR-10). By adding different levels of Gaussian noise to the test images, we assessed how these DNNs represented their uncertainty about regions of input space not covered by the training set. These Bayesian DNNs represented their own uncertainty more accurately than traditional DNNs with a softmax output. We find that sampling of weights, whether Gaussian or Bernoulli, led to more accurate representation of uncertainty compared to sampling of units. However, sampling units using either Gaussian or Bernoulli dropout led to increased convolutional neural network (CNN) classification accuracy. Based on these findings we use both Bernoulli dropout and Gaussian dropconnect concurrently, which approximates the use of a spike-and-slab variational distribution. We find that networks with spike-and-slab sampling combine the advantages of the other methods: they classify with high accuracy and robustly represent the uncertainty of their classifications for all tested architectures.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper investigates whether the variational inference interpretation of dropout, as introduced in [Gal & Ghahramani (2016), and Kingma et al (2015)], can lead to good estimates of model uncertainty outside of the training distribution. This is an area of research that indeed warrants more experimental investigation. One very interesting finding is that MC integration leads to much better calibration, and thus probably much better out-of-sample prediction, than the more usual approach. Critique: - As explained in Kingma et al (2015), when using continuous posterior distributions over the weights, the dropout rate can be optimized, leading to better regularization. While the paper is cited in the introduction, this adaptive form of dropout is missing from the experiments, without clarification. - Only the dropout rate p=0.5 was used across experiments, while the optimal rate is problem dependent, as found by earlier published work. - No new ideas are presented, and the analysis in the paper is quite limited. As it stands, this would be more appropriate for a workshop.", "role": "assistant" } ]
[ "This paper investigates whether the variational inference interpretation of dropout, as introduced in [Gal & Ghahramani (2016), and Kingma et al (2015)], can lead to good estimates of mode uncertainty outside of the training distribution.", "This is an area of research that indeed warrants more experimental investigation.", "One very interesting finding is that MC integration leads to much calibration, thus probably much better out-of-sample prediction, than the more usual.", "Critique: - As explained in Kingma et al (2015), when using continuous posterior distributions over the weights, the dropout rate can be optimized, leading to better regularization.", "While the paper is cited in the introduction, this adaptive form of dropout is missing from experiments, without clarification.", "- Only the dropout rate p=0.5 was used across experiments, while the optimal rate is problem dependent, as found by earlier published work.", "- No new ideas are presented, and the analysis in the paper is quite limited.", "As it stands, this would be more appropriate for a workshop." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 7, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 3, "suggestion_and_solution": 2, "total": 8 }
0.25
0
0.25
0.875
0.25
0.125
0.375
0.25
8
1
0
1
1
1
1
1
1
2.375
1.980439
0.394561
Making Stochastic Neural Networks from Deterministic Ones
It has been believed that stochastic feedforward neural networks (SFNN) have several advantages beyond deterministic deep neural networks (DNN): they have more expressive power allowing multi-modal mappings and regularize better due to their stochastic nature. However, training SFNN is notoriously harder. In this paper, we aim at developing efficient training methods for large-scale SFNN, in particular using known architectures and pre-trained parameters of DNN. To this end, we propose a new intermediate stochastic model, called Simplified-SFNN, which can be built upon any baseline DNN and approximates certain SFNN by simplifying its upper latent units above stochastic ones. The main novelty of our approach is in establishing the connection between three models, i.e., DNN -> Simplified-SFNN -> SFNN, which naturally leads to an efficient training procedure of the stochastic models utilizing pre-trained parameters of DNN. Using several popular DNNs, we show how they can be effectively transferred to the corresponding stochastic models for both multi-modal and classification tasks on MNIST, TFD, CIFAR-10, CIFAR-100 and SVHN datasets. In particular, our stochastic model built from the wide residual network has 28 layers and 36 million parameters, where the former consistently outperforms the latter for the classification tasks on CIFAR-10 and CIFAR-100 due to its stochastic regularizing effect.
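To make the deterministic-to-stochastic transfer concrete, here is a deliberately reduced PyTorch sketch of the general idea: reuse pretrained weights but sample binary hidden units and average over samples. This is not the paper's Simplified-SFNN construction; the sigmoid/Bernoulli choice and the sample count are assumptions for illustration only.

```python
import torch

def stochastic_forward(x, W1, b1, W2, b2, n_samples=10):
    """Forward pass with one stochastic hidden layer, Monte Carlo averaged."""
    p = torch.sigmoid(x @ W1 + b1)                                   # firing probabilities from pretrained weights
    outs = [torch.bernoulli(p) @ W2 + b2 for _ in range(n_samples)]  # sample binary hidden units
    return torch.stack(outs).mean(0)                                 # estimate of the expected output
```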
Strengths - interesting to explore the connection between ReLU DNN and simplified SFNN - a small task (MNIST) is used to demonstrate the usefulness of the proposed training methods experimentally - the proposed, multi-stage training methods are simple to implement (despite lacking theoretical rigor) Weaknesses - no results are reported on real tasks with a large training set - no clear exploration of the scalability of the learning methods when the training data becomes larger - when the hidden layers become stochastic, the model shares uncertainty representation with deep Bayes networks or deep generative models (Deep Discriminative and Generative Models for Pattern Recognition, book chapter in “Pattern Recognition and Computer Vision”, November 2015). Such connections should be discussed, especially wrt the use of uncertainty representation to benefit pattern recognition (i.e. supervised learning via Bayes rule) and to benefit the use of domain knowledge such as “explaining away”. - would like to see connections with variational autoencoder models and training, which is also stochastic with hidden layers
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMaking Stochastic Neural Networks from Deterministic Ones\nIt has been believed that stochastic feedforward neural networks (SFNN) have several advantages beyond deterministic deep neural networks (DNN): they have more expressive power allowing multi-modal mappings and regularize better due to their stochastic nature. However, training SFNN is notoriously harder. In this paper, we aim at developing efficient training methods for large-scale SFNN, in particular using known architectures and pre-trained parameters of DNN. To this end, we propose a new intermediate stochastic model, called Simplified-SFNN, which can be built upon any baseline DNN and approximates certain SFNN by simplifying its upper latent units above stochastic ones. The main novelty of our approach is in establishing the connection between three models, i.e., DNN -> Simplified-SFNN -> SFNN, which naturally leads to an efficient training procedure of the stochastic models utilizing pre-trained parameters of DNN. Using several popular DNNs, we show how they can be effectively transferred to the corresponding stochastic models for both multi-modal and classification tasks on MNIST, TFD, CIFAR-10, CIFAR-100 and SVHN datasets. In particular, our stochastic model built from the wide residual network has 28 layers and 36 million parameters, where the former consistently outperforms the latter for the classification tasks on CIFAR-10 and CIFAR-100 due to its stochastic regularizing effect.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>\n\n\n\nStrengths - interesting to explore the connection between ReLU DNN and simplified SFNN - a small task (MNIST) is used to demonstrate the usefulness of the proposed training methods experimentally - the proposed, multi-stage training methods are simple to implement (despite lacking theoretical rigor) Weaknesses - no results are reported on real tasks with a large training set - no clear exploration of the scalability of the learning methods when the training data becomes larger - when the hidden layers become stochastic, the model shares uncertainty representation with deep Bayes networks or deep generative models (Deep Discriminative and Generative Models for Pattern Recognition, book chapter in “Pattern Recognition and Computer Vision”, November 2015). Such connections should be discussed, especially wrt the use of uncertainty representation to benefit pattern recognition (i.e. supervised learning via Bayes rule) and to benefit the use of domain knowledge such as “explaining away”. - would like to see connections with variational autoencoder models and training, which is also stochastic with hidden layers", "role": "assistant" } ]
[ "Strengths - interesting to explore the connection between ReLU DNN and simplified SFNN - small task (MNIST) is used to demonstrate the usefulness of the proposed training methods experimentally - the proposed, multi-stage training methods are simple to implement (despite lacking theoretical rigor) Weaknesses -no results are reported on real tasks with large training set -not clear exploration on the scalability of the learning methods when training data becomes larger -when the hidden layers become stochastic, the model shares uncertainty representation with deep Bayes networks or deep generative models (Deep Discriminative and Generative Models for Pattern Recognition , book chapter in “Pattern Recognition and Computer Vision”, November 2015, Download PDF).", "Such connections should be discussed, especially wrt the use of uncertainty representation to benefit pattern recognition (i.e.", "supervised learning via Bayes rule) and to benefit the use of domain knowledge such as “explaining away”.", "-would like to see connections with variational autoencoder models and training, which is also stochastic with hidden layers" ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 2, "praise": 1, "presentation_and_reporting": 1, "results_and_discussion": 2, "suggestion_and_solution": 2, "total": 4 }
0.25
0
0.25
0.5
0.25
0.25
0.5
0.5
4
1
0
1
1
1
1
1
1
2.5
1.221623
1.278377
ReasoNet: Learning to Stop Reading in Machine Comprehension
Teaching a computer to read a document and answer general questions pertaining to the document is a challenging yet unsolved problem. In this paper, we describe a novel neural network architecture called Reasoning Network ({ReasoNet}) for machine comprehension tasks. ReasoNet makes use of multiple turns to effectively exploit and then reason over the relation among queries, documents, and answers. Different from previous approaches using a fixed number of turns during inference, ReasoNet introduces a termination state to relax this constraint on the reasoning depth. With the use of reinforcement learning, ReasoNet can dynamically determine whether to continue the comprehension process after digesting intermediate results, or to terminate reading when it concludes that existing information is adequate to produce an answer. ReasoNet has achieved state-of-the-art performance in machine comprehension datasets, including unstructured CNN and Daily Mail datasets, and a structured Graph Reachability dataset.
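A caricature of the control flow this abstract describes: iterate over reasoning steps and stop once a learned termination gate fires. The GRU cell, the greedy 0.5 threshold, and the `memory_readout` callback are illustrative assumptions; the paper trains the stochastic stop decision with reinforcement learning rather than this greedy inference rule.

```python
import torch
import torch.nn as nn

class TerminationLoop(nn.Module):
    """Sketch of a reasoning loop with a termination gate (batch size 1 for clarity)."""
    def __init__(self, dim, max_steps=5):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)
        self.gate = nn.Linear(dim, 1)        # termination probability head
        self.max_steps = max_steps

    def forward(self, memory_readout, state):
        for _ in range(self.max_steps):
            state = self.cell(memory_readout(state), state)   # attend to memory, update state
            if torch.sigmoid(self.gate(state)).item() > 0.5:  # greedy stop rule (assumption)
                break
        return state
```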
The paper aims to consolidate some recent literature in simple types of “reading comprehension” tasks involving matching questions to answers to be found in a passage, and then to explore the types of structure learned by these models and propose modifications. These reading comprehension datasets such as CNN/Daily Mail are on the simpler side because they do not generally involve chains of reasoning over multiple pieces of supporting evidence as can be found in datasets like MCTest. Many models have been proposed for this task, and the paper breaks down these models into “aggregation readers” and “explicit reference readers”. The authors show that the aggregation readers organize their hidden states into a predicate structure which allows them to mimic the explicit reference readers. The authors then experiment with adding linguistic features, including reference features, to the existing models to improve performance. I appreciate the re-naming and re-writing of the paper to make it more clear that the aggregation readers are specifically learning a predicate structure, as well as the inclusion of results about dimensionality of the symbol space. Further, I think the effort to organize and categorize several different reading comprehension models into broader classes is useful, as the field has been producing many such models and the landscape is unclear. The concerns with this paper are that the predicate structure demonstrated is fairly simple, and it is not clear that it provides insight towards the development of better models in the future, since the “explicit reference readers” need not learn it, and the CNN/Daily Mail dataset has very little headroom left as demonstrated by Chen et al. 2016. The desire for “dramatic improvements in performance” mentioned in the discussion section probably cannot be achieved on these datasets. More complex datasets would probably involve multi-hop inference which this paper does not discuss. Further, the message of the paper is a bit scattered and hard to parse, and could benefit from a bit more focus. I think that with the explosion of various competing neural network models for NLP tasks, contributions like this one which attempt to organize and analyze the landscape are valuable, but that this paper might be better suited for an NLP conference or journal such as TACL.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nReasoNet: Learning to Stop Reading in Machine Comprehension\nTeaching a computer to read a document and answer general questions pertaining to the document is a challenging yet unsolved problem. In this paper, we describe a novel neural network architecture called Reasoning Network ({ReasoNet}) for machine comprehension tasks. ReasoNet makes use of multiple turns to effectively exploit and then reason over the relation among queries, documents, and answers. Different from previous approaches using a fixed number of turns during inference, ReasoNet introduces a termination state to relax this constraint on the reasoning depth. With the use of reinforcement learning, ReasoNet can dynamically determine whether to continue the comprehension process after digesting intermediate results, or to terminate reading when it concludes that existing information is adequate to produce an answer. ReasoNet has achieved state-of-the-art performance in machine comprehension datasets, including unstructured CNN and Daily Mail datasets, and a structured Graph Reachability dataset.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper aims to consolidate some recent literature in simple types of *reading comprehension* tasks involving matching questions to answers to be found in a passage, and then to explore the types of structure learned by these models and propose modifications. These reading comprehension datasets such as CNN/Daily Mail are on the simpler side because they do not generally involve chains of reasoning over multiple pieces of supporting evidence as can be found in datasets like MCTest. Many models have been proposed for this task, and the paper breaks down these models into *aggregation readers* and *explicit reference readers.* The authors show that the aggregation readers organize their hidden states into a predicate structure which allows them to mimic the explicit reference readers. 
The authors then experiment with adding linguistic features, including reference features, to the existing models to improve performance. I appreciate the re-naming and re-writing of the paper to make it more clear that the aggregation readers are specifically learning a predicate structure, as well as the inclusion of results about dimensionality of the symbol space. Further, I think the effort to organize and categorize several different reading comprehension models into broader classes is useful, as the field has been producing many such models and the landscape is unclear. The concerns with this paper are that the predicate structure demonstrated is fairly simple, and it is not clear that it provides insight towards the development of better models in the future, since the “explicit reference readers” need not learn it, and the CNN/Daily Mail dataset has very little headroom left as demonstrated by Chen et al. 2016. The desire for “dramatic improvements in performance” mentioned in the discussion section probably cannot be achieved on these datasets. More complex datasets would probably involve multi-hop inference which this paper does not discuss. Further, the message of the paper is a bit scattered and hard to parse, and could benefit from a bit more focus. I think that with the explosion of various competing neural network models for NLP tasks, contributions like this one which attempt to organize and analyze the landscape are valuable, but that this paper might be better suited for an NLP conference or journal such as TACL.", "role": "assistant" } ]
[ "The paper aims to consolidate some recent literature in simple types of *reading comprehension* tasks involving matching questions to answers to be found in a passage, and then to explore the types of structure learned by these models and propose modifications.", "These reading comprehension datasets such as CNN/Daily Mail are on the simpler side because they do not generally involve chains of reasoning over multiple pieces of supporting evidence as can be found in datasets like MCTest.", "Many models have been proposed for this task, and the paper breaks down these models into *aggregation readers* and *explicit reference readers.", "* The authors show that the aggregation readers organize their hidden states into a predicate structure which allows them to mimic the explicit reference readers.", "The authors then experiment with adding linguistic features, including reference features, to the existing models to improve performance.", "I appreciate the re-naming and re-writing of the paper to make it more clear that the aggregation readers are specifically learning a predicate structure, as well as the inclusion of results about dimensionality of the symbol space.", "Further, I think the effort to organize and categorize several different reading comprehension models into broader classes is useful, as the field has been producing many such models and the landscape is unclear.", "The concerns with this paper are that the predicate structure demonstrated is fairly simple, and it is not clear that it provides insight towards the development of better models in the future, since the *explicit reference readers* need not learn it, and the CNN/Daily Mail dataset has very little headroom left as demonstrated by Chen et al.", "2016.", "The desire for *dramatic improvements in performance* mentioned in the discussion section probably cannot be achieved on these datasets.", "More complex datasets would probably involve multi-hop inference which this paper does not discuss.", "Further, the message of the paper is a bit scattered and hard to parse, and could benefit from a bit more focus.", "I think that with the explosion of various competing neural network models for NLP tasks, contributions like this one which attempt to organize and analyze the landscape are valuable, but that this paper might be better suited for an NLP conference or journal such as TACL." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 9, "praise": 3, "presentation_and_reporting": 4, "results_and_discussion": 3, "suggestion_and_solution": 5, "total": 13 }
0.153846
0
0.230769
0.692308
0.230769
0.307692
0.230769
0.384615
13
1
0
1
1
1
1
1
1
2.230769
2.230769
0
A Way out of the Odyssey: Analyzing and Combining Recent Insights for LSTMs
LSTMs have become a basic building block for many deep NLP models. In recent years, many improvements and variations have been proposed for deep sequence models in general, and LSTMs in particular. We propose and analyze a series of architectural modifications for LSTM networks resulting in improved performance for text classification datasets. We observe compounding improvements on traditional LSTMs using Monte Carlo test-time model averaging, deep vector averaging (DVA), and residual connections, along with four other suggested modifications. Our analysis provides a simple, reliable, and high quality baseline model.
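Of the modifications this abstract lists, deep vector averaging is the easiest to picture: pool the word embeddings over the sequence and concatenate the result with the LSTM's final state before classifying. The sketch below is one plausible reading with placeholder dimensions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class DVAClassifier(nn.Module):
    """LSTM classifier augmented with an averaged-embedding shortcut (a DVA-style sketch)."""
    def __init__(self, vocab_size, embed_dim=300, hidden=512, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden + embed_dim, classes)

    def forward(self, tokens):                 # tokens: (batch, seq)
        e = self.embed(tokens)                 # (batch, seq, embed_dim)
        _, (h, _) = self.lstm(e)
        avg = e.mean(dim=1)                    # bag-of-embeddings summary
        return self.out(torch.cat([h[-1], avg], dim=-1))
```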
This paper presents three improvements to the standard LSTM architecture used in many neural NLP models: Monte Carlo averaging, embed average pooling, and residual connections. Each of the modifications is trivial to implement, so the paper is definitely of interest to any NLP researchers experimenting with deep learning. With that said, I am concerned about the experiments and their results. The residual connections do not seem to consistently help performance; on SST the vertical residuals help but the lateral residuals hurt, and on IMDB it is the opposite. More fundamentally, there need to be more tasks than just sentiment analysis here. I'm not quite sure why the paper's focus is on text classification, as any NLP task using an LSTM encoder could conceivably benefit from these modifications. It would be great to see a huge variety of tasks like QA, MT, etc., which would really make the paper much stronger. At this point, while the experiments that are included in the paper are very thorough and the analysis is interesting, there need to be more tasks to convince me that the modifications generalize, so I don't think the paper is ready for publication.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nA Way out of the Odyssey: Analyzing and Combining Recent Insights for LSTMs\nLSTMs have become a basic building block for many deep NLP models. In recent years, many improvements and variations have been proposed for deep sequence models in general, and LSTMs in particular. We propose and analyze a series of architectural modifications for LSTM networks resulting in improved performance for text classification datasets. We observe compounding improvements on traditional LSTMs using Monte Carlo test-time model averaging, deep vector averaging (DVA), and residual connections, along with four other suggested modifications. Our analysis provides a simple, reliable, and high quality baseline model.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents three improvements to the standard LSTM architecture used in many neural NLP models: Monte Carlo averaging, embed average pooling, and residual connections. Each of the modifications is trivial to implement, so the paper is definitely of interest to any NLP researchers experimenting with deep learning. With that said, I am concerned about the experiments and their results. The residual connections do not seem to consistently help performance; on SST the vertical residuals help but the lateral residuals hurt, and on IMDB it is the opposite. More fundamentally, there need to be more tasks than just sentiment analysis here. I*m not quite sure why the paper*s focus is on text classification, as any NLP task using an LSTM encoder could conceivably benefit from these modifications. It would be great to see a huge variety of tasks like QA, MT, etc., which would really make the paper much stronger. At this point, while the experiments that are included in the paper are very thorough and the analysis is interesting, there need to be more tasks to convince me that the modifications generalize, so I don*t think the paper is ready for publication.", "role": "assistant" } ]
[ "This paper presents three improvements to the standard LSTM architecture used in many neural NLP models: Monte Carlo averaging, embed average pooling, and residual connections.", "Each of the modifications is trivial to implement, so the paper is definitely of interest to any NLP researchers experimenting with deep learning.", "With that said, I am concerned about the experiments and their results.", "The residual connections do not seem to consistently help performance; on SST the vertical residuals help but the lateral residuals hurt, and on IMDB it is the opposite.", "More fundamentally, there need to be more tasks than just sentiment analysis here.", "I*m not quite sure why the paper*s focus is on text classification, as any NLP task using an LSTM encoder could conceivably benefit from these modifications.", "It would be great to see a huge variety of tasks like QA, MT, etc., which would really make the paper much stronger.", "At this point, while the experiments that are included in the paper are very thorough and the analysis is interesting, there need to be more tasks to convince me that the modifications generalize, so I don*t think the paper is ready for publication." ]
{ "criticism": 4, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 6, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 2, "suggestion_and_solution": 3, "total": 8 }
0.5
0
0.25
0.75
0.25
0.125
0.25
0.375
8
1
0
1
1
1
1
1
1
2.5
2.105439
0.394561
Binary Paragraph Vectors
Recently Le & Mikolov described two log-linear models, called Paragraph Vector, that can be used to learn state-of-the-art distributed representations of documents. Inspired by this work, we present Binary Paragraph Vector models: simple neural networks that learn short binary codes for fast information retrieval. We show that binary paragraph vectors outperform autoencoder-based binary codes, despite using fewer bits. We also evaluate their precision in transfer learning settings, where binary codes are inferred for documents unrelated to the training corpus. Results from these experiments indicate that binary paragraph vectors can capture semantics relevant for various domain-specific documents. Finally, we present a model that simultaneously learns short binary codes and longer, real-valued representations. This model can be used to rapidly retrieve a short list of highly relevant documents from a large document collection.
This work proposes a model that can learn short binary codes via paragraph vectors to allow fast retrieval of documents. The experiments show that this is superior to semantic hashing. The approach is simple and not very technically interesting. For a code size of 128, the loss compared to a continuous paragraph vector seems moderate. The paper asks the reader to refer to the Salakhutdinov and Hinton paper for the baseline numbers but I think they should be placed in the paper for easy reference. For simplicity, the paper could show the precision at 12.5%, 25% and 50% recall for the proposed model and semantic hashing. It also seems that the semantic hashing paper shows results on RCV2 and not RCV1. RCV1 is twice the size of RCV2 and is English only so it seems that these results are not comparable. It would be interesting to see how many binary bits are required to match the performance of the continuous representation. A comparison to the continuous PV-DBOW trained with bigrams would also make it a more fair comparison. Figure 7 in the paper shows a loss from using the real-binary PV-DBOW. It seems that if a user needed high quality ranking after the retrieval stage and they could afford the extra space and computation, then it would be better for them to use a standard PV-DBOW to obtain the continuous representation at that stage. Minor comments: First line after the introduction: is sheer -> is the sheer; 4th line from the bottom of P1: words embeddings -> word embeddings; In table 1: what does code size refer to for PV-DBOW (is this the number of elements in the continuous vector?); 5th line from the bottom of P5: W -> We; 5th line after section 3.1: covers wide -> covers a wide
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nBinary Paragraph Vectors\nRecently Le & Mikolov described two log-linear models, called Paragraph Vector, that can be used to learn state-of-the-art distributed representations of documents. Inspired by this work, we present Binary Paragraph Vector models: simple neural networks that learn short binary codes for fast information retrieval. We show that binary paragraph vectors outperform autoencoder-based binary codes, despite using fewer bits. We also evaluate their precision in transfer learning settings, where binary codes are inferred for documents unrelated to the training corpus. Results from these experiments indicate that binary paragraph vectors can capture semantics relevant for various domain-specific documents. Finally, we present a model that simultaneously learns short binary codes and longer, real-valued representations. This model can be used to rapidly retrieve a short list of highly relevant documents from a large document collection.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis work proposes a model that can learn short binary codes via paragraph vectors to allow fast retrieval of documents. The experiments show that this is superior to semantic hashing. The approach is simple and not very technically interesting. For a code size of 128, the loss compared to a continuous paragraph vector seems moderate. The paper asks the reader to refer to the Salakhutdinov and Hinton paper for the baseline numbers but I think they should be placed in the paper for easy reference. For simplicity, the paper could show the precision at 12.5%, 25% and 50% recall for the proposed model and semantic hashing. It also seems that the semantic hashing paper shows results on RCV2 and not RCV1. RCV1 is twice the size of RCV2 and is English only so it seems that these results are not comparable. It would be interesting to see how many binary bits are required to match the performance of the continuous representation. 
A comparison to the continuous PV-DBOW trained with bigrams would also make it a more fair comparison. Figure 7 in the paper shows a loss from using the real-binary PV-DBOW. It seems that if a user needed high quality ranking after the retrieval stage and they could afford the extra space and computation, then it would be better for them to use a standard PV-DBOW to obtain the continuous representation at that stage. Minor comments: First line after the introduction: is sheer -> is the sheer 4th line from the bottom of P1: words embeddings -> word embeddings In table 1: What does code size refer to for PV-DBOW? Is this the number of elements in the continuous vector? 5th line from the bottom of P5: W -> We 5th line after section 3.1: covers wide -> covers a wide", "role": "assistant" } ]
[ "This work proposes a model that can learn short binary codes via paragraph vectors to allow fast retrieval of documents.", "The experiments show that this is superior to semantic hashing.", "The approach is simple and not very technically interesting.", "For a code size of 128, the loss compared to a continuous paragraph vector seems moderate.", "The paper asks the reader to refer to the Salakhutdinov and Hinton paper for the baseline numbers but I think they should be placed in the paper for easy reference.", "For simplicity, the paper could show the precision at 12.5%, 25% and 50% recall for the proposed model and semantic hashing.", "It also seems that the semantic hashing paper shows results on RCV2 and not RCV1.", "RCV1 is twice the size of RCV2 and is English only so it seems that these results are not comparable.", "It would be interesting to see how many binary bits are required to match the performance of the continuous representation.", "A comparison to the continuous PV-DBOW trained with bigrams would also make it a more fair comparison.", "Figure 7 in the paper shows a loss from using the real-binary PV-DBOW.", "It seems that if a user needed high quality ranking after the retrieval stage and they could afford the extra space and computation, then it would be better for them to use a standard PV-DBOW to obtain the continuous representation at that stage.", "Minor comments: First line after the introduction: is sheer -> is the sheer 4th line from the bottom of P1: words embeddings -> word embeddings In table 1: What does code size refer to for PV-DBOW?", "Is this the number of elements in the continuous vector?", "5th line from the bottom of P5: W -> We 5th line after section 3.1: covers wide -> covers a wide" ]
{ "criticism": 2, "example": 3, "importance_and_relevance": 1, "materials_and_methods": 11, "praise": 0, "presentation_and_reporting": 8, "results_and_discussion": 2, "suggestion_and_solution": 5, "total": 15 }
0.133333
0.2
0.066667
0.733333
0
0.533333
0.133333
0.333333
15
1
1
1
1
0
1
1
1
2.133333
2.070204
0.06313
Generative Paragraph Vector
The recently introduced Paragraph Vector is an efficient method for learning high-quality distributed representations for pieces of texts. However, an inherent limitation of Paragraph Vector is lack of ability to infer distributed representations for texts outside of the training set. To tackle this problem, we introduce a Generative Paragraph Vector, which can be viewed as a probabilistic extension of the Distributed Bag of Words version of Paragraph Vector with a complete generative process. With the ability to infer the distributed representations for unseen texts, we can further incorporate text labels into the model and turn it into a supervised version, namely Supervised Generative Paragraph Vector. In this way, we can leverage the labels paired with the texts to guide the representation learning, and employ the learned model for prediction tasks directly. Experiments on five text classification benchmark collections show that both model architectures can yield superior classification performance over the state-of-the-art counterparts.
While this paper has some decent accuracy numbers, it is hard to argue for acceptance given the following: 1) motivation based on the incorrect assumption that the Paragraph Vector wouldn't work on unseen data 2) Numerous basic formatting and BibTeX citation issues. Lack of novelty of yet another standard directed LDA-like bag of words/bigram model.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nGenerative Paragraph Vector\nThe recently introduced Paragraph Vector is an efficient method for learning high-quality distributed representations for pieces of texts. However, an inherent limitation of Paragraph Vector is lack of ability to infer distributed representations for texts outside of the training set. To tackle this problem, we introduce a Generative Paragraph Vector, which can be viewed as a probabilistic extension of the Distributed Bag of Words version of Paragraph Vector with a complete generative process. With the ability to infer the distributed representations for unseen texts, we can further incorporate text labels into the model and turn it into a supervised version, namely Supervised Generative Paragraph Vector. In this way, we can leverage the labels paired with the texts to guide the representation learning, and employ the learned model for prediction tasks directly. Experiments on five text classification benchmark collections show that both model architectures can yield superior classification performance over the state-of-the-art counterparts.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nWhile this paper has some decent accuracy numbers, it is hard to argue for acceptance given the following: 1) motivation based on the incorrect assumption that the Paragraph Vector wouldn*t work on unseen data 2) Numerous basic formatting and Bibtex citation issues. Lack of novelty of yet another standard directed LDA-like bag of words/bigram model.", "role": "assistant" } ]
[ "While this paper has some decent accuracy numbers, it is hard to argue for acceptance given the following: 1) motivation based on the incorrect assumption that the Paragraph Vector wouldn*t work on unseen data 2) Numerous basic formatting and Bibtex citation issues.", "Lack of novelty of yet another standard directed LDA-like bag of words/bigram model." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 2, "praise": 1, "presentation_and_reporting": 2, "results_and_discussion": 0, "suggestion_and_solution": 0, "total": 2 }
1
0
0
1
0.5
1
0
0
2
1
0
0
1
1
1
0
0
3.5
1.590326
1.909674
Non-linear Dimensionality Regularizer for Solving Inverse Problems
Consider an ill-posed inverse problem of estimating causal factors from observations, one of which is known to lie near some (unknown) low-dimensional, non-linear manifold expressed by a predefined Mercer-kernel. Solving this problem requires simultaneous estimation of these factors and learning the low-dimensional representation for them. In this work, we introduce a novel non-linear dimensionality regularization technique for solving such problems without pre-training. We re-formulate Kernel-PCA as an energy minimization problem in which low dimensionality constraints are introduced as regularization terms in the energy. To the best of our knowledge, ours is the first attempt to create a dimensionality regularizer in the KPCA framework. Our approach relies on robustly penalizing the rank of the recovered factors directly in the implicit feature space to create their low-dimensional approximations in closed form. Our approach performs robust KPCA in the presence of missing data and noise. We demonstrate state-of-the-art results on predicting missing entries in the standard oil flow dataset. Additionally, we evaluate our method on the challenging problem of Non-Rigid Structure from Motion and our approach delivers promising results on CMU mocap dataset despite the presence of significant occlusions and noise.
This paper considers an alternate formulation of Kernel PCA with rank constraints incorporated as a regularization term in the objective. The writing is not clear. The focus keeps shifting from estimating “causal factors”, to nonlinear dimensionality reduction to Kernel PCA to ill-posed inverse problems. The problem reformulation of Kernel PCA uses somewhat standard tricks and it is not clear what are the advantages of the proposed approach over the existing methods as there is no theoretical analysis of the overall approach or empirical comparison with existing state-of-the-art. - Not sure what the authors mean by “causal factors”. There is a reference to it in Abstract and in Problem formulation on page 3 without any definition/discussion. - In KPCA, I am not sure why one is interested in step (iii) outlined on page 2 of finding a pre-image for each - Authors outline two key disadvantages of the existing KPCA approach. The first one, that of low-dimensional manifold assumption not holding exactly, has received lots of attention in the machine learning literature. It is common to assume that the data lies near a low-dimensional manifold rather than on a low-dimensional manifold. Second disadvantage is somewhat unclear as finding “a data point (pre-image) corresponding to each projection in the input space” is not a standard step in KPCA. - On page 3, you never define , , . Clearly, they cannot be cartesian products. I have to assume that notation somehow implies N-tuples. - On page 3, Section 2, and are sets. What do you mean by - On page 5, is never defined. - Experiments: None of the standard algorithms for matrix completion such as OptSpace or SVT were considered - Experiments: There is no comparison with alternate existing approaches for Non-rigid structure from motion. - Proof of the main result Theorem 3.1: To get from (16) to (17) using the Holder inequality (as stated) one would end up with a term that involves sum of fourth powers of weights w_{ij}. Why would they equal to one using the orthonormal constraints? It would be useful to give more details here, as I don’t see how the argument goes through at this point.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNon-linear Dimensionality Regularizer for Solving Inverse Problems\nConsider an ill-posed inverse problem of estimating causal factors from observations, one of which is known to lie near some (unknown) low-dimensional, non-linear manifold expressed by a predefined Mercer-kernel. Solving this problem requires simultaneous estimation of these factors and learning the low-dimensional representation for them. In this work, we introduce a novel non-linear dimensionality regularization technique for solving such problems without pre-training. We re-formulate Kernel-PCA as an energy minimization problem in which low dimensionality constraints are introduced as regularization terms in the energy. To the best of our knowledge, ours is the first attempt to create a dimensionality regularizer in the KPCA framework. Our approach relies on robustly penalizing the rank of the recovered factors directly in the implicit feature space to create their low-dimensional approximations in closed form. Our approach performs robust KPCA in the presence of missing data and noise. We demonstrate state-of-the-art results on predicting missing entries in the standard oil flow dataset. Additionally, we evaluate our method on the challenging problem of Non-Rigid Structure from Motion and our approach delivers promising results on CMU mocap dataset despite the presence of significant occlusions and noise.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper considers an alternate formulation of Kernel PCA with rank constraints incorporated as a regularization term in the objective. The writing is not clear. The focus keeps shifting from estimating “causal factors”, to nonlinear dimensionality reduction to Kernel PCA to ill-posed inverse problems. The problem reformulation of Kernel PCA uses somewhat standard tricks and it is not clear what are the advantages of the proposed approach over the existing methods as there is no theoretical analysis of the overall approach or empirical comparison with existing state-of-the-art. 
- Not sure what the authors mean by “causal factors”. There is a reference to it in Abstract and in Problem formulation on page 3 without any definition/discussion. - In KPCA, I am not sure why one is interested in step (iii) outlined on page 2 of finding a pre-image for each - Authors outline two key disadvantages of the existing KPCA approach. The first one, that of low-dimensional manifold assumption not holding exactly, has received lots of attention in the machine learning literature. It is common to assume that the data lies near a low-dimensional manifold rather than on a low-dimensional manifold. Second disadvantage is somewhat unclear as finding “a data point (pre-image) corresponding to each projection in the input space” is not a standard step in KPCA. - On page 3, you never define , , . Clearly, they cannot be cartesian products. I have to assume that notation somehow implies N-tuples. - On page 3, Section 2, and are sets. What do you mean by - On page 5, is never defined. - Experiments: None of the standard algorithms for matrix completion such as OptSpace or SVT were considered - Experiments: There is no comparison with alternate existing approaches for Non-rigid structure from motion. - Proof of the main result Theorem 3.1: To get from (16) to (17) using the Holder inequality (as stated) one would end up with a term that involves sum of fourth powers of weights w_{ij}. Why would they equal to one using the orthonormal constraints? It would be useful to give more details here, as I don’t see how the argument goes through at this point.", "role": "assistant" } ]
[ "This paper considers an alternate formulation of Kernel PCA with rank constraints incorporated as a regularization term in the objective.", "The writing is not clear.", "The focus keeps shifting from estimating “causal factors”, to nonlinear dimensionality reduction to Kernel PCA to ill-posed inverse problems.", "The problem reformulation of Kernel PCA uses somewhat standard tricks and it is not clear what are the advantages of the proposed approach over the existing methods as there is no theoretical analysis of the overall approach or empirical comparison with existing state-of-the-art.", "- Not sure what the authors mean by “causal factors”.", "There is a reference to it in Abstract and in Problem formulation on page 3 without any definition/discussion.", "- In KPCA, I am not sure why one is interested in step (iii) outlined on page 2 of finding a pre-image for each - Authors outline two key disadvantages of the existing KPCA approach.", "The first one, that of low-dimensional manifold assumption not holding exactly, has received lots of attention in the machine learning literature.", "It is common to assume that the data lies near a low-dimensional manifold rather than on a low-dimensional manifold.", "Second disadvantage is somewhat unclear as finding “a data point (pre-image) corresponding to each projection in the input space” is not a standard step in KPCA.", "- On page 3, you never define , , .", "Clearly, they cannot be cartesian products.", "I have to assume that notation somehow implies N-tuples.", "- On page 3, Section 2, and are sets.", "What do you mean by - On page 5, is never defined.", "- Experiments: None of the standard algorithms for matrix completion such as OptSpace or SVT were considered - Experiments: There is no comparison with alternate existing approaches for Non-rigid structure from motion.", "- Proof of the main result Theorem 3.1: To get from (16) to (17) using the Holder inequality (as stated) one would end up with a term that involves sum of fourth powers of weights w_{ij}.", "Why would they equal to one using the orthonormal constraints?", "It would be useful to give more details here, as I don’t see how the argument goes through at this point." ]
{ "criticism": 8, "example": 5, "importance_and_relevance": 2, "materials_and_methods": 8, "praise": 0, "presentation_and_reporting": 9, "results_and_discussion": 6, "suggestion_and_solution": 3, "total": 19 }
0.421053
0.263158
0.105263
0.421053
0
0.473684
0.315789
0.157895
19
1
1
1
1
0
1
1
1
2.157895
1.589727
0.568167
Training Long Short-Term Memory With Sparsified Stochastic Gradient Descent
Prior work has demonstrated that exploiting sparsity can dramatically improve the energy efficiency and shrink the memory footprint of Convolutional Neural Networks (CNNs). However, these sparsity-centric optimization techniques might be less effective for Long Short-Term Memory (LSTM) based Recurrent Neural Networks (RNNs), especially for the training phase, because of the significant structural difference between the neurons. To investigate if there is possible sparsity-centric optimization for training LSTM-based RNNs, we studied several applications and observed that there is potential sparsity in the gradients generated in the backward propagation. In this paper, we illustrate why the sparsity exists and propose a simple yet effective thresholding technique to induce even more sparsity during the training of an LSTM-based RNN. Experiment results show that the proposed technique can increase the sparsity of linear gate gradients to higher than 80\% without loss of performance, which makes more than 50\% of the multiply-accumulate (MAC) operations redundant in an entire LSTM training process. These redundant MAC operations can be eliminated by hardware techniques to improve the energy efficiency and training speed of LSTM-based RNNs.
The findings of applying sparsity in the backward gradients for training LSTMs are interesting. But the paper seems incomplete without proper experimental justification. Only the validation loss is reported, which is definitely insufficient. Proper testing results and commonly reported evaluation criteria need to be included to support the claim of no degradation when applying the proposed sparsity technique. Also, actual justification of the gains in terms of speed and efficiency would make the paper much stronger.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTraining Long Short-Term Memory With Sparsified Stochastic Gradient Descent\nPrior work has demonstrated that exploiting the sparsity can dramatically improve the energy efficiency and shrink the memory footprint of Convolutional Neural Networks (CNNs). However, these sparsity-centric optimization techniques might be less effective for Long Short-Term Memory (LSTM) based Recurrent Neural Networks (RNNs), especially for the training phase, because of the significant structural difference between the neurons. To investigate if there is possible sparsity-centric optimization for training LSTM-based RNNs, we studied several applications and observed that there is potential sparsity in the gradients generated in the backward propagation. In this paper, we illustrate why the sparsity exists and propose a simple yet effective thresholding technique to induce further more sparsity during training an LSTM-based RNN training. Experiment results show that the proposed technique can increase the sparsity of linear gate gradients to higher than 80\\% without loss of performance, which makes more than 50\\% multiply-accumulate (MAC) operations redundant in an entire LSTM training process. These redudant MAC operations can be eliminated by hardware techniques to improve the energy efficiency and training speed of LSTM-based RNNs.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe findings of applying sparsity in the backward gradients for training LSTMs is interesting. But the paper seems incomplete without the proper experimental justification. Only the validation loss is reported which is definitely insufficient. Proper testing results and commonly reported evaluation criterion needs to be included to support the claim of no degradation when applying the proposed sparsity technique. Also actual justification of the gains in terms of speed and efficiency would make the paper much stronger.", "role": "assistant" } ]
[ "The findings of applying sparsity in the backward gradients for training LSTMs is interesting.", "But the paper seems incomplete without the proper experimental justification.", "Only the validation loss is reported which is definitely insufficient.", "Proper testing results and commonly reported evaluation criterion needs to be included to support the claim of no degradation when applying the proposed sparsity technique.", "Also actual justification of the gains in terms of speed and efficiency would make the paper much stronger." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 4, "praise": 1, "presentation_and_reporting": 0, "results_and_discussion": 3, "suggestion_and_solution": 1, "total": 5 }
0.4
0
0.2
0.8
0.2
0
0.6
0.2
5
1
0
1
1
1
0
1
1
2.4
1.389924
1.010076
Training Long Short-Term Memory With Sparsified Stochastic Gradient Descent
Prior work has demonstrated that exploiting sparsity can dramatically improve the energy efficiency and shrink the memory footprint of Convolutional Neural Networks (CNNs). However, these sparsity-centric optimization techniques might be less effective for Long Short-Term Memory (LSTM) based Recurrent Neural Networks (RNNs), especially for the training phase, because of the significant structural difference between the neurons. To investigate if there is possible sparsity-centric optimization for training LSTM-based RNNs, we studied several applications and observed that there is potential sparsity in the gradients generated in the backward propagation. In this paper, we illustrate why the sparsity exists and propose a simple yet effective thresholding technique to induce even more sparsity during the training of an LSTM-based RNN. Experiment results show that the proposed technique can increase the sparsity of linear gate gradients to higher than 80\% without loss of performance, which makes more than 50\% of the multiply-accumulate (MAC) operations redundant in an entire LSTM training process. These redundant MAC operations can be eliminated by hardware techniques to improve the energy efficiency and training speed of LSTM-based RNNs.
This paper presents the observation that it is possible to utilize sparse operations in the training of LSTM networks without loss of accuracy. This observation is novel (although not too surprising) to my knowledge, but I must state that I am not very familiar with research on fast RNN implementations. Minor note: The LSTM language model does not use a *word2vec* layer. It is simply a linear embedding layer. Word2vec is the name of a specific model which is not directly related to character level language models. The paper presents the central observation clearly. However, it will be much more convincing if a well known dataset and experiment set up are used, such as Graves (2013) or Sutskever et al (2014), and actual training, validation and test performances are reported. While the main observation is certainly interesting, I think it is not sufficient to be the subject of a full conference paper without implementation (or simulation) and benchmarking of the promised speedups on multiple tasks. For example, how would the gains be affected by various architecture choices? At present this is an interesting technical report and I would like to see more detailed results in the future.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTraining Long Short-Term Memory With Sparsified Stochastic Gradient Descent\nPrior work has demonstrated that exploiting the sparsity can dramatically improve the energy efficiency and shrink the memory footprint of Convolutional Neural Networks (CNNs). However, these sparsity-centric optimization techniques might be less effective for Long Short-Term Memory (LSTM) based Recurrent Neural Networks (RNNs), especially for the training phase, because of the significant structural difference between the neurons. To investigate if there is possible sparsity-centric optimization for training LSTM-based RNNs, we studied several applications and observed that there is potential sparsity in the gradients generated in the backward propagation. In this paper, we illustrate why the sparsity exists and propose a simple yet effective thresholding technique to induce further more sparsity during training an LSTM-based RNN training. Experiment results show that the proposed technique can increase the sparsity of linear gate gradients to higher than 80\\% without loss of performance, which makes more than 50\\% multiply-accumulate (MAC) operations redundant in an entire LSTM training process. These redudant MAC operations can be eliminated by hardware techniques to improve the energy efficiency and training speed of LSTM-based RNNs.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents the observation that it is possible to utilize sparse operations in the training of LSTM networks without loss of accuracy. This observation is novel (although not too surprising) to my knowledge, but I must state that I am not very familiar with research on fast RNN implmentations. Minor note: The LSTM language model does not use a *word2vec* layer. It is simply a linear embedding layer. Word2vec is the name of a specific model which is not directly to character level language models. The paper presents the central observation clearly. 
However, it will be much more convincing if a well known dataset and experiment set up are used, such as Graves (2013) or Sutskever et al (2014), and actual training, validation and test performances are reported. While the main observation is certainly interesting, I think it is not sufficient to be the subject of a full conference paper without implementation (or simulation) and benchmarking of the promised speedups on multiple tasks. For example, how would the gains be affected by various architecture choices? At present this is an interesting technical report and I would like to see more detailed results in the future.", "role": "assistant" } ]
[ "This paper presents the observation that it is possible to utilize sparse operations in the training of LSTM networks without loss of accuracy.", "This observation is novel (although not too surprising) to my knowledge, but I must state that I am not very familiar with research on fast RNN implmentations.", "Minor note: The LSTM language model does not use a *word2vec* layer.", "It is simply a linear embedding layer.", "Word2vec is the name of a specific model which is not directly to character level language models.", "The paper presents the central observation clearly.", "However, it will be much more convincing if a well known dataset and experiment set up are used, such as Graves (2013) or Sutskever et al (2014), and actual training, validation and test performances are reported.", "While the main observation is certainly interesting, I think it is not sufficient to be the subject of a full conference paper without implementation (or simulation) and benchmarking of the promised speedups on multiple tasks.", "For example, how would the gains be affected by various architecture choices?", "At present this is an interesting technical report and I would like to see more detailed results in the future." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 6, "praise": 3, "presentation_and_reporting": 2, "results_and_discussion": 4, "suggestion_and_solution": 2, "total": 10 }
0.2
0
0.3
0.6
0.3
0.2
0.4
0.2
10
1
0
1
1
1
1
1
1
2.2
2.057958
0.142042
Training Long Short-Term Memory With Sparsified Stochastic Gradient Descent
Prior work has demonstrated that exploiting sparsity can dramatically improve the energy efficiency and shrink the memory footprint of Convolutional Neural Networks (CNNs). However, these sparsity-centric optimization techniques might be less effective for Long Short-Term Memory (LSTM) based Recurrent Neural Networks (RNNs), especially for the training phase, because of the significant structural difference between the neurons. To investigate if there is possible sparsity-centric optimization for training LSTM-based RNNs, we studied several applications and observed that there is potential sparsity in the gradients generated in the backward propagation. In this paper, we illustrate why the sparsity exists and propose a simple yet effective thresholding technique to induce even more sparsity during the training of an LSTM-based RNN. Experiment results show that the proposed technique can increase the sparsity of linear gate gradients to higher than 80\% without loss of performance, which makes more than 50\% of the multiply-accumulate (MAC) operations redundant in an entire LSTM training process. These redundant MAC operations can be eliminated by hardware techniques to improve the energy efficiency and training speed of LSTM-based RNNs.
CONTRIBUTIONS When training LSTMs, many of the intermediate gradients are close to zero due to the flat shape of the tanh and sigmoid nonlinearities far from the origin. This paper shows that rounding these small gradients to zero results in matrices with up to 80% sparsity during training, and that training character-level LSTM language models with this sparsification does not significantly change the final performance of the model. The authors argue that this sparsity could be exploited with specialized hardware to improve the energy efficiency and speed of recurrent network training. NOVELTY Thresholding gradients to induce sparsity and improve efficiency in RNN training is a novel result to my knowledge. MISSING CITATIONS Prior work has explored low-precision arithmetic for recurrent neural network language models: Hubara et al, “Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations”, https://arxiv.org/abs/1609.07061 Ott et al, “Recurrent Neural Networks With Limited Numerical Precision”, https://arxiv.org/abs/1608.06902 Low-precision arithmetic for recurrent networks promises to improve both training and inference efficiency. How does the proposed sparsification method compare with low-precision arithmetic? Are the ideas complementary? EXPERIMENTS The main experimental result of the paper (Section 4.1) is that training LSTM language models with sparse gradients does not affect convergence or final performance (Figure 5). This result is promising, but I do not think that this single experiment is enough to prove the utility of the proposed method. I also have some problems with this experiment. Plotting validation loss for character-level language modeling is not a standard way to report results; it is much more typical to report bits-per-character or perplexity on a held-out test set. These experiments also lack sufficient details for replication. What optimization algorithm, learning rate, and regularization were used? How were these hyperparameters chosen? Is the “truncated Wikipedia dataset” used for training the standard text8 dataset? In addition, the experiments do not compare with existing published results on this dataset. In the OpenReview discussion, the authors remarked that the “The final validation loss for the sparsified model is [...] almost the same as the baseline.” Comparing validation loss at the end of training is not the proper way to compare models. From Figure 5, it is clear that all models achieve minimal validation loss after around 10k iterations, after which the validation losses increase, suggesting that the models have slightly overfit the training data by the end of training. In Section 4.2 the authors claim to obtain similar experimental results with other network architectures, on other datasets (tiny-shakespeare and War and Peace), and for other tasks (image captioning and machine translation). However, the details and results of these experiments are not included in the paper, making it difficult to assess the utility of the proposed method and the significance of the results. ENERGY EFFICIENCY AND TRAINING SPEED One of the main claims of the paper is that sparse gradients can be exploited in hardware to reduce the training speed and improve the energy efficiency of recurrent network training, but these benefits are neither quantified nor demonstrated experimentally. 
Even without actually implementing custom hardware, would it be possible to estimate the expected improvements in efficiency through simulation or other means? Such results would significantly strengthen the paper. GRADIENT TRUNCATION AS REGULARIZATION In Figure 5 all models appear to reach a minimum validation loss at around 10k iterations and then overfit; at this point the Low model achieves even lower loss than the baseline. This is an interesting experimental result, but it is not discussed in the paper. Perhaps a low truncation threshold acts as a weak regularizer to prevent overfitting? Is this a general phenomenon of training recurrent networks with sparse gradients, or is it just a quirk of this particular experiment? This idea deserves more investigation, and could strengthen the paper. SUMMARY The core idea of the paper (thresholding gradients to induce sparsity and improve efficiency of RNN training) is interesting and practically useful, if a bit incremental. Nevertheless with thorough and deliberate experiments quantifying the tradeoffs between task performance, training speed, and energy efficiency across a variety of tasks and datasets, this simple idea could be the core of a strong paper. Unfortunately, as written the paper provides neither theoretical arguments nor convincing experimental results to justify the proposed method, and as such I do not believe the paper is ready for publication in its current form. PROS - The proposed method is simple, and seems to be a promising direction for improving the speed of training recurrent networks. CONS - No discussion of prior work on low-precision recurrent networks - Experimental results are not sufficient to validate the method - Many experimental details are missing - Results of key experiments (image captioning and machine translation) are missing
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTraining Long Short-Term Memory With Sparsified Stochastic Gradient Descent\nPrior work has demonstrated that exploiting the sparsity can dramatically improve the energy efficiency and shrink the memory footprint of Convolutional Neural Networks (CNNs). However, these sparsity-centric optimization techniques might be less effective for Long Short-Term Memory (LSTM) based Recurrent Neural Networks (RNNs), especially for the training phase, because of the significant structural difference between the neurons. To investigate if there is possible sparsity-centric optimization for training LSTM-based RNNs, we studied several applications and observed that there is potential sparsity in the gradients generated in the backward propagation. In this paper, we illustrate why the sparsity exists and propose a simple yet effective thresholding technique to induce further more sparsity during training an LSTM-based RNN training. Experiment results show that the proposed technique can increase the sparsity of linear gate gradients to higher than 80\\% without loss of performance, which makes more than 50\\% multiply-accumulate (MAC) operations redundant in an entire LSTM training process. These redudant MAC operations can be eliminated by hardware techniques to improve the energy efficiency and training speed of LSTM-based RNNs.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nCONTRIBUTIONS When training LSTMs, many of the intermediate gradients are close to zero due to the flat shape of the tanh and sigmoid nonlinearities far from the origin. This paper shows that rounding these small gradients to zero results in matrices with up to 80% sparsity during training, and that training character-level LSTM language models with this sparsification does not significantly change the final performance of the model. The authors argue that this sparsity could be exploited with specialized hardware to improve the energy efficiency and speed of recurrent network training. 
NOVELTY Thresholding gradients to induce sparsity and improve efficiency in RNN training is a novel result to my knowledge. MISSING CITATIONS Prior work has explored low-precision arithmetic for recurrent neural network language models: Hubara et al, “Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations”, https://arxiv.org/abs/1609.07061 Ott et al, “Recurrent Neural Networks With Limited Numerical Precision”, https://arxiv.org/abs/1608.06902 Low-precision arithmetic for recurrent networks promises to improve both training and inference efficiency. How does the proposed sparsification method compare with low-precision arithmetic? Are the ideas complementary? EXPERIMENTS The main experimental result of the paper (Section 4.1) is that training LSTM language models with sparse gradients does not affect convergence or final performance (Figure 5). This result is promising, but I do not think that this single experiment is enough to prove the utility of the proposed method. I also have some problems with this experiment. Plotting validation loss for character-level language modeling is not a standard way to report results; it is much more typical to report bits-per-character or perplexity on a held-out test set. These experiments also lack sufficient details for replication. What optimization algorithm, learning rate, and regularization were used? How were these hyperparameters chosen? Is the “truncated Wikipedia dataset” used for training the standard text8 dataset? In addition, the experiments do not compare with existing published results on this dataset. In the OpenReview discussion, the authors remarked that the “The final validation loss for the sparsified model is [...] almost the same as the baseline.” Comparing validation loss at the end of training is not the proper way to compare models. From Figure 5, it is clear that all models achieve minimal validation loss after around 10k iterations, after which the validation losses increase, suggesting that the models have slightly overfit the training data by the end of training. In Section 4.2 the authors claim to obtain similar experimental results with other network architectures, on other datasets (tiny-shakespeare and War and Peace), and for other tasks (image captioning and machine translation). However, the details and results of these experiments are not included in the paper, making it difficult to assess the utility of the proposed method and the significance of the results. ENERGY EFFICIENCY AND TRAINING SPEED One of the main claims of the paper is that sparse gradients can be exploited in hardware to reduce the training speed and improve the energy efficiency of recurrent network training, but these benefits are neither quantified nor demonstrated experimentally. Even without actually implementing custom hardware, would it be possible to estimate the expected improvements in efficiency through simulation or other means? Such results would significantly strengthen the paper. GRADIENT TRUNCATION AS REGULARIZATION In Figure 5 all models appear to reach a minimum validation loss at around 10k iterations and then overfit; at this point the Low model achieves even lower loss than the baseline. This is an interesting experimental result, but it is not discussed in the paper. Perhaps a low truncation threshold acts as a weak regularizer to prevent overfitting? Is this a general phenomenon of training recurrent networks with sparse gradients, or is it just a quirk of this particular experiment? 
This idea deserves more investigation, and could strengthen the paper. SUMMARY The core idea of the paper (thresholding gradients to induce sparsity and improve efficiency of RNN training) is interesting and practically useful, if a bit incremental. Nevertheless, with thorough and deliberate experiments quantifying the tradeoffs between task performance, training speed, and energy efficiency across a variety of tasks and datasets, this simple idea could be the core of a strong paper. Unfortunately, as written the paper provides neither theoretical arguments nor convincing experimental results to justify the proposed method, and as such I do not believe the paper is ready for publication in its current form. PROS - The proposed method is simple, and seems to be a promising direction for improving the speed of training recurrent networks. CONS - No discussion of prior work on low-precision recurrent networks - Experimental results are not sufficient to validate the method - Many experimental details are missing - Results of key experiments (image captioning and machine translation) are missing", "role": "assistant" } ]
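The thresholding technique this review discusses is compact enough to sketch in code. The snippet below is a minimal illustration, not the paper's implementation: the threshold value, the NumPy formulation, and applying it to a raw gradient array (rather than specifically to the linear gate gradients) are assumptions made for clarity.

```python
import numpy as np

def sparsify_gradient(grad, threshold=1e-4):
    """Round gradient entries whose magnitude is below the threshold to zero.

    Minimal sketch of the thresholding idea discussed above; the
    threshold value is an illustrative assumption.
    """
    mask = np.abs(grad) >= threshold
    sparsity = 1.0 - mask.mean()   # fraction of entries zeroed out
    return grad * mask, sparsity

# Saturated tanh/sigmoid units produce near-zero gradients,
# which the threshold removes entirely.
g = np.array([1e-6, 0.31, -2e-5, -0.74, 3e-7])
sparse_g, sparsity = sparsify_gradient(g)
print(sparse_g, sparsity)  # [ 0.    0.31  0.   -0.74  0.  ] 0.6
```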
[ "CONTRIBUTIONS When training LSTMs, many of the intermediate gradients are close to zero due to the flat shape of the tanh and sigmoid nonlinearities far from the origin.", "This paper shows that rounding these small gradients to zero results in matrices with up to 80% sparsity during training, and that training character-level LSTM language models with this sparsification does not significantly change the final performance of the model.", "The authors argue that this sparsity could be exploited with specialized hardware to improve the energy efficiency and speed of recurrent network training.", "NOVELTY Thresholding gradients to induce sparsity and improve efficiency in RNN training is a novel result to my knowledge.", "MISSING CITATIONS Prior work has explored low-precision arithmetic for recurrent neural network language models: Hubara et al, “Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations”, https://arxiv.org/abs/1609.07061 Ott et al, “Recurrent Neural Networks With Limited Numerical Precision”, https://arxiv.org/abs/1608.06902 Low-precision arithmetic for recurrent networks promises to improve both training and inference efficiency.", "How does the proposed sparsification method compare with low-precision arithmetic?", "Are the ideas complementary?", "EXPERIMENTS The main experimental result of the paper (Section 4.1) is that training LSTM language models with sparse gradients does not affect convergence or final performance (Figure 5).", "This result is promising, but I do not think that this single experiment is enough to prove the utility of the proposed method.", "I also have some problems with this experiment.", "Plotting validation loss for character-level language modeling is not a standard way to report results; it is much more typical to report bits-per-character or perplexity on a held-out test set.", "These experiments also lack sufficient details for replication.", "What optimization algorithm, learning rate, and regularization were used?", "How were these hyperparameters chosen?", "Is the “truncated Wikipedia dataset” used for training the standard text8 dataset?", "In addition, the experiments do not compare with existing published results on this dataset.", "In the OpenReview discussion, the authors remarked that the “The final validation loss for the sparsified model is [...] 
almost the same as the baseline.” Comparing validation loss at the end of training is not the proper way to compare models.", "From Figure 5, it is clear that all models achieve minimal validation loss after around 10k iterations, after which the validation losses increase, suggesting that the models have slightly overfit the training data by the end of training.", "In Section 4.2 the authors claim to obtain similar experimental results with other network architectures, on other datasets (tiny-shakespeare and War and Peace), and for other tasks (image captioning and machine translation).", "However, the details and results of these experiments are not included in the paper, making it difficult to assess the utility of the proposed method and the significance of the results.", "ENERGY EFFICIENCY AND TRAINING SPEED One of the main claims of the paper is that sparse gradients can be exploited in hardware to improve the training speed and energy efficiency of recurrent network training, but these benefits are neither quantified nor demonstrated experimentally.", "Even without actually implementing custom hardware, would it be possible to estimate the expected improvements in efficiency through simulation or other means?", "Such results would significantly strengthen the paper.", "GRADIENT TRUNCATION AS REGULARIZATION In Figure 5 all models appear to reach a minimum validation loss at around 10k iterations and then overfit; at this point the Low model achieves even lower loss than the baseline.", "This is an interesting experimental result, but it is not discussed in the paper.", "Perhaps a low truncation threshold acts as a weak regularizer to prevent overfitting?", "Is this a general phenomenon of training recurrent networks with sparse gradients, or is it just a quirk of this particular experiment?", "This idea deserves more investigation, and could strengthen the paper.", "SUMMARY The core idea of the paper (thresholding gradients to induce sparsity and improve efficiency of RNN training) is interesting and practically useful, if a bit incremental.", "Nevertheless, with thorough and deliberate experiments quantifying the tradeoffs between task performance, training speed, and energy efficiency across a variety of tasks and datasets, this simple idea could be the core of a strong paper.", "Unfortunately, as written the paper provides neither theoretical arguments nor convincing experimental results to justify the proposed method, and as such I do not believe the paper is ready for publication in its current form.", "PROS - The proposed method is simple, and seems to be a promising direction for improving the speed of training recurrent networks.", "CONS - No discussion of prior work on low-precision recurrent networks - Experimental results are not sufficient to validate the method - Many experimental details are missing - Results of key experiments (image captioning and machine translation) are missing" ]
{ "criticism": 11, "example": 3, "importance_and_relevance": 7, "materials_and_methods": 30, "praise": 7, "presentation_and_reporting": 2, "results_and_discussion": 16, "suggestion_and_solution": 2, "total": 33 }
0.333333
0.090909
0.212121
0.909091
0.212121
0.060606
0.484848
0.060606
33
1
1
1
1
1
1
1
1
2.363636
-3.949336
6.312972
Efficient Calculation of Polynomial Features on Sparse Matrices
We provide an algorithm for polynomial feature expansion that both operates on and produces a compressed sparse row matrix without any densification. For a vector of dimension D, density d, and degree k the algorithm has time complexity O(d^k * D^k) where k is the polynomial-feature order; this is an improvement by a factor d^k over the standard method.
The authors present here a new algorithm for the effective calculation of polynomial features on Sparse Matrices. The key idea is to use a proper mapping between matrices and their polynomial versions, in order to derive an effective CSR expansion algorithm. The authors analyse the time complexity in a convincing way with experiments. Overall, the algorithm is definitely interesting, quite simple and nice, with many possible applications. The paper is however very superficial in terms of experiments, or applications of the proposed scheme. Most importantly, the fit with the main scope of ICLR is far from obvious with this work, which should probably be re-submitted to a more suitable venue.
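A degree-2 version of the expansion this review describes can be sketched directly over a CSR matrix's nonzero entries, so each row costs on the order of its nonzero count squared rather than D^2. This is an illustrative sketch, not the paper's algorithm: the flat (j, k) -> j*D + k column mapping is a simple assumed choice, not necessarily the mapping the authors use.

```python
import numpy as np
from scipy.sparse import csr_matrix

def degree2_sparse_expansion(X):
    """Degree-2 polynomial features touching only nonzero entries of X.

    Sketch of the d^k speedup idea: the inner loops run over the
    nonzeros of each row, never over all D columns.
    """
    D = X.shape[1]
    rows, cols, vals = [], [], []
    for i in range(X.shape[0]):
        lo, hi = X.indptr[i], X.indptr[i + 1]
        idx, dat = X.indices[lo:hi], X.data[lo:hi]
        for a in range(len(idx)):
            for b in range(a, len(idx)):
                rows.append(i)
                cols.append(idx[a] * D + idx[b])  # assumed flat (j, k) index
                vals.append(dat[a] * dat[b])
    return csr_matrix((vals, (rows, cols)), shape=(X.shape[0], D * D))

X = csr_matrix(np.array([[0.0, 2.0, 0.0, 3.0]]))
print(degree2_sparse_expansion(X).toarray())  # nonzeros only at (1,1), (1,3), (3,3)
```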
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEfficient Calculation of Polynomial Features on Sparse Matrices\nWe provide an algorithm for polynomial feature expansion that both operates on and produces a compressed sparse row matrix without any densification. For a vector of dimension D, density d, and degree k the algorithm has time complexity O(d^k * D^k) where k is the polynomial-feature order; this is an improvement by a factor d^k over the standard method.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors present here a new algorithm for the effective calculation of polynomial features on Sparse Matrices. The key idea is to use a proper mapping between matrices and their polynomial versions, in order to derive an effective CSR expansion algorithm. The authors analyse the time complexity in a convincing way with experiments. Overall, the algorithm is definitely interesting, quite simple and nice, with many possible applications. The paper is however very superficial in terms of experiments, or applications of the proposed scheme. Most importantly, the fit with the main scope of ICLR is far from obvious with this work, that should probably re-submitted to better targets.", "role": "assistant" } ]
[ "The authors present here a new algorithm for the effective calculation of polynomial features on Sparse Matrices.", "The key idea is to use a proper mapping between matrices and their polynomial versions, in order to derive an effective CSR expansion algorithm.", "The authors analyse the time complexity in a convincing way with experiments.", "Overall, the algorithm is definitely interesting, quite simple and nice, with many possible applications.", "The paper is however very superficial in terms of experiments, or applications of the proposed scheme.", "Most importantly, the fit with the main scope of ICLR is far from obvious with this work, that should probably re-submitted to better targets." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 6, "praise": 2, "presentation_and_reporting": 0, "results_and_discussion": 0, "suggestion_and_solution": 1, "total": 6 }
0.333333
0
0.333333
1
0.333333
0
0
0.166667
6
1
0
1
1
1
0
0
1
2.166667
1.393328
0.773339
Efficient Calculation of Polynomial Features on Sparse Matrices
We provide an algorithm for polynomial feature expansion that both operates on and produces a compressed sparse row matrix without any densification. For a vector of dimension D, density d, and degree k the algorithm has time complexity O(d^k * D^k) where k is the polynomial-feature order; this is an improvement by a factor d^k over the standard method.
The paper is beyond my expertise. I cannot give any solid review comments regarding the techniques beyond an educated guess. However, it seems to me that the topic is not very relevant to the focus of ICLR. Also the quality of writing requires improvement, especially literature review and experiment analysis.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nEfficient Calculation of Polynomial Features on Sparse Matrices\nWe provide an algorithm for polynomial feature expansion that both operates on and produces a compressed sparse row matrix without any densification. For a vector of dimension D, density d, and degree k the algorithm has time complexity O(d^k * D^k) where k is the polynomial-feature order; this is an improvement by a factor d^k over the standard method.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper is beyond my expertise. I cannot give any solid review comments regarding the techniques that are better than an educated guess. However, it seems to me that the topic is not very relevant to the focus of ICLR. Also the quality of writing requires improvement, especially literature review and experiment analysis.", "role": "assistant" } ]
[ "The paper is beyond my expertise.", "I cannot give any solid review comments regarding the techniques that are better than an educated guess.", "However, it seems to me that the topic is not very relevant to the focus of ICLR.", "Also the quality of writing requires improvement, especially literature review and experiment analysis." ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 2, "praise": 1, "presentation_and_reporting": 1, "results_and_discussion": 0, "suggestion_and_solution": 1, "total": 4 }
0.75
0
0.5
0.5
0.25
0.25
0
0.25
4
1
0
1
1
1
1
0
1
2.5
1.221623
1.278377
Universality in halting time
The authors present empirical distributions for the halting time (measured by the number of iterations to reach a given accuracy) of optimization algorithms applied to two random systems: spin glasses and deep learning. Given an algorithm, which we take to be both the optimization routine and the form of the random landscape, the fluctuations of the halting time follow a distribution that remains unchanged even when the input is changed drastically. We observe two main classes, a Gumbel-like distribution that appears in Google searches, human decision times, QR factorization and spin glasses, and a Gaussian-like distribution that appears in conjugate gradient method, deep network with MNIST input data and deep network with random input data. This empirical evidence suggests the presence of a class of distributions for which the halting time is independent of the underlying distribution under some conditions.
The authors explore whether the halting time distributions for various algorithms in various settings exhibit “universality”, i.e. after rescaling to zero mean and unit variance, the distribution does not depend on stopping parameter, dimensionality and ensemble. The idea of the described universality is very interesting. However, I see several shortcomings in the paper: In order to be of practical relevance, the actual stopping time might be more relevant than the scaled one. The discussion of exponential tailed halting time distributions is a good start, but I am not sure how often this might be actually helpful. Still, the findings in the paper might be interesting from a theoretical point of view. Especially for ICLR, I think it would have been more interesting to look into comparisons between stochastic gradient descent, momentum, ADAM etc on different deep learning architectures. Over which of those parameters does universality hold? How can different initializations influence the halting time distribution? I would expect a sensible initialization to cut off part of the right tail of the distribution. Additionally, I found the paper quite hard to read. Here are some clarity issues: - abstract: “even when the input is changed drastically”: From the abstract I’m not sure what “input” refers to, here - I. Introduction: “where the stopping condition is, essentially, the time to find the minimum”: this doesn’t seem to make sense, a condition is not a time. I guess the authors wanted to say that the stopping condition is that the minimum has been reached? - I.1 the notions of dimension N, epsilon and ensemble E are introduced without any clarification of what they are. From the later parts of the paper I got some ideas and examples, but here it is very hard to understand what these parameters should be (just some examples would be already helpful) - I.3 “We use x^ell for ell in Z={1, dots, S} where Z is a random sample from of training samples” This formulation doesn’t make sense. Either Z is a random sample, or Z={1, ..., S}. - II.1 it took me a long time to find the meaning of M. As this parameter seems to be crucial for universality in this case, it would be very helpful to point out more explicitly what it refers to.
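The universality claim this review examines concerns the shape of the halting-time distribution after centering and scaling. Below is a minimal sketch of that rescaling, with gradient descent on a random strongly convex quadratic standing in for the solvers and ensembles studied in the paper; the ensemble, tolerance, and step-size choices are illustrative assumptions.

```python
import numpy as np

def halting_time(A, b, tol=1e-6, max_iter=100000):
    """Iterations of gradient descent on 0.5*x'Ax - b'x until the
    gradient norm drops below tol. A stand-in for the paper's solvers."""
    lr = 1.0 / np.linalg.eigvalsh(A).max()  # stable step size
    x = np.zeros(len(b))
    for t in range(1, max_iter + 1):
        g = A @ x - b
        if np.linalg.norm(g) < tol:
            return t
        x -= lr * g
    return max_iter

rng = np.random.default_rng(0)
N = 50
times = []
for _ in range(200):
    M = rng.normal(size=(N, N))
    A = M @ M.T / N + np.eye(N)   # random positive-definite ensemble (assumed)
    times.append(halting_time(A, rng.normal(size=N)))
times = np.array(times, dtype=float)

# The universality question is whether these rescaled fluctuations keep
# the same shape when N, tol, or the matrix ensemble are changed.
fluctuations = (times - times.mean()) / times.std()
```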
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nUniversality in halting time\nThe authors present empirical distributions for the halting time (measured by the number of iterations to reach a given accuracy) of optimization algorithms applied to two random systems: spin glasses and deep learning. Given an algorithm, which we take to be both the optimization routine and the form of the random landscape, the fluctuations of the halting time follow a distribution that remains unchanged even when the input is changed drastically. We observe two main classes, a Gumbel-like distribution that appears in Google searches, human decision times, QR factorization and spin glasses, and a Gaussian-like distribution that appears in conjugate gradient method, deep network with MNIST input data and deep network with random input data. This empirical evidence suggests presence of a class of distributions for which the halting time is independent of the underlying distribution under some conditions.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors explore whether the halting time distributions for various algorithms in various settings exhibit *universality*, i.e. after rescaling to zero mean and unit variance, the distribution does not depend on stopping parameter, dimensionality and ensemble. The idea of the described universality is very interesting. However I see several shortcomings in the paper: In order to be of practical relevance, the actual stopping time might be more relevant than the scaled one. The discussion of exponential tailed halting time distributions is a good start, but I am not sure how often this might be actually helpful. Still, the findings in the paper might be interesting from a theoretical point of view. Especially for ICLR, I think it would have been more interesting to look into comparisons between stochastic gradient descent, momentum, ADAM etc on different deep learning architectures. Over which of those parameters does universality hold?. How can different initializations influence the halting time distribution? 
I would expect a sensible initialization to cut off part of the right tail of the distribution. Additionally, I found the paper quite hard to read. Here are some clarity issues: - abstract: “even when the input is changed drastically”: From the abstract I’m not sure what “input” refers to, here - I. Introduction: “where the stopping condition is, essentially, the time to find the minimum”: this doesn’t seem to make sense, a condition is not a time. I guess the authors wanted to say that the stopping condition is that the minimum has been reached? - I.1 the notions of dimension N, epsilon and ensemble E are introduced without any clarification of what they are. From the later parts of the paper I got some ideas and examples, but here it is very hard to understand what these parameters should be (just some examples would be already helpful) - I.3 “We use x^ell for ell in Z={1, dots, S} where Z is a random sample from of training samples” This formulation doesn’t make sense. Either Z is a random sample, or Z={1, ..., S}. - II.1 it took me a long time to find the meaning of M. As this parameter seems to be crucial for universality in this case, it would be very helpful to point out more explicitly what it refers to.", "role": "assistant" } ]
[ "The authors explore whether the halting time distributions for various algorithms in various settings exhibit *universality*, i.e.", "after rescaling to zero mean and unit variance, the distribution does not depend on stopping parameter, dimensionality and ensemble.", "The idea of the described universality is very interesting.", "However I see several shortcomings in the paper: In order to be of practical relevance, the actual stopping time might be more relevant than the scaled one.", "The discussion of exponential tailed halting time distributions is a good start, but I am not sure how often this might be actually helpful.", "Still, the findings in the paper might be interesting from a theoretical point of view.", "Especially for ICLR, I think it would have been more interesting to look into comparisons between stochastic gradient descent, momentum, ADAM etc on different deep learning architectures.", "Over which of those parameters does universality hold?.", "How can different initializations influence the halting time distribution?", "I would expect a sensible initialization to cut of part of the right tail of the distribution.", "Additionally, I found the paper quite hard to read.", "Here are some clarity issues: - abstract: *even when the input is changed drastically*: From the abstract I*m not sure what *input* refers to, here - I.", "Introduction: *where the stopping condition is, essentially, the time to find the minimum*: this doesn*t seem to make sense, a condition is not a time.", "I guess the authors wanted to say that the stopping condition is that the minimum has been reached?", "- I.1 the notions of dimension N, epsilon and ensemble E are introduced without any clarification what they are.", "From the later parts of the paper I got some ideas and examples, but here it is very hard to understand what these parameters should be (just some examples would be already helpful) - I.3 *We use x^ell for ell in Z={1, dots, S} where Z is a random sample from of training samples* This formulation doesn*t make sense.", "Either Z is a random sample, or Z={1, ..., S}.", "- II.1 it took me a long time to find the meaning of M. As this parameter seems to be crucial for universality in this case, it would be very helpful to point out more explicitly what it refers to." ]
{ "criticism": 7, "example": 0, "importance_and_relevance": 4, "materials_and_methods": 11, "praise": 3, "presentation_and_reporting": 5, "results_and_discussion": 4, "suggestion_and_solution": 5, "total": 18 }
0.388889
0
0.222222
0.611111
0.166667
0.277778
0.222222
0.277778
18
1
0
1
1
1
1
1
1
2.166667
1.772106
0.394561
Parallel Stochastic Gradient Descent with Sound Combiners
Stochastic gradient descent (SGD) is a well-known method for regression and classification tasks. However, it is an inherently sequential algorithm — at each step, the processing of the current example depends on the parameters learned from the previous examples. Prior approaches to parallelizing SGD, such as Hogwild! and AllReduce, do not honor these dependences across threads and thus can potentially suffer poor convergence rates and/or poor scalability. This paper proposes SymSGD, a parallel SGD algorithm that retains the sequential semantics of SGD in expectation. Each thread in this approach learns a local model and a probabilistic model combiner that allows the local models to be combined to produce the same result as what a sequential SGD would have produced, in expectation. This SymSGD approach is applicable to any linear learner whose update rule is linear. This paper evaluates SymSGD’s accuracy and performance on 9 datasets on a shared-memory machine and shows up to 13× speedup over our heavily optimized sequential baseline on 16 cores.
This paper describes a correction technique to combine updates from multiple SGD threads to make the result statistically equivalent to the sequential technique. Comments 1) The proposed method is novel and interesting, allowing an update to be corrected even when it is delayed. 2) The proposed theory can only be applied to the square-loss setting (with linear update rule), making it somewhat limited. This paper would be much more interesting to the ICLR community, if the technique is applicable to general objective functions and settings of deep neural networks. 3) The resulting technique requires book-keeping of a dimensionality-reduced combiner matrix, which causes more computation in terms of complexity. The authors argue that the overhead can be canceled with SIMD support for symbolic update. However, the normal update of SGD might also benefit from SIMD, especially when the dataset is dense. Overall, even though the practical value of this work is limited by 2) and 3), the technique (specifically the correction rule) proposed in the paper could be of interest to people scaling up learning. I would encourage the author to extend the method to the case of non-linear objective functions, which could make it more interesting to the ICLR community.
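The reason a sound combiner can exist for linear update rules is worth making concrete: with squared loss, each SGD step is an affine map of the weight vector, so a whole segment of steps composes into a single affine map whose linear part acts as the combiner matrix. The dense sketch below shows exact sequential equivalence; the paper's combiner is probabilistic and dimensionality-reduced, so treat this full-matrix version as an illustrative simplification, not the authors' algorithm.

```python
import numpy as np

def sgd_segment(w, X, y, lr):
    """Sequential least-squares SGD over (X, y), also returning the
    combiner matrix M = prod(I - lr * x x^T) of the segment."""
    D = len(w)
    M = np.eye(D)
    for x, t in zip(X, y):
        M = (np.eye(D) - lr * np.outer(x, x)) @ M  # linear part of the step
        w = w - lr * (x @ w - t) * x               # the SGD step itself
    return w, M

rng = np.random.default_rng(1)
D, lr = 5, 0.05
X1, y1 = rng.normal(size=(8, D)), rng.normal(size=8)
X2, y2 = rng.normal(size=(8, D)), rng.normal(size=8)
w0 = rng.normal(size=D)

w_mid, _ = sgd_segment(w0, X1, y1, lr)       # worker 1: sequential prefix
w_seq, _ = sgd_segment(w_mid, X2, y2, lr)    # true sequential result

w2_local, M2 = sgd_segment(w0, X2, y2, lr)   # worker 2 starts from stale w0
w_combined = w2_local + M2 @ (w_mid - w0)    # sound combination
assert np.allclose(w_seq, w_combined)
```

Because the segment map is affine, F(w_mid) - F(w0) = M2 (w_mid - w0), which is exactly the correction applied on the last line.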
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nParallel Stochastic Gradient Descent with Sound Combiners\nStochastic gradient descent (SGD) is a well-known method for regression and classification tasks. However, it is an inherently sequential algorithm — at each step, the processing of the current example depends on the parameters learned from the previous examples. Prior approaches to parallelizing SGD, such as Hogwild! and AllReduce, do not honor these dependences across threads and thus can potentially suffer poor convergence rates and/or poor scalability. This paper proposes SymSGD, a parallel SGD algorithm that retains the sequential semantics of SGD in expectation. Each thread in this approach learns a local model and a probabilistic model combiner that allows the local models to be combined to produce the same result as what a sequential SGD would have produced, in expectation. This SymSGD approach is applicable to any linear learner whose update rule is linear. This paper evaluates SymSGD’s accuracy and performance on 9 datasets on a shared-memory machine shows up-to 13× speedup over our heavily optimized sequential baseline on 16 cores.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper describes a correction technique to combine updates from multiple SGD to make it statistically equivalent to sequential technique. Comments 1) The proposed method is novel and interesting to allow update to be corrected even when the update is delayed. 2) The proposed theory can only be applied to square loss setting (with linear update rule), making it somewhat limited. This paper would be much more interesting to ICLR community, if the technique is applicable to general objective function and settings of deep neural networks. 3) The resulting technique requires book-keeping of a dimensional reduced combiner matrix, which causes more computation in terms of complexity. The authors argue that the overhead can be canceled with SIMD support for symbolic update. 
However, the normal update of SGD might also benefit from SIMD, especially when the dataset is dense. Overall, even though the practical value of this work is limited by 2) and 3), the technique (specifically the correction rule) proposed in the paper could be of interest to people scaling up learning. I would encourage the author to extend the method to the case of non-linear objective functions, which could make it more interesting to the ICLR community.", "role": "assistant" } ]
[ "This paper describes a correction technique to combine updates from multiple SGD to make it statistically equivalent to sequential technique.", "Comments 1) The proposed method is novel and interesting to allow update to be corrected even when the update is delayed.", "2) The proposed theory can only be applied to square loss setting (with linear update rule), making it somewhat limited.", "This paper would be much more interesting to ICLR community, if the technique is applicable to general objective function and settings of deep neural networks.", "3) The resulting technique requires book-keeping of a dimensional reduced combiner matrix, which causes more computation in terms of complexity.", "The authors argue that the overhead can be canceled with SIMD support for symbolic update.", "However, the normal update of SGD might also benefit from SIMD, especially when the dataset is dense.", "Overall, even though the practical value of this work is limited by 2) and 3), the technique(specifically the correction rule) proposed in the paper could be of interest to people scaling up learning.", "I would encourage the author to extend the method to the cases of non-linear objective function which could make it more interesting to the ICLR community" ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 4, "materials_and_methods": 8, "praise": 2, "presentation_and_reporting": 0, "results_and_discussion": 2, "suggestion_and_solution": 4, "total": 9 }
0
0
0.444444
0.888889
0.222222
0
0.222222
0.444444
9
0
0
1
1
1
0
1
1
2.222222
1.969703
0.252519
Divide and Conquer with Neural Networks
We consider the learning of algorithmic tasks by mere observation of input-output pairs. Rather than studying this as a black-box discrete regression problem with no assumption whatsoever on the input-output mapping, we concentrate on tasks that are amenable to the principle of divide and conquer, and study what are its implications in terms of learning. This principle creates a powerful inductive bias that we exploit with neural architectures that are defined recursively, by learning two scale-invariant atomic operators: how to split a given input into two disjoint sets, and how to merge two partially solved tasks into a larger partial solution. The scale invariance creates parameter sharing across all stages of the architecture, and the dynamic design creates architectures whose complexity can be tuned in a differentiable manner. As a result, our model is trained by backpropagation not only to minimize the errors at the output, but also to do so as efficiently as possible, by enforcing shallower computation graphs. Moreover, thanks to the scale invariance, the model can be trained with only input/output pairs, removing the need to know oracle intermediate split and merge decisions. As it turns out, accuracy and complexity are not independent qualities, and we verify empirically that when the learnt complexity matches the underlying complexity of the task, this results in higher accuracy and better generalization in two paradigmatic problems: sorting and finding planar convex hulls.
The basic idea of this contribution is very nice and worth pursuing: how to use the powerful “divide and conquer” algorithm design strategy to learn better programs for tasks such as sorting or planar convex hull. However, the execution of this idea is not convincing and needs polishing before acceptance. As it is right now, the paper has a proof-of-concept feel that makes it great for a workshop contribution. My main concern is that the method presented is currently not easily applicable to other tasks. Typically, demonstrations of program induction from input-output examples on well known tasks serve the purpose of proving that a generic learning machine is able to solve some well known tasks, and will be useful on other tasks due to its generality. This contribution, however, presents a learning machine that is very hand-tailored to the two chosen tasks. The paper essentially demonstrates that with enough engineering (hardcoding the recurrency structure, designing problem-specific rules of supervision at lower recurrency levels) one can get a partially trainable sorter or convex hull solver. I found the contribution relatively hard to understand. High level ideas are mixed with low-level tricks required to get the model to work and it is not clear either how the models operate, nor how much of them was actually learned, and how much was designed. The answers to the questions did help, but didn’t make it into the paper. Mixing the descriptions of the tricks required to solve the two tasks makes things even more confusing. I believe that the paper would be much more accessible if instead of promising a general solution it clearly stated the challenges faced by the authors and the possible solutions. Highlights: + Proof-of-concept of a partially-trainable implementation of the important “divide and conquer” paradigm ++ Explicit reasoning about complexity of induced programs - The solution isn’t generic enough to be applicable to unknown problems - the networks require tricks specific to each problem - The writing style pictures the method as very general, but falls back on very low level details specific to each task
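The recursive structure under review is easy to picture as code. The skeleton below is purely illustrative: `split`, `merge`, and `atom` are hypothetical placeholders for the learned modules (the same shared parameters at every scale), and the hand-coded operators that turn the scaffold into merge sort are not part of the paper.

```python
import heapq

def divide_and_conquer(x, split, merge, atom):
    """Recursive scaffold: split the input, solve the halves with
    shared operators, merge the partial solutions."""
    if len(x) <= 1:
        return atom(x)
    left, right = split(x)                 # learned binary partition (placeholder)
    return merge(divide_and_conquer(left, split, merge, atom),
                 divide_and_conquer(right, split, merge, atom))

# With hand-coded operators the scaffold reduces to merge sort:
out = divide_and_conquer(
    [3, 1, 4, 1, 5],
    split=lambda s: (s[:len(s) // 2], s[len(s) // 2:]),
    merge=lambda a, b: list(heapq.merge(a, b)),
    atom=lambda s: list(s),
)
print(out)  # [1, 1, 3, 4, 5]
```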
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDivide and Conquer with Neural Networks\nWe consider the learning of algorithmic tasks by mere observation of input-output pairs. Rather than studying this as a black-box discrete regression problem with no assumption whatsoever on the input-output mapping, we concentrate on tasks that are amenable to the principle of divide and conquer, and study what are its implications in terms of learning. This principle creates a powerful inductive bias that we exploit with neural architectures that are defined recursively, by learning two scale-invariant atomic operators: how to split a given input into two disjoint sets, and how to merge two partially solved tasks into a larger partial solution. The scale invariance creates parameter sharing across all stages of the architecture, and the dynamic design creates architectures whose complexity can be tuned in a differentiable manner. As a result, our model is trained by backpropagation not only to minimize the errors at the output, but also to do so as efficiently as possible, by enforcing shallower computation graphs. Moreover, thanks to the scale invariance, the model can be trained only with only input/output pairs, removing the need to know oracle intermediate split and merge decisions. As it turns out, accuracy and complexity are not independent qualities, and we verify empirically that when the learnt complexity matches the underlying complexity of the task, this results in higher accuracy and better generalization in two paradigmatic problems: sorting and finding planar convex hulls.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe basic idea of this contribution is very nice and worth pursuing: how to use the powerful “divide and conquer” algorithm design strategy to learn better programs for tasks such as sorting or planar convex hull. However, the execution of this idea is not convincing and needs polishing before acceptance. As it is right now, the paper has a proof-of-concept feel that makes it great for a workshop contribution. 
My main concern is that the method presented is currently not easily applicable to other tasks. Typically, demonstrations of program induction from input-output examples on well known tasks serve the purpose of proving that a generic learning machine is able to solve some well known tasks, and will be useful on other tasks due to its generality. This contribution, however, presents a learning machine that is very hand-tailored to the two chosen tasks. The paper essentially demonstrates that with enough engineering (hardcoding the recurrency structure, designing problem-specific rules of supervision at lower recurrency levels) one can get a partially trainable sorter or convex hull solver. I found the contribution relatively hard to understand. High level ideas are mixed with low-level tricks required to get the model to work and it is not clear either how the models operate, nor how much of them was actually learned, and how much was designed. The answers to the questions did help, but didn’t make it into the paper. Mixing the descriptions of the tricks required to solve the two tasks makes things even more confusing. I believe that the paper would be much more accessible if instead of promising a general solution it clearly stated the challenges faced by the authors and the possible solutions. Highlights: + Proof-of-concept of a partially-trainable implementation of the important “divide and conquer” paradigm ++ Explicit reasoning about complexity of induced programs - The solution isn’t generic enough to be applicable to unknown problems - the networks require tricks specific to each problem - The writing style pictures the method as very general, but falls back on very low level details specific to each task", "role": "assistant" } ]
[ "The basic idea of this contribution is very nice and worth pursuing: how to use the powerful “divide and conquer” algorithm design strategy to learn better programs for tasks such as sorting or planar convex hull.", "However, the execution of this idea is not convincing and needs polishing before acceptance.", "As it is right now, the paper has a proof-of-concept feel that makes it great for a workshop contribution.", "My main concern is that the method presented is currently not easily applicable to other tasks.", "Typically, demonstrations of program induction from input-output examples on well known tasks serves the purpose of proving, that a generic learning machine is able to solve some well known tasks, and will be useful on other tasks due to its generality.", "This contribution, however, presents a learning machine that is very hand-tailored to the two chosen tasks.", "The paper essentially demonstrates that with enough engineering (hardcoding the recurrency structure, designing problem-specific rules of supervision at lower recurrency levels) one can get a partially trainable sorter or convex hull solver.", "I found the contribution relatively hard to understand.", "High level ideas are mixed with low-level tricks required to get the model to work and it is not clear either how the models operate, nor how much of them was actually learned, and how much was designed.", "The answer to the questions did hep, nut didn*t make it into the paper.", "Mixing the descriptions of the tricks required to solve the two tasks makes things even more confusing.", "I believe that the paper would be much more accessible if instead of promising a general solution it clearly stated the challenges faced by the authors and the possible solutions.", "Highlights: + Proof-of-concept of a partially-trainable implementation of the important “divide and conquer” paradigm ++ Explicit reasoning about complexity of induced programs - The solution isn’t generic enough to be applicable to unknown problems - the networks require tricks specific to each problem - The writing style pictures the method as very general, but falls back on very low level details specific to each task" ]
{ "criticism": 7, "example": 0, "importance_and_relevance": 6, "materials_and_methods": 7, "praise": 3, "presentation_and_reporting": 3, "results_and_discussion": 1, "suggestion_and_solution": 1, "total": 13 }
0.538462
0
0.461538
0.538462
0.230769
0.230769
0.076923
0.076923
13
1
0
1
1
1
1
1
1
2.153846
2.153846
0
Gradients of Counterfactuals
Gradients have been used to quantify feature importance in machine learning models. Unfortunately, in nonlinear deep networks, not only individual neurons but also the whole network can saturate, and as a result an important input feature can have a tiny gradient. We study various networks, and observe that this phenomenon is indeed widespread, across many inputs. We propose to examine interior gradients, which are gradients of counterfactual inputs constructed by scaling down the original input. We apply our method to the GoogleNet architecture for object recognition in images, as well as a ligand-based virtual screening network with categorical features and an LSTM based language model for the Penn Treebank dataset. We visualize how interior gradients better capture feature importance. Furthermore, interior gradients are applicable to a wide variety of deep networks, and have the attribution property that the feature importance scores sum to the prediction score. Best of all, interior gradients can be computed just as easily as gradients. In contrast, previous methods are complex to implement, which hinders practical adoption.
This work proposes to use visualization of gradients to further understand the importance of features (i.e. pixels) for visual classification. Overall, the presented visualizations are interesting, however, the approach is very ad hoc. The authors do not explain why visualizing regular gradients isn’t correlated with the importance of features relevant to the given visual category and proceed to the interior gradient approach. One particular question concerns regular gradients at features that form the spatial support of the visual class. Is it the case that the gradients of the features that are confident of the prediction remain low, while those with high uncertainty will have strong gradients? With regards to the interior gradients, it is unclear how the scaling parameter alpha affects the feature importance and how it is related to attention. Finally, does this model use batch normalization?
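The interior-gradients idea this review questions is only a few lines of code. The sketch below uses a toy saturating model with an analytic input gradient, an illustrative assumption standing in for backpropagation through a real network, and demonstrates both the saturation problem and the attribution property mentioned in the abstract.

```python
import numpy as np

def f(x, w):
    return np.tanh(w @ x)                    # toy saturating network

def input_grad(x, w):
    return (1.0 - np.tanh(w @ x) ** 2) * w   # analytic gradient wrt x

def interior_gradients(x, w, alphas):
    """Gradients at counterfactual inputs alpha * x, as described above."""
    return np.stack([input_grad(a * x, w) for a in alphas])

x = np.array([3.0, 0.0, 2.0])
w = np.array([2.0, 5.0, 1.0])
alphas = np.linspace(0.0, 1.0, 51)

print(input_grad(x, w))                      # near zero: the network saturates
grads = interior_gradients(x, w, alphas)     # informative at small alpha

# Averaging interior gradients and scaling by x gives attributions whose
# sum approximates the prediction score f(x) - f(0).
attribution = grads.mean(axis=0) * x
print(attribution.sum(), f(x, w) - f(np.zeros(3), w))
```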
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nGradients of Counterfactuals\nGradients have been used to quantify feature importance in machine learning models. Unfortunately, in nonlinear deep networks, not only individual neurons but also the whole network can saturate, and as a result an important input feature can have a tiny gradient. We study various networks, and observe that this phenomena is indeed widespread, across many inputs. We propose to examine interior gradients, which are gradients of counterfactual inputs constructed by scaling down the original input. We apply our method to the GoogleNet architecture for object recognition in images, as well as a ligand-based virtual screening network with categorical features and an LSTM based language model for the Penn Treebank dataset. We visualize how interior gradients better capture feature importance. Furthermore, interior gradients are applicable to a wide variety of deep networks, and have the attribution property that the feature importance scores sum to the the prediction score. Best of all, interior gradients can be computed just as easily as gradients. In contrast, previous methods are complex to implement, which hinders practical adoption.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis work proposes to use visualization of gradients to further understand the importance of features (i.e. pixels) for visual classification. Overall, this presented visualizations are interesting, however, the approach is very ad hoc. The authors do not explain why visualizing regular gradients isn*t correlated with the importance of features relevant to the given visual category and proceed to the interior gradient approach. One particular question with regular gradients at features that form the spatial support of the visual class. Is it the case that the gradients of the features that are confident of the prediction remain low, while those with high uncertainty will have strong gradients? 
With regards to the interior gradients, it is unclear how the scaling parameter alpha affects the feature importance and how it is related to attention. Finally, does this model use batch normalization?", "role": "assistant" } ]
[ "This work proposes to use visualization of gradients to further understand the importance of features (i.e.", "pixels) for visual classification.", "Overall, this presented visualizations are interesting, however, the approach is very ad hoc.", "The authors do not explain why visualizing regular gradients isn*t correlated with the importance of features relevant to the given visual category and proceed to the interior gradient approach.", "One particular question with regular gradients at features that form the spatial support of the visual class.", "Is it the case that the gradients of the features that are confident of the prediction remain low, while those with high uncertainty will have strong gradients?", "With regards to the interior gradients, it is unclear how the scaling parameter alpha affects the feature importance and how it is related to attention.", "Finally, does this model use batch normalization?" ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 8, "praise": 1, "presentation_and_reporting": 1, "results_and_discussion": 3, "suggestion_and_solution": 1, "total": 8 }
criticism: 0.375, example: 0, importance_and_relevance: 0.375, materials_and_methods: 1, praise: 0.125, presentation_and_reporting: 0.125, results_and_discussion: 0.375, suggestion_and_solution: 0.125
total: 8
has_criticism: 1, has_example: 0, has_importance_and_relevance: 1, has_materials_and_methods: 1, has_praise: 1, has_presentation_and_reporting: 1, has_results_and_discussion: 1, has_suggestion_and_solution: 1
reward_value: 2.5, reward_value_length_adjusted: 2.105439, length_penalty: 0.394561
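The review above asks how the scaling parameter alpha behaves. As a concrete illustration of the interior-gradients idea described in the abstract (gradients taken at scaled-down counterfactual inputs alpha * x), here is a minimal sketch on a toy saturating model; the sigmoid model, its weights, and the Riemann approximation of the path integral are illustrative choices, not the paper's networks:

```python
import numpy as np

# Toy saturating model: a sigmoid of a linear score. The weights are arbitrary
# illustrative values chosen so that the unit saturates at the input x below.
w = np.array([4.0, 4.0])

def model(x):
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

def input_gradient(x):
    y = model(x)
    return y * (1.0 - y) * w  # analytic gradient of the sigmoid w.r.t. x

x = np.array([1.0, 1.0])
print(input_gradient(x))  # tiny gradient, although x clearly drives the output

# Interior gradients: gradients at scaled-down counterfactuals alpha * x.
alphas = np.linspace(0.0, 1.0, 101)
grads = np.array([input_gradient(a * x) for a in alphas])

# Attribution: per-feature scores whose sum approximates f(x) - f(0).
attribution = x * grads.mean(axis=0)
print(attribution.sum(), model(x) - model(0 * x))  # approximately equal
```

The last line illustrates the attribution property mentioned in the abstract: summing the per-feature scores approximately recovers the change in prediction score between the zero input and x.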
Deep Convolutional Neural Network Design Patterns
Recent research in the deep learning field has produced a plethora of new architectures. At the same time, a growing number of groups are applying deep learning to new applications. Some of these groups are likely to be composed of inexperienced deep learning practitioners who are baffled by the dizzying array of architecture choices and therefore opt to use an older architecture (i.e., Alexnet). Here we attempt to bridge this gap by mining the collective knowledge contained in recent deep learning research to discover underlying principles for designing neural network architectures. In addition, we describe several architectural innovations, including Fractal of FractalNet network, Stagewise Boosting Networks, and Taylor Series Networks (our Caffe code and prototxt files are available at https://github.com/iPhysicist/CNNDesignPatterns). We hope others are inspired to build on our preliminary work.
The authors have grouped recent work in convolutional neural network design (specifically with respect to image classification) to identify core design principles guiding the field at large. The 14 principles they produce (along with associated references) include a number of useful and correct observations that would be an asset to anyone unfamiliar with the field. The authors explore a number of architectures on CIFAR-10 and CIFAR-100 guided by these principles. The authors have collected a quality set of references on the subject and grouped them well, which is valuable for young researchers. Clearly the authors explored many architectural changes as part of their experiments, and a publicly available code base is always nice. Overall, the writing seems to jump around a bit and the motivations behind some design principles feel lost in the confusion. For example, “Design Pattern 4: Increase Symmetry argues for architectural symmetry as a sign of beauty and quality” is presented as one of 14 core design principles without any further justification. Similarly, “Design Pattern 6: Over-train includes any training method where the network is trained on a harder problem than necessary to improve generalization performance of inference” is presented in the middle of a paragraph with no supporting references or further explanation. The experimental portion of this paper feels scattered, with many different approaches being presented based on subsets of the design principles. In general, these approaches either are minor modifications of existing networks (different FractalNet pooling strategies) or are novel architectures that do not perform well. The exception is the Fractal-of-Fractal network, which achieves slightly improved accuracy but also introduces many more network parameters (increased capacity) over the original FractalNet. Preliminary rating: It is a useful and perhaps noble task to collect and distill research from many sources to find patterns (and perhaps gaps) in the state of a field; however, some of the patterns presented do not seem well developed and include principles that are poorly explained. Furthermore, the innovative architectures motivated by the design principles either fall short or achieve slightly better accuracy by introducing many more parameters (Fractal-of-Fractal networks). For a paper addressing the topic of higher-level design trends, I would appreciate additional rigorous experimentation around each principle rather than novel architectures being presented.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start by summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will point them out and suggest how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how the work will impact the scientific community. This might include the applications of the work and how it might transform the way we study a certain topic. Finally, I will mention only issues that are critical, to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDeep Convolutional Neural Network Design Patterns\nRecent research in the deep learning field has produced a plethora of new architectures. At the same time, a growing number of groups are applying deep learning to new applications. Some of these groups are likely to be composed of inexperienced deep learning practitioners who are baffled by the dizzying array of architecture choices and therefore opt to use an older architecture (i.e., Alexnet). Here we attempt to bridge this gap by mining the collective knowledge contained in recent deep learning research to discover underlying principles for designing neural network architectures. In addition, we describe several architectural innovations, including Fractal of FractalNet network, Stagewise Boosting Networks, and Taylor Series Networks (our Caffe code and prototxt files are available at https://github.com/iPhysicist/CNNDesignPatterns). We hope others are inspired to build on our preliminary work.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors have grouped recent work in convolutional neural network design (specifically with respect to image classification) to identify core design principles guiding the field at large. The 14 principles they produce (along with associated references) include a number of useful and correct observations that would be an asset to anyone unfamiliar with the field. The authors explore a number of architectures on CIFAR-10 and CIFAR-100 guided by these principles. The authors have collected a quality set of references on the subject and grouped them well which is valuable for young researchers. Clearly the authors explored a many of architectural changes as part of their experiments and publicly available code base is always nice. Overall the writing seems to jump around a bit and the motivations behind some design principles feel lost in the confusion. 
For example, *Design Pattern 4: Increase Symmetry argues for architectural symmetry as a sign of beauty and quality* is presented as one of 14 core design principles without any further justification. Similarly *Design Pattern 6: Over-train includes any training method where the network is trained on a harder problem than necessary to improve generalization performance of inference* is presented in the middle of a paragraph with no supporting references or further explanation. The experimental portion of this paper feels scattered with many different approaches being presented based on subsets of the design principles. In general, these approaches either are minor modifications of existing networks (different FractalNet pooling strategies) or are novel architectures that do not perform well. The exception being the Fractal-of-Fractal network which achieves slightly improved accuracy but also introduces many more network parameters (increased capacity) over the original FractalNet. Preliminary rating: It is a useful and perhaps noble task to collect and distill research from many sources to find patterns (and perhaps gaps) in the state of a field; however, some of the patterns presented do not seem well developed and include principles that are poorly explained. Furthermore, the innovative architectures motivated by the design principles either fall short or achieve slightly better accuracy by introducing many more parameters (Fractal-Of-Fractal networks). For a paper addressing the topic of higher level design trends, I would appreciate additional rigorous experimentation around each principle rather than novel architectures being presented.", "role": "assistant" } ]
[ "The authors have grouped recent work in convolutional neural network design (specifically with respect to image classification) to identify core design principles guiding the field at large.", "The 14 principles they produce (along with associated references) include a number of useful and correct observations that would be an asset to anyone unfamiliar with the field.", "The authors explore a number of architectures on CIFAR-10 and CIFAR-100 guided by these principles.", "The authors have collected a quality set of references on the subject and grouped them well which is valuable for young researchers.", "Clearly the authors explored a many of architectural changes as part of their experiments and publicly available code base is always nice.", "Overall the writing seems to jump around a bit and the motivations behind some design principles feel lost in the confusion.", "For example, *Design Pattern 4: Increase Symmetry argues for architectural symmetry as a sign of beauty and quality* is presented as one of 14 core design principles without any further justification.", "Similarly *Design Pattern 6: Over-train includes any training method where the network is trained on a harder problem than necessary to improve generalization performance of inference* is presented in the middle of a paragraph with no supporting references or further explanation.", "The experimental portion of this paper feels scattered with many different approaches being presented based on subsets of the design principles.", "In general, these approaches either are minor modifications of existing networks (different FractalNet pooling strategies) or are novel architectures that do not perform well.", "The exception being the Fractal-of-Fractal network which achieves slightly improved accuracy but also introduces many more network parameters (increased capacity) over the original FractalNet.", "Preliminary rating: It is a useful and perhaps noble task to collect and distill research from many sources to find patterns (and perhaps gaps) in the state of a field; however, some of the patterns presented do not seem well developed and include principles that are poorly explained.", "Furthermore, the innovative architectures motivated by the design principles either fall short or achieve slightly better accuracy by introducing many more parameters (Fractal-Of-Fractal networks).", "For a paper addressing the topic of higher level design trends, I would appreciate additional rigorous experimentation around each principle rather than novel architectures being presented." ]
{ "criticism": 4, "example": 2, "importance_and_relevance": 6, "materials_and_methods": 12, "praise": 6, "presentation_and_reporting": 1, "results_and_discussion": 1, "suggestion_and_solution": 1, "total": 14 }
criticism: 0.285714, example: 0.142857, importance_and_relevance: 0.428571, materials_and_methods: 0.857143, praise: 0.428571, presentation_and_reporting: 0.071429, results_and_discussion: 0.071429, suggestion_and_solution: 0.071429
total: 14
has_criticism: 1, has_example: 1, has_importance_and_relevance: 1, has_materials_and_methods: 1, has_praise: 1, has_presentation_and_reporting: 1, has_results_and_discussion: 1, has_suggestion_and_solution: 1
reward_value: 2.357143, reward_value_length_adjusted: 2.34136, length_penalty: 0.015782
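For readers trying to line up the bare numbers in each record with the annotations, the derived fields appear to be simple functions of the criteria_count block; the sketch below reproduces them for the record above. The field names (has_*, reward_value, reward_value_length_adjusted, length_penalty) are inferred from the values themselves, not stated in the dump:

```python
# Hypothetical reconstruction of the derived per-record fields from the
# criteria_count annotation of the record above; field names are inferred.
criteria = {"criticism": 4, "example": 2, "importance_and_relevance": 6,
            "materials_and_methods": 12, "praise": 6,
            "presentation_and_reporting": 1, "results_and_discussion": 1,
            "suggestion_and_solution": 1}
total = 14

# Per-category fractions: count / total (e.g., 4 / 14 -> 0.285714).
fractions = {k: round(v / total, 6) for k, v in criteria.items()}

# Presence flags: 1 if the category occurs at least once in the review.
flags = {f"has_{k}": int(v > 0) for k, v in criteria.items()}

# The last three numbers per record satisfy, up to rounding in the dump:
#   length_penalty == reward_value - reward_value_length_adjusted
reward_value, reward_value_length_adjusted = 2.357143, 2.34136
length_penalty = round(reward_value - reward_value_length_adjusted, 6)

print(fractions["criticism"], flags["has_example"], length_penalty)
# -> 0.285714 1 0.015783 (the dump stores 0.015782, computed before rounding)
```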
An Analysis of Feature Regularization for Low-shot Learning
Low-shot visual learning, the ability to recognize novel object categories from very few, or even one example, is a hallmark of human visual intelligence. Though successful on many tasks, deep learning approaches tend to be notoriously data-hungry. Recently, feature penalty regularization has proved effective at capturing new concepts. In this work, we provide both empirical evidence and theoretical analysis on how and why these methods work. We also propose a better design of cost function with improved performance. Close scrutiny reveals the centering effect of feature representation, as well as the intrinsic connection with batch normalization. Extensive experiments on synthetic datasets, the one-shot learning benchmark “Omniglot”, and large-scale ImageNet validate our analysis.
This paper proposes an analysis of regularization (the weight Frobenius norm and the feature L2 norm), showing that it is equivalent to another proposed regularization, the gradient magnitude loss. They then argue that: 1) it is helpful to low-shot learning, 2) it is numerically stable, 3) it is a soft version of Batch Normalization. Finally, they demonstrate experimentally that such a regularization improves performance on low-shot tasks. First, this is a nice analysis of some simple models, and it offers interesting insights into some optimization issues. Unfortunately, the authors do not demonstrate, nor argue in a convincing manner, that such an analysis extends to deep non-linear computation structures. I feel like the authors could write a full paper about “results can be derived for ?(x) with convex differentiable non-linear activation functions such as ReLU”, both via analysis and experimentation to measure numerical stability. Second, the authors again show an interesting correspondence to batch normalization, but IMO fail to experimentally show its relevance. Finally, I understand the appeal of the proposed method from a numerical stability point of view, but am not convinced that it has any effect on low-shot learning in the high-dimensional spaces that deep networks are used for. I commend the authors for contributing to the mathematical understanding of our field, but I think they have yet to demonstrate the large-scale effectiveness of what they propose. At the same time, I feel like this paper does not have a clear and strong message. It makes various (interesting) claims about a number of things, but they seem more or less disparate, and only loosely related to low-shot learning. Notes: - “an expectation taken with respect to the empirical distribution generated by the training set”: generally, the training set is viewed as a Monte Carlo sample of the underlying, unknown data distribution \mathcal{D}. - “we can see that our model learns meaningful representations”: it gets a 6.5% improvement over the baseline, but there is no analysis of the meaningfulness of the representations. - “Table 13.2” should be “Table 2”. - Please be mindful of formatting: some citations should be parenthesized, and there are numerous extraneous and missing spaces between words and sentences.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start by summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will point them out and suggest how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how the work will impact the scientific community. This might include the applications of the work and how it might transform the way we study a certain topic. Finally, I will mention only issues that are critical, to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAn Analysis of Feature Regularization for Low-shot Learning\nLow-shot visual learning, the ability to recognize novel object categories from very few, or even one example, is a hallmark of human visual intelligence. Though successful on many tasks, deep learning approaches tends to be notoriously data-hungry. Recently, feature penalty regularization has been proved effective on capturing new concepts. In this work, we provide both empirical evidence and theoretical analysis on how and why these methods work. We also propose a better design of cost function with improved performance. Close scrutiny reveals the centering effect of feature representation, as well as the intrinsic connection with batch normalization. Extensive experiments on synthetic datasets, the one-shot learning benchmark “Omniglot”, and large-scale ImageNet validate our analysis.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes analysis of regularization, weight Froebius-norm and feature L2 norm, showing that it is equivalent to another proposed regularization, gradient magnitude loss. They then argue that: 1) it is helpful to low-shot learning, 2) it is numerically stable, 3) it is a soft version of Batch Normalization. Finally, they demonstrate experimentally that such a regularization improves performance on low-shot tasks. First, this is a nice analysis of some simple models, and proposes interesting insights in some optimization issues. Unfortunately, the authors do not demonstrate, nor argue in a convincing manner, that such an analysis extends to deep non-linear computation structures. I feel like the authors could write a full paper about *results can be derived for ?(x) with convex differentiable non-linear activation functions such as ReLU*, both via analysis and experimentation to measure numerical stability. Second, the authors again show an interesting correspondance to batch normalization, but IMO fail to experimentally show its relevance. 
Finally, I understand the appeal of the proposed method from a numerical stability point of view, but am not convinced that it has any effect on low-shot learning in the high dimensional spaces that deep networks are used for. I commend the authors for contributing to the mathematical understanding of our field, but I think they have yet to demonstrate the large scale effectiveness of what they propose. At the same time, I feel like this paper does not have a clear and strong message. It makes various (interesting) claims about a number of things, but they seem more or less disparate, and only loosely related to low-shot learning. notes: - *an expectation taken with respect to the empirical distribution generated by the training set*, generally the training set is viewed as a *montecarlo* sample of the underlying, unknown data distribution mathcal{D}. - *we can see that our model learns meaningful representations*, it gets a 6.5% improvement on the baseline, but there is no analysis of the meaningfulness of the representations. - *Table 13.2* should be *Table 2*. - please be mindful of formatting, some citations should be parenthesized and there are numerous extraneous and missing spacings between words and sentences.", "role": "assistant" } ]
[ "This paper proposes analysis of regularization, weight Froebius-norm and feature L2 norm, showing that it is equivalent to another proposed regularization, gradient magnitude loss.", "They then argue that: 1) it is helpful to low-shot learning, 2) it is numerically stable, 3) it is a soft version of Batch Normalization.", "Finally, they demonstrate experimentally that such a regularization improves performance on low-shot tasks.", "First, this is a nice analysis of some simple models, and proposes interesting insights in some optimization issues.", "Unfortunately, the authors do not demonstrate, nor argue in a convincing manner, that such an analysis extends to deep non-linear computation structures.", "I feel like the authors could write a full paper about *results can be derived for ?", "(x) with convex differentiable non-linear activation functions such as ReLU*, both via analysis and experimentation to measure numerical stability.", "Second, the authors again show an interesting correspondance to batch normalization, but IMO fail to experimentally show its relevance.", "Finally, I understand the appeal of the proposed method from a numerical stability point of view, but am not convinced that it has any effect on low-shot learning in the high dimensional spaces that deep networks are used for.", "I commend the authors for contributing to the mathematical understanding of our field, but I think they have yet to demonstrate the large scale effectiveness of what they propose.", "At the same time, I feel like this paper does not have a clear and strong message.", "It makes various (interesting) claims about a number of things, but they seem more or less disparate, and only loosely related to low-shot learning.", "notes: - *an expectation taken with respect to the empirical distribution generated by the training set*, generally the training set is viewed as a *montecarlo* sample of the underlying, unknown data distribution mathcal{D}.", "- *we can see that our model learns meaningful representations*, it gets a 6.5% improvement on the baseline, but there is no analysis of the meaningfulness of the representations.", "- *Table 13.2* should be *Table 2*.", "- please be mindful of formatting, some citations should be parenthesized and there are numerous extraneous and missing spacings between words and sentences." ]
{ "criticism": 7, "example": 0, "importance_and_relevance": 4, "materials_and_methods": 10, "praise": 3, "presentation_and_reporting": 4, "results_and_discussion": 6, "suggestion_and_solution": 3, "total": 16 }
criticism: 0.4375, example: 0, importance_and_relevance: 0.25, materials_and_methods: 0.625, praise: 0.1875, presentation_and_reporting: 0.25, results_and_discussion: 0.375, suggestion_and_solution: 0.1875
total: 16
has_criticism: 1, has_example: 0, has_importance_and_relevance: 1, has_materials_and_methods: 1, has_praise: 1, has_presentation_and_reporting: 1, has_results_and_discussion: 1, has_suggestion_and_solution: 1
reward_value: 2.3125, reward_value_length_adjusted: 2.170458, length_penalty: 0.142042
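Since the review debates how feature penalty regularization behaves, a minimal sketch of the general idea (adding a scaled feature L2 norm to the task loss) may help; the linear feature map, the squared-error task loss, and the lambda_ value are illustrative assumptions, not the paper's exact cost function:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 8))       # batch of inputs
Y = rng.normal(size=(32, 4))       # targets
W = rng.normal(size=(8, 4)) * 0.1  # linear feature map, small init
lambda_ = 1e-2                     # illustrative penalty strength

def loss(W):
    F = X @ W                                           # feature representation
    task = 0.5 * np.mean(np.sum((F - Y) ** 2, axis=1))  # task loss
    penalty = lambda_ * np.mean(np.sum(F ** 2, axis=1)) # feature L2 penalty
    return task + penalty

def grad(W):
    # The penalty adds a shrink term 2 * lambda * X^T F / n on top of the task
    # gradient, pulling the feature activations toward zero (a centering pull).
    F = X @ W
    n = X.shape[0]
    return X.T @ (F - Y) / n + 2 * lambda_ * (X.T @ F) / n

W_next = W - 0.1 * grad(W)  # one illustrative gradient step
print(loss(W), loss(W_next))
```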
Learning Python Code Suggestion with a Sparse Pointer Network
To enhance developer productivity, all modern integrated development environments (IDEs) include code suggestion functionality that proposes likely next tokens at the cursor. While current IDEs work well for statically-typed languages, their reliance on type annotations means that they do not provide the same level of support for dynamic programming languages as for statically-typed languages. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code. Recent work has shown that language models can improve code suggestion systems by learning from software repositories. This paper introduces a neural language model with a sparse pointer network aimed at capturing very long range dependencies. We release a large-scale code suggestion corpus of 41M lines of Python code crawled from GitHub. On this corpus, we found standard neural language models to perform well at suggesting local phenomena, but struggle to refer to identifiers that are introduced many tokens in the past. By augmenting a neural language model with a pointer network specialized in referring to predefined classes of identifiers, we obtain a much lower perplexity and a 5 percentage point increase in accuracy for code suggestion compared to an LSTM baseline. In fact, this increase in code suggestion accuracy is due to a 13 times more accurate prediction of identifiers. Furthermore, a qualitative analysis shows this model indeed captures interesting long-range dependencies, like referring to a class member defined over 60 tokens in the past.
This paper takes a standard auto-regressive model of source code and augments it with a fixed attention policy that tracks the use of certain token types, like identifiers. Additionally, they release an open-source Python dataset. As expected, this augmentation, the fixed attention policy, improves the perplexity of the model. It seems important to dig a bit deeper into these results and show the contribution of different token types to the achieved perplexity. This is alluded to in the text, but a more thorough comparison would be welcome. The idea of an attention policy that takes advantage of expert knowledge is a nice contribution, but perhaps of limited novelty; for example, the Maddison and Tarlow 2014 paper, which the authors cite, has scoping rules that track previously used identifiers in scope.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start by summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will point them out and suggest how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how the work will impact the scientific community. This might include the applications of the work and how it might transform the way we study a certain topic. Finally, I will mention only issues that are critical, to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning Python Code Suggestion with a Sparse Pointer Network\nTo enhance developer productivity, all modern integrated development environments (IDEs) include code suggestion functionality that proposes likely next tokens at the cursor. While current IDEs work well for statically-typed languages, their reliance on type annotations means that they do not provide the same level of support for dynamic programming languages as for statically-typed languages. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code. Recent work has shown that language models can improve code suggestion systems by learning from software repositories. This paper introduces a neural language model with a sparse pointer network aimed at capturing very long range dependencies. We release a large-scale code suggestion corpus of 41M lines of Python code crawled from GitHub. On this corpus, we found standard neural language models to perform well at suggesting local phenomena, but struggle to refer to identifiers that are introduced many tokens in the past. By augmenting a neural language model with a pointer network specialized in referring to predefined classes of identifiers, we obtain a much lower perplexity and a 5 percentage points increase in accuracy for code suggestion compared to an LSTM baseline. In fact, this increase in code suggestion accuracy is due to a 13 times more accurate prediction of identifiers. Furthermore, a qualitative analysis shows this model indeed captures interesting long-range dependencies, like referring to a class member defined over 60 tokens in the past.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper takes a standard auto-regressive model of source code and augments it with a fixed attention policy that tracks the use of certain token types, like identifiers. Additionally they release a Python open source dataset. As expected this augmentation, the fixed attention policy, improves the perplexity of the model. 
It seems important to dig a bit deeper into these results and show the contribution of different token types to the achieve perplexity. This is alluded to in the text, but a more thorough comparison would be welcome. The idea of an attention policy that takes advantage of expert knowledge is a nice contribution, but perhaps if limited novelty --- for example the Maddison and Tarlow 2014 paper, which the authors cite, has scoping rules that track previously used identifiers in scope.", "role": "assistant" } ]
[ "This paper takes a standard auto-regressive model of source code and augments it with a fixed attention policy that tracks the use of certain token types, like identifiers.", "Additionally they release a Python open source dataset.", "As expected this augmentation, the fixed attention policy, improves the perplexity of the model.", "It seems important to dig a bit deeper into these results and show the contribution of different token types to the achieve perplexity.", "This is alluded to in the text, but a more thorough comparison would be welcome.", "The idea of an attention policy that takes advantage of expert knowledge is a nice contribution, but perhaps if limited novelty --- for example the Maddison and Tarlow 2014 paper, which the authors cite, has scoping rules that track previously used identifiers in scope." ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 5, "praise": 1, "presentation_and_reporting": 2, "results_and_discussion": 1, "suggestion_and_solution": 2, "total": 6 }
criticism: 0, example: 0, importance_and_relevance: 0.333333, materials_and_methods: 0.833333, praise: 0.166667, presentation_and_reporting: 0.333333, results_and_discussion: 0.166667, suggestion_and_solution: 0.333333
total: 6
has_criticism: 0, has_example: 0, has_importance_and_relevance: 1, has_materials_and_methods: 1, has_praise: 1, has_presentation_and_reporting: 1, has_results_and_discussion: 1, has_suggestion_and_solution: 1
reward_value: 2.166667, reward_value_length_adjusted: 1.393328, length_penalty: 0.773339
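For the pointer-augmented language model the review summarizes, the generic copy-mechanism arithmetic looks roughly as follows; the logits, the gate g, and the two-identifier context are placeholders for what an LSTM would produce, and this is a sketch of the general pointer/copy idea rather than the paper's sparse pointer network:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Mix a vocabulary distribution with a pointer distribution over identifiers
# seen earlier in the file. All scores below are illustrative placeholders.
vocab_logits = np.array([1.0, 0.2, -0.5, 0.3])  # over a 4-token vocabulary
pointer_logits = np.array([0.1, 2.0])           # over 2 previously seen identifiers
past_identifier_ids = [2, 3]                    # vocabulary ids they refer to
g = 0.4                                         # P(generate from the vocabulary)

p_vocab = softmax(vocab_logits)
p_pointer = softmax(pointer_logits)

p_next = g * p_vocab
for pid, p in zip(past_identifier_ids, p_pointer):
    p_next[pid] += (1 - g) * p                  # route copy mass onto vocab ids

print(p_next, p_next.sum())                     # a valid distribution: sums to 1
```

The gate g is what lets the model prefer copying a long-ago identifier over generating a fresh token, which is the long-range behavior the review and abstract discuss.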
Demystifying ResNet
We provide a theoretical explanation for the superb performance of ResNet via the study of deep linear networks and some nonlinear variants. We show that with or without nonlinearities, by adding shortcuts that have depth two, the condition number of the Hessian of the loss function at the zero initial point is depth-invariant, which makes training very deep models no more difficult than shallow ones. Shortcuts of higher depth result in an extremely flat (high-order) stationary point initially, from which the optimization algorithm is hard to escape. The 1-shortcut, however, is essentially equivalent to no shortcuts. Extensive experiments are provided accompanying our theoretical results. We show that initializing the network to small weights with 2-shortcuts achieves significantly better results than random Gaussian (Xavier) initialization, orthogonal initialization, and shortcuts of deeper depth, from various perspectives ranging from final loss, learning dynamics and stability, to the behavior of the Hessian along the learning process.
I think the write-up can be improved. The results of the paper also might be somewhat misleading. The behavior when the weights are 0 is not revealing of how the model works in general. I think the work also underestimates the effect of the nonlinearities on the learning dynamics of the model.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start by summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will point them out and suggest how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how the work will impact the scientific community. This might include the applications of the work and how it might transform the way we study a certain topic. Finally, I will mention only issues that are critical, to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDemystifying ResNet\nWe provide a theoretical explanation for the superb performance of ResNet via the study of deep linear networks and some nonlinear variants. We show that with or without nonlinearities, by adding shortcuts that have depth two, the condition number of the Hessian of the loss function at the zero initial point is depth-invariant, which makes training very deep models no more difficult than shallow ones. Shortcuts of higher depth result in an extremely flat (high-order) stationary point initially, from which the optimization algorithm is hard to escape. The 1-shortcut, however, is essentially equivalent to no shortcuts. Extensive experiments are provided accompanying our theoretical results. We show that initializing the network to small weights with 2-shortcuts achieves significantly better results than random Gaussian (Xavier) initialization, orthogonal initialization, and shortcuts of deeper depth, from various perspectives ranging from final loss, learning dynamics and stability, to the behavior of the Hessian along the learning process.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nI think the write-up can be improved. The results of the paper also might be somewhat misleading. The behavior for when weights are 0 is not revealing of how the model works in general. I think the work also underestimates the effect of the nonlinearities on the learning dynamics of the model.", "role": "assistant" } ]
[ "I think the write-up can be improved.", "The results of the paper also might be somewhat misleading.", "The behavior for when weights are 0 is not revealing of how the model works in general.", "I think the work also underestimates the effect of the nonlinearities on the learning dynamics of the model." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 2, "praise": 0, "presentation_and_reporting": 1, "results_and_discussion": 2, "suggestion_and_solution": 2, "total": 4 }
criticism: 0.5, example: 0, importance_and_relevance: 0, materials_and_methods: 0.5, praise: 0, presentation_and_reporting: 0.25, results_and_discussion: 0.5, suggestion_and_solution: 0.5
total: 4
has_criticism: 1, has_example: 0, has_importance_and_relevance: 0, has_materials_and_methods: 1, has_praise: 0, has_presentation_and_reporting: 1, has_results_and_discussion: 1, has_suggestion_and_solution: 1
reward_value: 2.25, reward_value_length_adjusted: 0.971623, length_penalty: 1.278377
Demystifying ResNet
We provide a theoretical explanation for the superb performance of ResNet via the study of deep linear networks and some nonlinear variants. We show that with or without nonlinearities, by adding shortcuts that have depth two, the condition number of the Hessian of the loss function at the zero initial point is depth-invariant, which makes training very deep models no more difficult than shallow ones. Shortcuts of higher depth result in an extremely flat (high-order) stationary point initially, from which the optimization algorithm is hard to escape. The 1-shortcut, however, is essentially equivalent to no shortcuts. Extensive experiments are provided accompanying our theoretical results. We show that initializing the network to small weights with 2-shortcuts achieves significantly better results than random Gaussian (Xavier) initialization, orthogonal initialization, and shortcuts of deeper depth, from various perspectives ranging from final loss, learning dynamics and stability, to the behavior of the Hessian along the learning process.
ResNet and other architectures that use shortcuts have shown empirical success in several domains; therefore, studying the optimization of such architectures is very valuable. This paper is an attempt to address some of the properties of networks that use shortcuts. Some of the experiments in the paper are interesting. However, there are two main issues with the current paper: 1- Linear vs. non-linear: I think studying linear networks is valuable, but we should be careful not to extend the results to networks with non-linear activations without enough evidence. This is especially true for the Hessian, as the Hessians of non-linear networks have very large condition numbers (see the ICLR submission “Singularity of Hessian in Deep Learning”) even in cases where the optimization is not challenging. Therefore, I don't agree with the claims in the paper on non-linear networks. Moreover, one plot on MNIST is not enough to claim that non-linear networks behave similarly to linear networks. 2- Hessian at the zero initial point: The explanation of why we should be interested in the Hessian at the zero initial point is not acceptable. The zero initial point is not interesting because it is a very particular point that cannot tell us about the Hessian during optimization.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start by summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will point them out and suggest how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how the work will impact the scientific community. This might include the applications of the work and how it might transform the way we study a certain topic. Finally, I will mention only issues that are critical, to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDemystifying ResNet\nWe provide a theoretical explanation for the superb performance of ResNet via the study of deep linear networks and some nonlinear variants. We show that with or without nonlinearities, by adding shortcuts that have depth two, the condition number of the Hessian of the loss function at the zero initial point is depth-invariant, which makes training very deep models no more difficult than shallow ones. Shortcuts of higher depth result in an extremely flat (high-order) stationary point initially, from which the optimization algorithm is hard to escape. The 1-shortcut, however, is essentially equivalent to no shortcuts. Extensive experiments are provided accompanying our theoretical results. We show that initializing the network to small weights with 2-shortcuts achieves significantly better results than random Gaussian (Xavier) initialization, orthogonal initialization, and shortcuts of deeper depth, from various perspectives ranging from final loss, learning dynamics and stability, to the behavior of the Hessian along the learning process.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nResNet and other architectures that use shortcuts have shown empirical success in several domains and therefore, studying the optimization for such architectures is very valuable. This paper is an attempt to address some of the properties of networks that use shortcuts. Some of the experiments in the paper are interesting. However, there are two main issues with the current paper: 1- linear vs non-linear: I think studying linear networks is valuable but we should be careful not to extend the results to networks with non-linear activations without enough evidence. This is especially true for Hessian as the Hessian of non-linear networks have very large condition number (see the ICLR submission *Singularity of Hessian in Deep Learning*) even in cases where the optimization is not challenging. Therefore, I don*t agree with the claims in the paper on non-linear networks. 
Moreover, one plot on MNIST is not enough to claim that non-linear networks behave similar to linear networks. 2- Hessian at zero initial point: The explanation of why we should be interested in Hessain at zero initial point is not acceptable. The zero initial point is not interesting because it is a very particular point that cannot tell us about the Hessian during optimization.", "role": "assistant" } ]
[ "ResNet and other architectures that use shortcuts have shown empirical success in several domains and therefore, studying the optimization for such architectures is very valuable.", "This paper is an attempt to address some of the properties of networks that use shortcuts.", "Some of the experiments in the paper are interesting.", "However, there are two main issues with the current paper: 1- linear vs non-linear: I think studying linear networks is valuable but we should be careful not to extend the results to networks with non-linear activations without enough evidence.", "This is especially true for Hessian as the Hessian of non-linear networks have very large condition number (see the ICLR submission *Singularity of Hessian in Deep Learning*) even in cases where the optimization is not challenging.", "Therefore, I don*t agree with the claims in the paper on non-linear networks.", "Moreover, one plot on MNIST is not enough to claim that non-linear networks behave similar to linear networks.", "2- Hessian at zero initial point: The explanation of why we should be interested in Hessain at zero initial point is not acceptable.", "The zero initial point is not interesting because it is a very particular point that cannot tell us about the Hessian during optimization." ]
{ "criticism": 6, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 3, "praise": 3, "presentation_and_reporting": 1, "results_and_discussion": 3, "suggestion_and_solution": 1, "total": 9 }
criticism: 0.666667, example: 0, importance_and_relevance: 0.333333, materials_and_methods: 0.333333, praise: 0.333333, presentation_and_reporting: 0.111111, results_and_discussion: 0.333333, suggestion_and_solution: 0.111111
total: 9
has_criticism: 1, has_example: 0, has_importance_and_relevance: 1, has_materials_and_methods: 1, has_praise: 1, has_presentation_and_reporting: 1, has_results_and_discussion: 1, has_suggestion_and_solution: 1
reward_value: 2.222222, reward_value_length_adjusted: 1.969703, length_penalty: 0.252519
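The abstract's central claim (the Hessian at the zero initial point is nonzero for depth-2 shortcuts but vanishes for deeper ones, leaving a flat, high-order stationary point) can be checked on a scalar toy model; the finite-difference check below is an illustrative sanity check under that reading, not the paper's derivation:

```python
import numpy as np

# Scalar toy check: with a depth-k shortcut block f(x) = (1 + w_1 * ... * w_k) * x
# and squared loss, the origin is a stationary point whose Hessian is nonzero
# for k = 2 but vanishes for k = 3 (a flatter, higher-order stationary point).
x, y = 1.0, 2.0

def loss(w):
    return 0.5 * ((1.0 + np.prod(w)) * x - y) ** 2

def hessian_at_zero(k, eps=1e-4):
    def L(i, j, a, b):
        w = np.zeros(k)
        w[i] += a
        w[j] += b
        return loss(w)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            # Central-difference estimate of the (i, j) second derivative.
            H[i, j] = (L(i, j, eps, eps) - L(i, j, eps, -eps)
                       - L(i, j, -eps, eps) + L(i, j, -eps, -eps)) / (4 * eps ** 2)
    return H

print(hessian_at_zero(2))  # off-diagonal entries approx (x - y) * x = -1
print(hessian_at_zero(3))  # approx all zeros: a flat, high-order stationary point
```

For k = 2 the toy Hessian has eigenvalues of matching magnitude regardless of how many such blocks are stacked, which is one way to read the depth-invariant condition number claim; whether this extends to the non-linear case is exactly what the review disputes.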
Learning a Static Analyzer: A Case Study on a Toy Language
Static analyzers are meta-programs that analyze programs to detect potential errors or collect information. For example, they are used as security tools to detect potential buffer overflows. Also, they are used by compilers to verify that a program is well-formed and collect information to generate better code. In this paper, we address the following question: can a static analyzer be learned from data? More specifically, can we use deep learning to learn a static analyzer without the need for complicated feature engineering? We show that long short-term memory networks are able to learn a basic static analyzer for a simple toy language. However, pre-existing approaches based on feature engineering, hidden Markov models, or basic recurrent neural networks fail on such a simple problem. Finally, we show how to make such a tool usable by employing a language model to help the programmer detect where the reported errors are located.
The authors explore the idea of deep-learning a static analyzer. They do it with a toy programming language and a very simplified analysis problem: just checking whether all variables are initialized. While the idea is interesting and might be developed into a tool in the future, the toy task presented in this paper is too simple to warrant an ICLR submission. Just detecting whether a variable is initialized in a string is a toy algorithmic task, similar to the ones solved in a number of papers in recent years by models such as the Neural Turing Machine, Stack RNNs, the Neural GPU, or the Differentiable Neural Computer. All these architectures perform almost perfectly on a number of algorithmic tasks, so it is highly probable that they would also solve this one. Unfortunately, the authors only compare to much more basic models, such as HMMs. Since the code for many of the above-mentioned models is available online, a paper without these baselines is not ready for ICLR. Moreover, there is a risk that existing models already solve this problem very well, making the contribution unclear.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start by summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions for issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Does it provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will point them out and suggest how to rephrase them or make them clearer to the audience. Lastly, impact is important. I will attempt to predict how the work will impact the scientific community. This might include the applications of the work and how it might transform the way we study a certain topic. Finally, I will mention only issues that are critical, to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning a Static Analyzer: A Case Study on a Toy Language\nStatic analyzers are meta-programs that analyze programs to detect potential errors or collect information. For example, they are used as security tools to detect potential buffer overflows. Also, they are used by compilers to verify that a program is well-formed and collect information to generate better code. In this paper, we address the following question: can a static analyzer be learned from data? More specifically, can we use deep learning to learn a static analyzer without the need for complicated feature engineering? We show that long short-term memory networks are able to learn a basic static analyzer for a simple toy language. However, pre-existing approaches based on feature engineering, hidden Markov models, or basic recurrent neural networks fail on such a simple problem. Finally, we show how to make such a tool usable by employing a language model to help the programmer detect where the reported errors are located.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe authors explore the idea of deep-learning a static analyzer. They do it with a toy programming language and a very simplified analysis problem -- just checking if all variables are initalized. While the idea is interesting and might be developped into a tool in the future, the toy task presented in this paper is too simple to warrant an ICLR submission. Just detecting whether a variable is initialized in a string is a toy algorihtmic task, similar to the ones solved in a number of paper in recent years by models such as the Neural Turing Machine, Stack RNNs, Neural GPU, or Differentiable Neural Computer. All these architectures perform almost perfectly on a number of algorithmic tasks, so it is highly probable that they would also solve this one. Unluckily, the authors only compare to much more basic models, such as HMMs. Since the code for many of the above-mentioned models is available online, a paper without these baselines is not ready for ICLR. 
Moreover, there is a risk that existing models already solve this problem very well, making the contribution unclear.", "role": "assistant" } ]
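The analysis task described in the abstract quoted above -- flagging variables that are read before being assigned -- has a simple rule-based reference implementation, which is essentially what the LSTM is asked to approximate. Below is a minimal sketch; the `x = y + z` statement format is a hypothetical stand-in for the paper's toy language.

```python
# Toy reference analyzer for the task above: flag any variable that is read
# before it has been assigned. The statement format is a hypothetical
# stand-in for the paper's toy language, not the authors' actual grammar.
def check_initialization(program_lines):
    initialized = set()
    errors = []
    for lineno, line in enumerate(program_lines, start=1):
        target, _, expression = line.partition("=")
        for token in expression.split():
            if token.isidentifier() and token not in initialized:
                errors.append((lineno, token))  # use before initialization
        if target.strip():
            initialized.add(target.strip())
    return errors

print(check_initialization(["a = 1", "b = a + c"]))  # -> [(2, 'c')]
```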
[ "The authors explore the idea of deep-learning a static analyzer.", "They do it with a toy programming language and a very simplified analysis problem -- just checking if all variables are initalized.", "While the idea is interesting and might be developped into a tool in the future, the toy task presented in this paper is too simple to warrant an ICLR submission.", "Just detecting whether a variable is initialized in a string is a toy algorihtmic task, similar to the ones solved in a number of paper in recent years by models such as the Neural Turing Machine, Stack RNNs, Neural GPU, or Differentiable Neural Computer.", "All these architectures perform almost perfectly on a number of algorithmic tasks, so it is highly probable that they would also solve this one.", "Unluckily, the authors only compare to much more basic models, such as HMMs.", "Since the code for many of the above-mentioned models is available online, a paper without these baselines is not ready for ICLR.", "Moreover, there is a risk that existing models already solve this problem very well, making the contribution unclear." ]
{ "criticism": 4, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 8, "praise": 1, "presentation_and_reporting": 1, "results_and_discussion": 1, "suggestion_and_solution": 0, "total": 8 }
0.5
0
0.25
1
0.125
0.125
0.125
0
8
1
0
1
1
1
1
1
0
2.125
1.730439
0.394561
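For readers decoding the trailing numeric block of each record: in the records shown here, the eight leading fractions equal the criteria_count entries divided by total, the scalar after the eight binary flags equals the sum of those fractions, and the final two values add back up to that scalar. A quick sanity check against the record above (a sketch, not part of the dataset):

```python
# Recompute the derived fields of the record above from its criteria_count.
counts = {"criticism": 4, "example": 0, "importance_and_relevance": 2,
          "materials_and_methods": 8, "praise": 1,
          "presentation_and_reporting": 1, "results_and_discussion": 1,
          "suggestion_and_solution": 0}
total = 8
fractions = [c / total for c in counts.values()]
print([round(f, 6) for f in fractions])   # 0.5, 0.0, 0.25, 1.0, 0.125, ...
print(sum(fractions))                     # 2.125, the scalar after the flags
print(round(1.730439 + 0.394561, 6))      # 2.125 again: the last two fields sum back
```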
An Actor-critic Algorithm for Learning Rate Learning
Stochastic gradient descent (SGD), which updates the model parameters by adding a local gradient times a learning rate at each step, is widely used in model training of machine learning algorithms such as neural networks. It is observed that the models trained by SGD are sensitive to learning rates and good learning rates are problem specific. To avoid manually searching of learning rates, which is tedious and inefficient, we propose an algorithm to automatically learn learning rates using actor-critic methods from reinforcement learning (RL). In particular, we train a policy network called actor to decide the learning rate at each step during training, and a value network called critic to give feedback about quality of the decision (e.g., the goodness of the learning rate outputted by the actor) that the actor made. Experiments show that our method leads to good convergence of SGD and can prevent overfitting to a certain extent, resulting in better performance than human-designed competitors.
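As a rough illustration of the mechanism this abstract describes, the sketch below runs plain SGD on a toy quadratic loss while a stand-in actor maps the current loss to a positive step size. The actor's form and all names here are hypothetical, and the critic that would actually train the actor is deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)          # weights of a toy model with loss ||w||^2
actor_params = np.zeros(2)      # hypothetical one-layer actor: state -> log-lr

def actor_lr(state, params):
    # Map a scalar training state (here, the current loss) to a positive lr.
    return np.exp(params[0] * state + params[1] - 3.0)

for step in range(100):
    loss = float(w @ w)
    grad = 2.0 * w                       # gradient of the quadratic loss
    lr = actor_lr(loss, actor_params)    # the actor decides the learning rate
    w -= lr * grad                       # the plain SGD update from the abstract
    # A critic would score (state, lr) here, and the actor would be updated
    # against that feedback; that reinforcement-learning loop is omitted.

print(float(w @ w))  # the loss shrinks under the actor-chosen step sizes
```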
In the question response the authors mention and compare other works such as *Learning to Learn by Gradient Descent by Gradient Descent*, but the goal of the current work and that work is quite different. That work is a new form of optimization algorithm, which is not the case here. And Bayesian hyper-parameter optimization methods aim for multiple hyper-parameters, but this work only tunes one hyper-parameter. The network architecture used for the experiments on CIFAR-10 is quite outdated and the performances are much poorer than any work that has been published in the last few years. So the comparisons are not valid here; if the paper claims an advantage for their method, they should use a state-of-the-art network architecture and see if their claim still holds in that setting too. As discussed before, the extra cost of hyper-parameter optimizers is only justified if the method could push the SOTA results on multiple modern datasets. In summary, the general idea of having an actor-critic network as a meta-learner is an interesting idea. But the particular application proposed here does not seem to have any practical value and the reported results are very limited, so it's hard to draw any conclusion about the effectiveness of the method.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAn Actor-critic Algorithm for Learning Rate Learning\nStochastic gradient descent (SGD), which updates the model parameters by adding a local gradient times a learning rate at each step, is widely used in model training of machine learning algorithms such as neural networks. It is observed that the models trained by SGD are sensitive to learning rates and good learning rates are problem specific. To avoid manually searching of learning rates, which is tedious and inefficient, we propose an algorithm to automatically learn learning rates using actor-critic methods from reinforcement learning (RL). In particular, we train a policy network called actor to decide the learning rate at each step during training, and a value network called critic to give feedback about quality of the decision (e.g., the goodness of the learning rate outputted by the actor) that the actor made. Experiments show that our method leads to good convergence of SGD and can prevent overfitting to a certain extent, resulting in better performance than human-designed competitors.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nIn the question response the authors mention and compare other works such as *Learning to Learn by Gradient Descent by Gradient Descent*, but the goal of current work and that work is quite different. That work is a new form of optimization algorithm which is not the case here. And bayesian hyper-parameter optimization methods aim for multiple hyper-parameters but this work only tune one hyper-parameter. The network architecture used for the experiments on CIFAR-10 is quite outdated and the performances are much poorer than any work that has published in last few years. So the comparison are not valid here, as if the paper claim the advantage of their method, they should use the state of the art network architecture and see if their claim still holds in that setting too. 
As discussed before, the extra cost of hyper-parameter optimizers is only justified if the method could push the SOTA results on multiple modern datasets. In summary, the general idea of having an actor-critic network as a meta-learner is an interesting idea. But the particular application proposed here does not seem to have any practical value and the reported results are very limited, so it's hard to draw any conclusion about the effectiveness of the method.", "role": "assistant" } ]
[ "In the question response the authors mention and compare other works such as *Learning to Learn by Gradient Descent by Gradient Descent*, but the goal of current work and that work is quite different.", "That work is a new form of optimization algorithm which is not the case here.", "And bayesian hyper-parameter optimization methods aim for multiple hyper-parameters but this work only tune one hyper-parameter.", "The network architecture used for the experiments on CIFAR-10 is quite outdated and the performances are much poorer than any work that has published in last few years.", "So the comparison are not valid here, as if the paper claim the advantage of their method, they should use the state of the art network architecture and see if their claim still holds in that setting too.", "As discussed before, the extra cost of hyper-parameter optimizers are only justified if the method could push the SOTA results in multiple modern datasets.", "In summary, the general idea of having an actor-critic network as a meta-learner is an interesting idea.", "But the particular application proposed here does not seems to have any practical value and the reported results are very limited and it*s hard to draw any conclusion about the effectiveness of the method." ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 4, "materials_and_methods": 6, "praise": 1, "presentation_and_reporting": 0, "results_and_discussion": 3, "suggestion_and_solution": 1, "total": 8 }
0.375
0
0.5
0.75
0.125
0
0.375
0.125
8
1
0
1
1
1
0
1
1
2.25
1.855439
0.394561
Training Group Orthogonal Neural Networks with Privileged Information
Learning rich and diverse feature representation are always desired for deep convolutional neural networks (CNNs). Besides, when auxiliary annotations are available for specific data, simply ignoring them would be a great waste. In this paper, we incorporate these auxiliary annotations as privileged information and propose a novel CNN model that is able to maximize inherent diversity of a CNN model such that the model can learn better feature representation with a stronger generalization ability. More specifically, we propose a group orthogonal convolutional neural network (GoCNN) to learn features from foreground and background in an orthogonal way by exploiting privileged information for optimization, which automatically emphasizes feature diversity within a single model. Experiments on two benchmark datasets, ImageNet and PASCAL VOC, well demonstrate the effectiveness and high generalization ability of our proposed GoCNN models.
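One plausible reading of the orthogonality idea in this abstract is a penalty on the cross-covariance between a foreground feature group and a background feature group, sketched below in numpy. The exact GoCNN loss, its use of privileged segmentation masks, and its background-suppression term may all differ from this stand-in.

```python
import numpy as np

def group_orthogonality_penalty(features, fg_dim):
    # features: (batch, d) activations; the first fg_dim columns form the
    # "foreground" group and the remaining columns the "background" group.
    fg = features[:, :fg_dim] - features[:, :fg_dim].mean(axis=0)
    bg = features[:, fg_dim:] - features[:, fg_dim:].mean(axis=0)
    cross_cov = fg.T @ bg / features.shape[0]
    return float(np.sum(cross_cov ** 2))  # push cross-group covariance to zero

x = np.random.default_rng(1).normal(size=(32, 16))
print(group_orthogonality_penalty(x, fg_dim=8))  # add to the training loss
```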
This paper proposes to learn groups of orthogonal features in a convnet by penalizing correlation among features in each group. The technique is applied in the setting of image classification with “privileged information” in the form of foreground segmentation masks, where the model is trained to learn orthogonal groups of foreground and background features using the correlation penalty and an additional “background suppression” term. Pros: Proposes a “group-wise model diversity” loss term which is novel, to my knowledge. The use of foreground segmentation masks to improve image classification is also novel. The method is evaluated on two standard and relatively large-scale vision datasets: ImageNet and PASCAL VOC 2012. Cons: The evaluation is lacking. There should be a baseline that leaves out the background suppression term, so readers know how much that term is contributing to the performance vs. the group orthogonal term. The use of the background suppression term is also confusing to me -- it seems redundant, as the group orthogonality term should already serve to suppress the use of background features by the foreground feature extractor. It would be nice to see the results with “Incomplete Privileged Information” on the full ImageNet dataset (rather than just 10% of it) with the privileged information included for the 10% of images where it’s available. This would verify that the method and use of segmentation masks remains useful even in the regime of more labeled classification data. The presentation overall is a bit confusing and difficult to follow, for me. For example, Section 4.2 is titled “A Unified Architecture: GoCNN”, yet it is not an overview of the method as a whole, but a list of specific implementation details (even the very first sentence). Minor: calling eq 3 a “regression loss” and writing “||0 - x||” rather than just “||x||” is not necessary and makes understanding more difficult -- I’ve never seen a norm regularization term written this way or described as a “regression to 0”. Minor: in fig. 1 I think the FG and BG suppression labels are swapped: e.g., the “suppress foreground” mask has 1s in the FG and 0s in the BG (which would suppress the BG, not the FG). An additional question: why are the results in Table 4 with 100% privileged information different from those in Table 1-2? Are these not the same setting? The ideas presented in this paper are novel and show some promise, but are currently not sufficiently ablated for readers to understand what aspects of the method are important. Besides additional experiments, the paper could also use some reorganization and revision for clarity. =============== Edit (1/29/17): after considering the latest revisions -- particularly the full ImageNet evaluation results reported in Table 5 demonstrating that the background segmentation “privileged information” is beneficial even with the full labeled ImageNet dataset -- I've upgraded my rating from 4 to 6. (I'll reiterate a very minor point about Figure 1 though: I still think the “0” and “1” labels in the top part of the figures should be swapped to match the other labels. e.g., the topmost path in figure 1a, with the text “suppress foreground”, currently has 0 in the background and 1 in the foreground, when one would want the reverse of this to suppress the foreground.)
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTraining Group Orthogonal Neural Networks with Privileged Information\nLearning rich and diverse feature representation are always desired for deep convolutional neural networks (CNNs). Besides, when auxiliary annotations are available for specific data, simply ignoring them would be a great waste. In this paper, we incorporate these auxiliary annotations as privileged information and propose a novel CNN model that is able to maximize inherent diversity of a CNN model such that the model can learn better feature representation with a stronger generalization ability. More specifically, we propose a group orthogonal convolutional neural network (GoCNN) to learn features from foreground and background in an orthogonal way by exploiting privileged information for optimization, which automatically emphasizes feature diversity within a single model. Experiments on two benchmark datasets, ImageNet and PASCAL VOC, well demonstrate the effectiveness and high generalization ability of our proposed GoCNN models.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes to learn groups of orthogonal features in a convnet by penalizing correlation among features in each group. The technique is applied in the setting of image classification with “privileged information” in the form of foreground segmentation masks, where the model is trained to learn orthogonal groups of foreground and background features using the correlation penalty and an additional “background suppression” term. Pros: Proposes a “group-wise model diversity” loss term which is novel, to my knowledge. The use of foreground segmentation masks to improve image classification is also novel. The method is evaluated on two standard and relatively large-scale vision datasets: ImageNet and PASCAL VOC 2012. Cons: The evaluation is lacking. There should be a baseline that leaves out the background suppression term, so readers know how much that term is contributing to the performance vs. the group orthogonal term. 
The use of the background suppression term is also confusing to me -- it seems redundant, as the group orthogonality term should already serve to suppress the use of background features by the foreground feature extractor. It would be nice to see the results with “Incomplete Privileged Information” on the full ImageNet dataset (rather than just 10% of it) with the privileged information included for the 10% of images where it’s available. This would verify that the method and use of segmentation masks remains useful even in the regime of more labeled classification data. The presentation overall is a bit confusing and difficult to follow, for me. For example, Section 4.2 is titled “A Unified Architecture: GoCNN”, yet it is not an overview of the method as a whole, but a list of specific implementation details (even the very first sentence). Minor: calling eq 3 a “regression loss” and writing “||0 - x||” rather than just “||x||” is not necessary and makes understanding more difficult -- I’ve never seen a norm regularization term written this way or described as a “regression to 0”. Minor: in fig. 1 I think the FG and BG suppression labels are swapped: e.g., the “suppress foreground” mask has 1s in the FG and 0s in the BG (which would suppress the BG, not the FG). An additional question: why are the results in Table 4 with 100% privileged information different from those in Table 1-2? Are these not the same setting? The ideas presented in this paper are novel and show some promise, but are currently not sufficiently ablated for readers to understand what aspects of the method are important. Besides additional experiments, the paper could also use some reorganization and revision for clarity. =============== Edit (1/29/17): after considering the latest revisions -- particularly the full ImageNet evaluation results reported in Table 5 demonstrating that the background segmentation “privileged information” is beneficial even with the full labeled ImageNet dataset -- I've upgraded my rating from 4 to 6. (I'll reiterate a very minor point about Figure 1 though: I still think the “0” and “1” labels in the top part of the figures should be swapped to match the other labels. e.g., the topmost path in figure 1a, with the text “suppress foreground”, currently has 0 in the background and 1 in the foreground, when one would want the reverse of this to suppress the foreground.)", "role": "assistant" } ]
[ "This paper proposes to learn groups of orthogonal features in a convnet by penalizing correlation among features in each group.", "The technique is applied in the setting of image classification with “privileged information” in the form of foreground segmentation masks, where the model is trained to learn orthogonal groups of foreground and background features using the correlation penalty and an additional “background suppression” term.", "Pros: Proposes a “group-wise model diversity” loss term which is novel, to my knowledge.", "The use of foreground segmentation masks to improve image classification is also novel.", "The method is evaluated on two standard and relatively large-scale vision datasets: ImageNet and PASCAL VOC 2012.", "Cons: The evaluation is lacking.", "There should be a baseline that leaves out the background suppression term, so readers know how much that term is contributing to the performance vs. the group orthogonal term.", "The use of the background suppression term is also confusing to me -- it seems redundant, as the group orthogonality term should already serve to suppress the use of background features by the foreground feature extractor.", "It would be nice to see the results with “Incomplete Privileged Information” on the full ImageNet dataset (rather than just 10% of it) with the privileged information included for the 10% of images where it’s available.", "This would verify that the method and use of segmentation masks remains useful even in the regime of more labeled classification data.", "The presentation overall is a bit confusing and difficult to follow, for me.", "For example, Section 4.2 is titled “A Unified Architecture: GoCNN”, yet it is not an overview of the method as a whole, but a list of specific implementation details (even the very first sentence).", "Minor: calling eq 3 a “regression loss” and writing “||0 - x||” rather than just “||x||” is not necessary and makes understanding more difficult -- I’ve never seen a norm regularization term written this way or described as a “regression to 0”.", "Minor: in fig.", "1 I think the FG and BG suppression labels are swapped: e.g., the “suppress foreground” mask has 1s in the FG and 0s in the BG (which would suppress the BG, not the FG).", "An additional question: why are the results in Table 4 with 100% privileged information different from those in Table 1-2?", "Are these not the same setting?", "The ideas presented in this paper are novel and show some promise, but are currently not sufficiently ablated for readers to understand what aspects of the method are important.", "Besides additional experiments, the paper could also use some reorganization and revision for clarity.", "=============== Edit (1/29/17): after considering the latest revisions -- particularly the full ImageNet evaluation results reported in Table 5 demonstrating that the background segmentation *privileged information* is beneficial even with the full labeled ImageNet dataset -- I*ve upgraded my rating from 4 to 6.", "(I*ll reiterate a very minor point about Figure 1 though: I still think the *0* and *1* labels in the top part of the figures should be swapped to match the other labels.", "e.g., the topmost path in figure 1a, with the text *suppress foreground*, currently has 0 in the background and 1 in the foreground, when one would want the reverse of this to suppress the foreground.)" ]
{ "criticism": 6, "example": 5, "importance_and_relevance": 2, "materials_and_methods": 13, "praise": 1, "presentation_and_reporting": 11, "results_and_discussion": 7, "suggestion_and_solution": 5, "total": 22 }
0.272727
0.227273
0.090909
0.590909
0.045455
0.5
0.318182
0.227273
22
1
1
1
1
1
1
1
1
2.272727
0.99435
1.278377
Progressive Attention Networks for Visual Attribute Prediction
We propose a novel attention model which can accurately attend to target objects of various scales and shapes in images. The model is trained to gradually suppress irrelevant regions in an input image via a progressive attentive process over multiple layers of a convolutional neural network. The attentive process in each layer determines whether to pass or suppress features at certain spatial locations for use in the next layer. We further employ local contexts to estimate attention probability at each location since it is difficult to infer accurate attention by observing a feature vector from a single location only. The experiments on synthetic and real datasets show that the proposed attention network outperforms traditional attention methods in visual attribute prediction tasks.
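The layer-wise pass-or-suppress process in this abstract can be pictured as a stack of multiplicative sigmoid masks, one per layer, each computed from the current feature map. The numpy sketch below is schematic: the channel-mean scorer is a stand-in for the paper's local-context attention estimator.

```python
import numpy as np

def progressive_attention(feature_map, score_fns):
    # feature_map: (H, W, C); score_fns: one scorer per layer, mapping the
    # current features to an (H, W) attention logit map. Each layer's soft
    # mask suppresses locations before the next layer sees them.
    x = feature_map
    for score in score_fns:
        logits = score(x)
        mask = 1.0 / (1.0 + np.exp(-logits))  # sigmoid: pass vs. suppress
        x = x * mask[..., None]               # gate every channel at once
    return x

x = np.random.default_rng(0).normal(size=(8, 8, 4))
scorers = [lambda f: f.mean(axis=-1) for _ in range(3)]  # stand-in scorers
print(progressive_attention(x, scorers).shape)           # (8, 8, 4)
```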
This paper presents a hierarchical attention model that uses multiple stacked layers of soft attention in a convnet. The authors provide results on a synthetic dataset in addition to doing attribute prediction on the Visual Genome dataset. Overall I think this is a well executed paper, with good experimental results and nice qualitative visualizations. The main thing I believe it is missing would be experiments on a dataset like VQA which would help better place the significance of this work in context of other approaches. An important missing citation is Graves 2013 which had an early version of the attention model. Minor typo: *It confins possible attributes..* -> It confines.. *ImageNet (Deng et al., 2009), is used, and three additional* -> *.., are used,*
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nProgressive Attention Networks for Visual Attribute Prediction\nWe propose a novel attention model which can accurately attend to target objects of various scales and shapes in images. The model is trained to gradually suppress irrelevant regions in an input image via a progressive attentive process over multiple layers of a convolutional neural network. The attentive process in each layer determines whether to pass or suppress features at certain spatial locations for use in the next layer. We further employ local contexts to estimate attention probability at each location since it is difficult to infer accurate attention by observing a feature vector from a single location only. The experiments on synthetic and real datasets show that the proposed attention network outperforms traditional attention methods in visual attribute prediction tasks.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper presents a hierarchical attention model that uses multiple stacked layers of soft attention in a convnet. The authors provide results on a synthetic dataset in addition to doing attribute prediction on the Visual Genome dataset. Overall I think this is a well executed paper, with good experimental results and nice qualitative visualizations. The main thing I believe it is missing would be experiments on a dataset like VQA which would help better place the significance of this work in context of other approaches. An important missing citation is Graves 2013 which had an early version of the attention model. Minor typo: *It confins possible attributes..* -> It confines.. *ImageNet (Deng et al., 2009), is used, and three additional* -> *.., are used,*", "role": "assistant" } ]
[ "This paper presents a hierarchical attention model that uses multiple stacked layers of soft attention in a convnet.", "The authors provide results on a synthetic dataset in addition to doing attribute prediction on the Visual Genome dataset.", "Overall I think this is a well executed paper, with good experimental results and nice qualitative visualizations.", "The main thing I believe it is missing would be experiments on a dataset like VQA which would help better place the significance of this work in context of other approaches.", "An important missing citation is Graves 2013 which had an early version of the attention model.", "Minor typo: *It confins possible attributes..* -> It confines.. *ImageNet (Deng et al., 2009), is used, and three additional* -> *.., are used,*" ]
{ "criticism": 1, "example": 1, "importance_and_relevance": 1, "materials_and_methods": 5, "praise": 1, "presentation_and_reporting": 2, "results_and_discussion": 2, "suggestion_and_solution": 1, "total": 6 }
0.166667
0.166667
0.166667
0.833333
0.166667
0.333333
0.333333
0.166667
6
1
1
1
1
1
1
1
1
2.333333
1.559994
0.773339
Fuzzy paraphrases in learning word representations with a lexicon
A synonym of a polysemous word is usually only the paraphrase of one sense among many. When lexicons are used to improve vector-space word representations, such paraphrases are unreliable and bring noise to the vector-space. The prior works use a coefficient to adjust the overall learning of the lexicons. They regard the paraphrases equally. In this paper, we propose a novel approach that regards the paraphrases diversely to alleviate the adverse effects of polysemy. We annotate each paraphrase with a degree of reliability. The paraphrases are randomly eliminated according to the degrees when our model learns word representations. In this way, our approach drops the unreliable paraphrases, keeping more reliable paraphrases at the same time. The experimental results show that the proposed method improves the word vectors. Our approach is an attempt to address the polysemy problem keeping one vector per word. It makes the approach easier to use than the conventional methods that estimate multiple vectors for a word. Our approach also outperforms the prior works in the experiments.
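The random elimination step in this abstract amounts to keeping each annotated paraphrase, on every training pass, with probability tied to its reliability degree. A minimal sketch with hypothetical reliability values:

```python
import random

def sample_paraphrases(paraphrases, rng=None):
    # paraphrases: (word, reliability) pairs with reliability in [0, 1].
    # A paraphrase survives a pass with probability equal to its reliability,
    # so unreliable senses of a polysemous word are usually dropped.
    rng = rng or random.Random(0)
    return [w for w, reliability in paraphrases if rng.random() < reliability]

bank_synonyms = [("riverside", 0.2), ("depository", 0.9)]  # hypothetical degrees
print(sample_paraphrases(bank_synonyms))
```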
This paper introduces the concept of fuzzy paraphrases to aid in the learning of distributed word representations from a corpus augmented by a lexicon or ontology. Sometimes polysemy is context-dependent, but prior approaches have neglected this fact when incorporating external paraphrase information during learning. The main idea is to introduce a function that essentially judges the context-sensitivity of paraphrase candidates, down-weighting those candidates that depend strongly on context. This function is inferred from bilingual translation agreement. The main argumentation leading to the model selection is intuitive, and I believe that the inclusion of good paraphrases and the elimination of bad paraphrases during training should in principle improve word representation quality. However, the main questions are how well the proposed method achieves this goal, and, even if it achieves it well, whether it makes much difference in practical terms. Regarding the first question, I am not entirely convinced that the parameterization of the control function f(x_ij) is optimal. It would have been nice to see some experiments investigating different choices, in particular some baselines where the effect of f is diminished (so that it reduces to f=1 in the limit) would have been interesting. I also feel like there would be a lot to gain from having f be a function of the nearby word embeddings, though this would obviously incur a significant slowdown. (See for example *Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space* by Neelakantan et al, which should probably be cited.) As it stands, the experimental results do not clearly distinguish the fuzzy paraphrase approach from prior work, i.e. tables 3 and 4 do not show major trends one way or the other. Regarding the second question, it is hard to draw many conclusions from analogy tasks alone, especially when effects unrelated to good/bad paraphrasing such as corpus size/content, window size, vocabulary size, etc., can have an outsize effect on performance. Overall, I think this is a good paper presenting a sensible idea, but I am not convinced by the experiments that the specific approach is achieving its goal. With some improved experiments and analysis, I would wholeheartedly recommend this paper for acceptance; as it stands, I am on the fence.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nFuzzy paraphrases in learning word representations with a lexicon\nA synonym of a polysemous word is usually only the paraphrase of one sense among many. When lexicons are used to improve vector-space word representations, such paraphrases are unreliable and bring noise to the vector-space. The prior works use a coefficient to adjust the overall learning of the lexicons. They regard the paraphrases equally. In this paper, we propose a novel approach that regards the paraphrases diversely to alleviate the adverse effects of polysemy. We annotate each paraphrase with a degree of reliability. The paraphrases are randomly eliminated according to the degrees when our model learns word representations. In this way, our approach drops the unreliable paraphrases, keeping more reliable paraphrases at the same time. The experimental results show that the proposed method improves the word vectors. Our approach is an attempt to address the polysemy problem keeping one vector per word. It makes the approach easier to use than the conventional methods that estimate multiple vectors for a word. Our approach also outperforms the prior works in the experiments.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper introduces the concept of fuzzy paraphrases to aid in the learning of distributed word representations from a corpus augmented by a lexicon or ontology. Sometimes polysemy is context-dependent, but prior approaches have neglected this fact when incorporating external paraphrase information during learning. The main idea is to introduce a function that essentially judges the context-sensitivity of paraphrase candidates, down-weighting those candidates that depend strongly on context. This function is inferred from bilingual translation agreement. The main argumentation leading to the model selection is intuitive, and I believe that the inclusion of good paraphrases and the elimination of bad paraphrases during training should in principle improve word representation quality. 
However, the main questions are how well the proposed method achieves this goal, and, even if it achieves it well, whether it makes much difference in practical terms. Regarding the first question, I am not entirely convinced that the parameterization of the control function f(x_ij) is optimal. It would have been nice to see some experiments investigating different choices, in particular some baselines where the effect of f is diminished (so that it reduces to f=1 in the limit) would have been interesting. I also feel like there would be a lot to gain from having f be a function of the nearby word embeddings, though this would obviously incur a significant slowdown. (See for example *Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space* by Neelakantan et al, which should probably be cited.) As it stands, the experimental results do not clearly distinguish the fuzzy paraphrase approach from prior work, i.e. tables 3 and 4 do not show major trends one way or the other. Regarding the second question, it is hard to draw many conclusions from analogy tasks alone, especially when effects unrelated to good/bad paraphrasing such as corpus size/content, window size, vocabulary size, etc., can have an outsize effect on performance. Overall, I think this is a good paper presenting a sensible idea, but I am not convinced by the experiments that the specific approach is achieving its goal. With some improved experiments and analysis, I would wholeheartedly recommend this paper for acceptance; as it stands, I am on the fence.", "role": "assistant" } ]
[ "This paper introduces the concept of fuzzy paraphrases to aid in the learning of distributed word representations from a corpus augmented by a lexicon or ontology.", "Sometimes polysemy is context-dependent, but prior approaches have neglected this fact when incorporating external paraphrase information during learning.", "The main idea is to introduce a function that essentially judges the context-sensitivity of paraphrase candidates, down-weighting those candidates that depend strongly on context.", "This function is inferred from bilingual translation agreement.", "The main argumentation leading to the model selection is intuitive, and I believe that the inclusion of good paraphrases and the elimination of bad paraphrases during training should in principle improve word representation quality.", "However, the main questions are how well the proposed method achieves this goal, and, even if it achieves it well, whether it makes much difference in practical terms.", "Regarding the first question, I am not entirely convinced that the parameterization of the control function f(x_ij) is optimal.", "It would have been nice to see some experiments investigating different choices, in particular some baselines where the effect of f is diminished (so that it reduces to f=1 in the limit) would have been interesting.", "I also feel like there would be a lot to gain from having f be a function of the nearby word embeddings, though this would obvious incur a significant slowdown.", "(See for example *Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space* by Neelakantan et al, which should probably be cited.)", "As it stands, the experimental results do not clearly distinguish the fuzzy paraphrase approach from prior work, i.e.", "tables 3 and 4 do not show major trends one way or the other.", "Regarding the second question, it is hard to draw many conclusions from analogy tasks alone, especially when effects unrelated to good/bad paraphrasing such as corpus size/content, window size, vocabulary size, etc., can have an outsize effect on performance.", "Overall, I think this is a good paper presenting a sensible idea, but I am not convinced by the experiments that the specific approach is achieving its goal.", "With some improved experiments and analysis, I would wholeheartedly recommend this paper for acceptance; as it stands, I am on the fence." ]
{ "criticism": 6, "example": 1, "importance_and_relevance": 2, "materials_and_methods": 10, "praise": 4, "presentation_and_reporting": 4, "results_and_discussion": 5, "suggestion_and_solution": 4, "total": 15 }
0.4
0.066667
0.133333
0.666667
0.266667
0.266667
0.333333
0.266667
15
1
1
1
1
1
1
1
1
2.4
2.33687
0.06313
Sampling Generative Networks
We introduce several techniques for sampling and visualizing the latent spaces of generative models. Replacing linear interpolation with spherical linear interpolation prevents diverging from a model's prior distribution and produces sharper samples. J-Diagrams and MINE grids are introduced as visualizations of manifolds created by analogies and nearest neighbors. We demonstrate two new techniques for deriving attribute vectors: bias-corrected vectors with data replication and synthetic vectors with data augmentation. Binary classification using attribute vectors is presented as a technique supporting quantitative analysis of the latent space. Most techniques are intended to be independent of model type and examples are shown on both Variational Autoencoders and Generative Adversarial Networks.
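For reference, spherical linear interpolation between two latent vectors is usually written as below; with a Gaussian prior, slerp keeps intermediate points at a norm typical of prior samples, whereas plain linear interpolation pulls them toward the origin. A numpy sketch:

```python
import numpy as np

def slerp(p0, p1, t):
    # Spherical linear interpolation between latent vectors p0 and p1.
    omega = np.arccos(np.clip(np.dot(p0 / np.linalg.norm(p0),
                                     p1 / np.linalg.norm(p1)), -1.0, 1.0))
    so = np.sin(omega)
    if so < 1e-8:                       # nearly parallel: fall back to lerp
        return (1.0 - t) * p0 + t * p1
    return np.sin((1.0 - t) * omega) / so * p0 + np.sin(t * omega) / so * p1

rng = np.random.default_rng(0)
z0, z1 = rng.normal(size=100), rng.normal(size=100)
print(np.linalg.norm(slerp(z0, z1, 0.5)))  # stays near sqrt(100) = 10
```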
This paper proposes a variety of techniques for visualizing learned generative models, focussing specifically on VAE and GAN models. This paper is somewhat challenging to assess since it doesn't propose a new algorithm, model, application etc. On the one hand these techniques will be highly relevant to the generative modeling community and I think this paper deserves a wide audience. The techniques proposed are simple, well explained, and of immediate use to those working on generative models. However, I'm not sure the paper is appropriate for an ICLR conference track as it doesn't provide any greater theoretical insights into sampling generative models and there are no comparisons / quantitative evaluations of the techniques proposed. Overall, I'm very much on the fence since I think the techniques are useful and this paper should be read by those interested in generative modeling. I would be willing to increase my score if the author could present a case for why ICLR is an appropriate venue for this work.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSampling Generative Networks\nWe introduce several techniques for sampling and visualizing the latent spaces of generative models. Replacing linear interpolation with spherical linear interpolation prevents diverging from a model*s prior distribution and produces sharper samples. J-Diagrams and MINE grids are introduced as visualizations of manifolds created by analogies and nearest neighbors. We demonstrate two new techniques for deriving attribute vectors: bias-corrected vectors with data replication and synthetic vectors with data augmentation. Binary classification using attribute vectors is presented as a technique supporting quantitative analysis of the latent space. Most techniques are intended to be independent of model type and examples are shown on both Variational Autoencoders and Generative Adversarial Networks.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes a variety of techniques for visualizing learned generative models, focussing specifically on VAE and GAN models. This paper is somewhat challenging to assess since it doesn*t propose a new algorithm, model, application etc. On the one hand these techniques will be highly relevant to the generative modeling community and I think this paper deserves a wide audience. The techniques proposed are simple, well explained, and of immediate use to those working on generative models. However, I*m not sure the paper is appropriate for an ICLR conference track as it doesn*t provide any greater theoretical insights into sampling generative models and there are no comparisons / quantitative evaluations of the techniques proposed. Overall, I*m very much on the fence since I think the techniques are useful and this paper should be read by those interested in generating modeling. I would be willing to increase my core if the author could present a case for why ICLR is an appropriate venue for this work.", "role": "assistant" } ]
[ "This paper proposes a variety of techniques for visualizing learned generative models, focussing specifically on VAE and GAN models.", "This paper is somewhat challenging to assess since it doesn*t propose a new algorithm, model, application etc.", "On the one hand these techniques will be highly relevant to the generative modeling community and I think this paper deserves a wide audience.", "The techniques proposed are simple, well explained, and of immediate use to those working on generative models.", "However, I*m not sure the paper is appropriate for an ICLR conference track as it doesn*t provide any greater theoretical insights into sampling generative models and there are no comparisons / quantitative evaluations of the techniques proposed.", "Overall, I*m very much on the fence since I think the techniques are useful and this paper should be read by those interested in generating modeling.", "I would be willing to increase my core if the author could present a case for why ICLR is an appropriate venue for this work." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 6, "materials_and_methods": 6, "praise": 3, "presentation_and_reporting": 2, "results_and_discussion": 1, "suggestion_and_solution": 2, "total": 7 }
0.285714
0
0.857143
0.857143
0.428571
0.285714
0.142857
0.285714
7
1
0
1
1
1
1
1
1
3.142857
2.57469
0.568167
Semantic Noise Modeling for Better Representation Learning
Latent representation learned from multi-layered neural networks via hierarchical feature abstraction enables recent success of deep learning. Under the deep learning framework, generalization performance highly depends on the learned latent representation. In this work, we propose a novel latent space modeling method to learn better latent representation. We designed a neural network model based on the assumption that good base representation for supervised tasks can be attained by maximizing the sum of hierarchical mutual informations between the input, latent, and output variables. From this base model, we introduce a semantic noise modeling method which enables semantic perturbation on the latent space to enhance the representational power of learned latent feature. During training, latent vector representation can be stochastically perturbed by a modeled additive noise while preserving its original semantics. It implicitly brings the effect of semantic augmentation on the latent space. The proposed model can be easily learned by back-propagation with common gradient-based optimization algorithms. Experimental results show that the proposed method helps to achieve performance benefits against various previous approaches. We also provide the empirical analyses for the proposed latent space modeling method including t-SNE visualization.
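The perturbation step in this abstract can be pictured as adding modeled noise to a latent code during training, an implicit augmentation in latent space. The sketch below substitutes plain isotropic Gaussian noise; the paper's point is that the noise is modeled so that the perturbed code keeps its original semantics, which this stand-in does not enforce.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_latent(z, scale=0.1):
    # Additive noise on a latent code; isotropic Gaussian is a stand-in for
    # the paper's learned, semantics-preserving noise model.
    return z + scale * rng.normal(size=z.shape)

z = rng.normal(size=64)      # latent code produced by some encoder
z_aug = perturb_latent(z)    # implicit augmentation in the latent space
print(float(np.linalg.norm(z_aug - z)))
```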
The paper presents a new regularization technique for neural networks, which seeks to maximize correlation between input variables, latent variables and outputs. This is achieved by defining a measure of total correlation between these variables and decomposing it in terms of entropies and conditional entropies. Authors explain that they do not actually maximize the total correlation, but a lower-bound of it that ignores simple entropy terms, and only considers conditional entropies. It is not clearly explained what the rationale is for discarding these entropy terms. Entropy measures apply to probability distributions (i.e. this implies that the variables in the model should be random). The link between the conditional entropy formulation and the reconstruction error is not made explicit. In order to link these two views, I would have expected, for example, a noise model for the units of the network. Later in the paper, it is claimed that the original ladder network is not suitable for supervised learning with small samples, and some empirical results seek to demonstrate this. But a more theoretical explanation of why this is the case would have been welcome. The MNIST results are shown for a particular convolutional neural network architecture; however, most ladder network results for this dataset have been produced on standard fully-connected architectures. Results for such a fully-connected architecture would have been desirable for better comparability with the original ladder network results.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSemantic Noise Modeling for Better Representation Learning\nLatent representation learned from multi-layered neural networks via hierarchical feature abstraction enables recent success of deep learning. Under the deep learning framework, generalization performance highly depends on the learned latent representation. In this work, we propose a novel latent space modeling method to learn better latent representation. We designed a neural network model based on the assumption that good base representation for supervised tasks can be attained by maximizing the sum of hierarchical mutual informations between the input, latent, and output variables. From this base model, we introduce a semantic noise modeling method which enables semantic perturbation on the latent space to enhance the representational power of learned latent feature. During training, latent vector representation can be stochastically perturbed by a modeled additive noise while preserving its original semantics. It implicitly brings the effect of semantic augmentation on the latent space. The proposed model can be easily learned by back-propagation with common gradient-based optimization algorithms. Experimental results show that the proposed method helps to achieve performance benefits against various previous approaches. We also provide the empirical analyses for the proposed latent space modeling method including t-SNE visualization.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper presents a new regularization technique for neural networks, which seeks to maximize correlation between input variables, latent variables and outputs. This is achieved by defining a measure of total correlation between these variables and decomposing it in terms of entropies and conditional entropies. Authors explain that they do not actually maximize the total correlation, but a lower-bound of it that ignores simple entropy terms, and only considers conditional entropies. 
It is not clearly explained what is the rationale for discarding these entropy terms. Entropies measures are applying to probability distributions (i.e. this implies that the variables in the model should be random). The link between the conditional entropy formulation and the reconstruction error is not made explicit. In order to link these two views, I would have expected, for example, a noise model for the units of the network. Later in the paper, it is claimed that the original ladder network is not suitable for supervised learning with small samples, and some empirical results seek to demonstrate this. But a more theoretical explanation why it is the case would have been welcome. The MNIST results are shown for a particular convolutional neural network architecture, however, most ladder network results for this dataset have been produced on standard fully-connected architectures. Results for such neural network architecture would have been desirable for more comparability with original ladder neural network results.", "role": "assistant" } ]
[ "The paper presents a new regularization technique for neural networks, which seeks to maximize correlation between input variables, latent variables and outputs.", "This is achieved by defining a measure of total correlation between these variables and decomposing it in terms of entropies and conditional entropies.", "Authors explain that they do not actually maximize the total correlation, but a lower-bound of it that ignores simple entropy terms, and only considers conditional entropies.", "It is not clearly explained what is the rationale for discarding these entropy terms.", "Entropies measures are applying to probability distributions (i.e.", "this implies that the variables in the model should be random).", "The link between the conditional entropy formulation and the reconstruction error is not made explicit.", "In order to link these two views, I would have expected, for example, a noise model for the units of the network.", "Later in the paper, it is claimed that the original ladder network is not suitable for supervised learning with small samples, and some empirical results seek to demonstrate this.", "But a more theoretical explanation why it is the case would have been welcome.", "The MNIST results are shown for a particular convolutional neural network architecture, however, most ladder network results for this dataset have been produced on standard fully-connected architectures.", "Results for such neural network architecture would have been desirable for more comparability with original ladder neural network results." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 11, "praise": 0, "presentation_and_reporting": 1, "results_and_discussion": 8, "suggestion_and_solution": 4, "total": 12 }
0.166667
0
0.083333
0.916667
0
0.083333
0.666667
0.333333
12
1
0
1
1
0
1
1
1
2.25
2.234218
0.015782
Ternary Weight Decomposition and Binary Activation Encoding for Fast and Compact Neural Network
This paper aims to reduce the test-time computational load of a deep neural network. Unlike previous methods which factorize a weight matrix into multiple real-valued matrices, our method factorizes both weights and activations into integer and noninteger components. In our method, the real-valued weight matrix is approximated by a multiplication of a ternary matrix and a real-valued coefficient matrix. Since the ternary matrix consists of three integer values, {-1, 0, +1}, it only consumes 2 bits per element. At test-time, an activation vector passed from the previous layer is also transformed into a weighted sum of binary vectors, {-1, +1}, which enables fast feed-forward propagation based on simple logical operations: AND, XOR, and bit count. This makes it easier to deploy a deep network on low-power CPUs or to design specialized hardware. In our experiments, we tested our method on three different networks: a CNN for handwritten digits, the VGG-16 model for ImageNet classification, and VGG-Face for large-scale face recognition. In particular, when we applied our method to three fully connected layers in the VGG-16, 15x acceleration and memory compression up to 5.2% were achieved with only a 1.43% increase in the top-5 error. Our experiments also revealed that compressing convolutional layers can accelerate inference of the entire network in exchange for a slight increase in error.
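As a concrete illustration of the decomposition above, here is a toy alternating-minimization sketch in numpy; the thresholding rule and the alternation are illustrative assumptions, not the paper's actual fitting algorithm:

```python
import numpy as np

def ternary_decompose(W, k, iters=20):
    """Approximate W (m x n) as B @ C with ternary B in {-1, 0, +1}^(m x k)
    and a real-valued coefficient matrix C (k x n)."""
    rng = np.random.default_rng(0)
    B = rng.choice([-1.0, 0.0, 1.0], size=(W.shape[0], k))
    for _ in range(iters):
        C = np.linalg.lstsq(B, W, rcond=None)[0]       # fix B, solve for C
        G = W @ np.linalg.pinv(C)                      # fix C, propose new B
        B = np.sign(G) * (np.abs(G) > 0.5 * np.abs(G).mean())
    return B, C

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 32))
B, C = ternary_decompose(W, k=48)
print(np.linalg.norm(W - B @ C) / np.linalg.norm(W))   # relative approximation error
```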
This paper addresses reducing the test-time computational load of DNNs. Another factorization approach is proposed and shows good results. While the comparison to other methods is not comprehensive, the paper provides good insights.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTernary Weight Decomposition and Binary Activation Encoding for Fast and Compact Neural Network\nThis paper aims to reduce test-time computational load of a deep neural network. Unlike previous methods which factorize a weight matrix into multiple real-valued matrices, our method factorizes both weights and activations into integer and noninteger components. In our method, the real-valued weight matrix is approximated by a multiplication of a ternary matrix and a real-valued co-efficient matrix. Since the ternary matrix consists of three integer values, {-1, 0, +1}, it only consumes 2 bits per element. At test-time, an activation vector that passed from a previous layer is also transformed into a weighted sum of binary vectors, {-1, +1}, which enables fast feed-forward propagation based on simple logical operations: AND, XOR, and bit count. This makes it easier to deploy a deep network on low-power CPUs or to design specialized hardware. In our experiments, we tested our method on three different networks: a CNN for handwritten digits, VGG-16 model for ImageNet classification, and VGG-Face for large-scale face recognition. In particular, when we applied our method to three fully connected layers in the VGG-16, 15x acceleration and memory compression up to 5.2% were achieved with only a 1.43% increase in the top-5 error. Our experiments also revealed that compressing convolutional layers can accelerate inference of the entire network in exchange of slight increase in error.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper addresses to reduce test-time computational load of DNNs. Another factorization approach is proposed and shows good results. The comparison to the other methods is not comprehensive, the paper provides good insights.", "role": "assistant" } ]
[ "This paper addresses to reduce test-time computational load of DNNs.", "Another factorization approach is proposed and shows good results.", "The comparison to the other methods is not comprehensive, the paper provides good insights." ]
{ "criticism": 1, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 3, "praise": 2, "presentation_and_reporting": 0, "results_and_discussion": 1, "suggestion_and_solution": 0, "total": 3 }
0.333333
0
0.333333
1
0.666667
0
0.333333
0
3
1
0
1
1
1
0
1
0
2.666667
1.088424
1.578243
Vocabulary Selection Strategies for Neural Machine Translation
Classical translation models constrain the space of possible outputs by selecting a subset of translation rules based on the input sentence. Recent work on improving the efficiency of neural translation models adopted a similar strategy by restricting the output vocabulary to a subset of likely candidates given the source. In this paper we experiment with context and embedding-based selection methods and extend previous work by examining speed and accuracy trade-offs in more detail. We show that decoding time on CPUs can be reduced by up to 90% and training time by 25% on the WMT15 English-German and WMT16 English-Romanian tasks at the same or only negligible change in accuracy. This brings the time to decode with a state-of-the-art neural translation system to just over 140 words per second on a single CPU core for English-German.
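One of the simpler context-based selection strategies the abstract alludes to, word co-occurrence, can be sketched in a few lines; the toy parallel corpus and the per-word cutoff k are illustrative assumptions:

```python
from collections import Counter, defaultdict

# Count source-target word co-occurrences over a (toy) parallel corpus.
cooc = defaultdict(Counter)
parallel = [("the cat", "die katze"), ("the dog", "der hund")]
for src, tgt in parallel:
    for s in src.split():
        for t in tgt.split():
            cooc[s][t] += 1

def candidate_vocab(src_sentence, k=100):
    """Restrict the output vocabulary to the union of the k target words
    most frequently co-occurring with each source word."""
    vocab = set()
    for s in src_sentence.split():
        vocab.update(t for t, _ in cooc[s].most_common(k))
    return vocab

print(candidate_vocab("the cat"))  # decode only over this reduced vocabulary
```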
This paper conducts a comprehensive series of experiments on vocabulary selection strategies to reduce the computational cost of neural machine translation. A range of techniques is investigated, ranging from very simple methods such as word co-occurrences to the relatively complex use of SVMs. The experiments are solid, comprehensive and very useful in practical terms. It is good to see that the best vocabulary selection method is very effective at achieving a very high proportion of the coverage of the full-vocabulary model (fig 3). However, I feel that the experiments in section 4.3 (vocabulary selection during training) were rather limited in scope - I would have liked to see more experiments here. A major criticism I have with this paper is that there is little novelty here. The techniques are mostly standard methods and rather simple, and in particular it seems that there is not much additional material beyond the work of Mi et al. (2016). So although the work is solid, the lack of originality lets it down. Minor comments: in 2.1, the word co-occurrence measure - was any smoothing used to make this measure more robust to low counts?
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nVocabulary Selection Strategies for Neural Machine Translation\nClassical translation models constrain the space of possible outputs by selecting a subset of translation rules based on the input sentence. Recent work on improving the efficiency of neural translation models adopted a similar strategy by restricting the output vocabulary to a subset of likely candidates given the source. In this paper we experiment with context and embedding-based selection methods and extend previous work by examining speed and accuracy trade-offs in more detail. We show that decoding time on CPUs can be reduced by up to 90% and training time by 25% on the WMT15 English-German and WMT16 English-Romanian tasks at the same or only negligible change in accuracy. This brings the time to decode with a state of the art neural translation system to just over 140 words per seconds on a single CPU core for English-German.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper conducts a comprehensive series of experiments on vocabulary selection strategies to reduce the computational cost of neural machine translation. A range of techniques are investigated, ranging from very simple methods such as word co-occurences, to the relatively complex use of SVMs. The experiments are solid, comprehensive and very useful in practical terms. It is good to see that the best vocabulary selection method is very effective at achieving a very high proportion of the coverage of the full-vocabulary model (fig 3). However, I feel that the experiments in section 4.3 (vocabulary selection during training) was rather limited in their scope - I would have liked to see more experiments here. A major criticism I have with this paper is that there is little novelty here. The techniques are mostly standard methods and rather simple, and in particular, there it seems that there is not much additional material beyond the work of Mi et al (2016). So although the work is solid, the lack of originality lets it down. 
Minor comments: in 2.1, the word co-occurence measure - was any smoothing used to make this measure more robust to low counts?", "role": "assistant" } ]
[ "This paper conducts a comprehensive series of experiments on vocabulary selection strategies to reduce the computational cost of neural machine translation.", "A range of techniques are investigated, ranging from very simple methods such as word co-occurences, to the relatively complex use of SVMs.", "The experiments are solid, comprehensive and very useful in practical terms.", "It is good to see that the best vocabulary selection method is very effective at achieving a very high proportion of the coverage of the full-vocabulary model (fig 3).", "However, I feel that the experiments in section 4.3 (vocabulary selection during training) was rather limited in their scope - I would have liked to see more experiments here.", "A major criticism I have with this paper is that there is little novelty here.", "The techniques are mostly standard methods and rather simple, and in particular, there it seems that there is not much additional material beyond the work of Mi et al (2016).", "So although the work is solid, the lack of originality lets it down.", "Minor comments: in 2.1, the word co-occurence measure - was any smoothing used to make this measure more robust to low counts?" ]
{ "criticism": 4, "example": 2, "importance_and_relevance": 4, "materials_and_methods": 7, "praise": 5, "presentation_and_reporting": 1, "results_and_discussion": 0, "suggestion_and_solution": 1, "total": 9 }
0.444444
0.222222
0.444444
0.777778
0.555556
0.111111
0
0.111111
9
1
1
1
1
1
1
0
1
2.666667
2.414148
0.252519
Learning Efficient Algorithms with Hierarchical Attentive Memory
In this paper, we propose and investigate a novel memory architecture for neural networks called Hierarchical Attentive Memory (HAM). It is based on a binary tree with leaves corresponding to memory cells. This allows HAM to perform memory access in O(log n) complexity, which is a significant improvement over the standard attention mechanism that requires O(n) operations, where n is the size of the memory. We show that an LSTM network augmented with HAM can learn algorithms for problems like merging, sorting or binary searching from pure input-output examples. In particular, it learns to sort n numbers in time O(n log n) and generalizes well to input sequences much longer than the ones seen during the training. We also show that HAM can be trained to act like classic data structures: a stack, a FIFO queue and a priority queue.
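The O(log n) access pattern can be sketched as a descent through a binary tree over the memory cells; the mean-of-leaves node summaries and the dot-product score below are stand-ins for HAM's learned JOIN and SEARCH networks:

```python
import numpy as np

def ham_access(memory, query, score):
    """Descend a binary tree over `memory` (n x d) in O(log n) steps,
    choosing the child whose summary scores higher against the query."""
    lo, hi = 0, len(memory)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        left = memory[lo:mid].mean(axis=0)    # stand-in for a learned JOIN embedding
        right = memory[mid:hi].mean(axis=0)
        if score(left, query) >= score(right, query):
            hi = mid                          # go left
        else:
            lo = mid                          # go right
    return memory[lo]

rng = np.random.default_rng(0)
mem = rng.normal(size=(8, 4))                 # n = 8 memory cells
q = rng.normal(size=4)
cell = ham_access(mem, q, score=np.dot)       # one O(log n) memory lookup
```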
This paper introduces a novel hierarchical memory architecture for neural networks, based on a binary tree with leaves corresponding to memory cells. This allows for O(log n) memory access, and experiments additionally demonstrate the ability to solve more challenging tasks such as sorting from pure input-output examples and dealing with longer sequences. The idea of the paper is novel and well-presented, and the memory structure seems likely to offer advantages in practice. However, the main weakness of the paper is the experiments. There is no experimental comparison with other external memory-based approaches (e.g. those discussed in Related Work), nor any experimental analysis of computational efficiency given overhead costs (beyond asymptotic complexity), despite that being one of the main claimed advantages. Furthermore, the experimental setups are relatively weak, all on artificial tasks with moderate increases in sequence length. Improving on these would greatly strengthen the paper, as the core idea is interesting.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning Efficient Algorithms with Hierarchical Attentive Memory\nIn this paper, we propose and investigate a novel memory architecture for neural networks called Hierarchical Attentive Memory (HAM). It is based on a binary tree with leaves corresponding to memory cells. This allows HAM to perform memory access in O(log n) complexity, which is a significant improvement over the standard attention mechanism that requires O(n) operations, where n is the size of the memory. We show that an LSTM network augmented with HAM can learn algorithms for problems like merging, sorting or binary searching from pure input-output examples. In particular, it learns to sort n numbers in time O(n log n) and generalizes well to input sequences much longer than the ones seen during the training. We also show that HAM can be trained to act like classic data structures: a stack, a FIFO queue and a priority queue.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper introduces a novel hierarchical memory architecture for neural networks, based on a binary tree with leaves corresponding to memory cells. This allows for O(log n) memory access, and experiments additionally demonstrate ability to solve more challenging tasks such as sorting from pure input-output examples and dealing with longer sequences. The idea of the paper is novel and well-presented, and the memory structure seems reasonable to have advantages in practice. However, the main weakness of the paper is the experiments. There is no experimental comparison with other external memory-based approaches (e.g. those discussed in Related Work), or experimental analysis of computational efficiency given overhead costs (beyond just computational complexity) despite that being one of the main advantages. Furthermore, the experimental setups are relatively weak, all on artificial tasks with moderate increases in sequence length. Improving on these would greatly strengthen the paper, as the core idea is interesting.", "role": "assistant" } ]
[ "This paper introduces a novel hierarchical memory architecture for neural networks, based on a binary tree with leaves corresponding to memory cells.", "This allows for O(log n) memory access, and experiments additionally demonstrate ability to solve more challenging tasks such as sorting from pure input-output examples and dealing with longer sequences.", "The idea of the paper is novel and well-presented, and the memory structure seems reasonable to have advantages in practice.", "However, the main weakness of the paper is the experiments.", "There is no experimental comparison with other external memory-based approaches (e.g.", "those discussed in Related Work), or experimental analysis of computational efficiency given overhead costs (beyond just computational complexity) despite that being one of the main advantages.", "Furthermore, the experimental setups are relatively weak, all on artificial tasks with moderate increases in sequence length.", "Improving on these would greatly strengthen the paper, as the core idea is interesting." ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 6, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 1, "suggestion_and_solution": 1, "total": 8 }
0.375
0
0.375
0.75
0.25
0.125
0.125
0.125
8
1
0
1
1
1
1
1
1
2.125
1.730439
0.394561
New Learning Approach By Genetic Algorithm In A Convolutional Neural Network For Pattern Recognition
Almost all of the presented articles in the CNN are based on the error backpropagation algorithm and calculation of derivations of error, our innovative proposal refers to engaging TICA filters and NSGA-II genetic algorithms to train the LeNet-5 CNN network. Consequently, genetic algorithm updates the weights of LeNet-5 CNN network similar to chromosome update. In our approach the weights of LeNet-5 are obtained in two stages. The first is pre-training and the second is fine-tuning. As a result, our approach impacts in learning task.
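For orientation, a generic genetic-algorithm update over flattened weight vectors ("chromosomes") looks roughly like the sketch below; this is a plain GA with uniform crossover and Gaussian mutation, not the paper's NSGA-II or TICA pre-training:

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_step(population, fitness, mut_std=0.01):
    """One generation: keep the fitter half, recombine pairs of parents
    with uniform crossover, and mutate the offspring."""
    scores = np.array([fitness(w) for w in population])
    parents = population[np.argsort(scores)[-len(population) // 2:]]
    children = []
    while len(children) < len(population) - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        mask = rng.random(a.shape) < 0.5                 # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(0, mut_std, a.shape))
    return np.concatenate([parents, np.array(children)])

pop = rng.normal(size=(20, 100))                         # 20 flattened weight vectors
pop = ga_step(pop, fitness=lambda w: -np.sum(w ** 2))    # maximize a toy fitness
```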
The paper is still extremely poorly written and presented, despite multiple reviewers asking the authors to address that issue. The frequent spelling mistakes, incoherent sentences, and unclear presentation make reading and understanding the paper very difficult and time-consuming. Consider getting help from someone with strong English and presentation skills.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNew Learning Approach By Genetic Algorithm In A Convolutional Neural Network For Pattern Recognition\nAlmost all of the presented articles in the CNN are based on the error backpropagation algorithm and calculation of derivations of error, our innovative proposal refers to engaging TICA filters and NSGA-II genetic algorithms to train the LeNet-5 CNN network. Consequently, genetic algorithm updates the weights of LeNet-5 CNN network similar to chromosome update. In our approach the weights of LeNet-5 are obtained in two stages. The first is pre-training and the second is fine-tuning. As a result, our approach impacts in learning task.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThe paper is still extremely poorly written and presented despite multiple reviewers asking to address that issue. The frequent spelling mistakes and incoherent sentences and unclear presentation make reading and understanding the paper very difficult and time consuming. Consider getting help from someone with good english and presentation skills.", "role": "assistant" } ]
[ "The paper is still extremely poorly written and presented despite multiple reviewers asking to address that issue.", "The frequent spelling mistakes and incoherent sentences and unclear presentation make reading and understanding the paper very difficult and time consuming.", "Consider getting help from someone with good english and presentation skills." ]
{ "criticism": 2, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 0, "praise": 1, "presentation_and_reporting": 3, "results_and_discussion": 0, "suggestion_and_solution": 1, "total": 3 }
0.666667
0
0
0
0.333333
1
0
0.333333
3
1
0
0
0
1
1
0
1
2.333333
0.75509
1.578243
SoftTarget Regularization: An Effective Technique to Reduce Over-Fitting in Neural Networks
Deep neural networks are learning models with a very high capacity and therefore prone to over-fitting. Many regularization techniques such as Dropout, DropConnect, and weight decay all attempt to solve the problem of over-fitting by reducing the capacity of their respective models (Srivastava et al., 2014), (Wan et al., 2013), (Krogh & Hertz, 1992). In this paper we introduce a new form of regularization that guides the learning problem in a way that reduces over-fitting without sacrificing the capacity of the model. The mistakes that models make in early stages of training carry information about the learning problem. By adjusting the labels of the current epoch of training through a weighted average of the real labels, and an exponential average of the past soft-targets we achieved a regularization scheme as powerful as Dropout without necessarily reducing the capacity of the model, and simplified the complexity of the learning problem. SoftTarget regularization proved to be an effective tool in various neural network architectures.
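The label-adjustment rule in the abstract fits in a few lines; the decay and mixing parameters (beta, gamma) and their values below are illustrative names, not the paper's notation:

```python
import numpy as np

def soft_targets(y_true, y_pred, y_ema, beta=0.9, gamma=0.5):
    """Return the SoftTarget-style training label: a weighted average of
    the hard label and an exponential moving average of past predictions."""
    y_ema = beta * y_ema + (1.0 - beta) * y_pred     # update the EMA of soft targets
    target = gamma * y_true + (1.0 - gamma) * y_ema  # mixed label used in the loss
    return target, y_ema

y_true = np.array([0.0, 1.0])    # hard one-hot label
y_ema = y_true.copy()            # initialize the EMA with the hard labels
y_pred = np.array([0.3, 0.7])    # current epoch's softmax output
target, y_ema = soft_targets(y_true, y_pred, y_ema)
```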
Inspired by the analysis of the effect of co-label similarity (Hinton et al., 2015), this paper proposes a soft-target regularization that iteratively trains the network using a weighted average of the exponential moving average of past labels and the hard labels as the target argument of the loss. They claim that this prevents the disappearance of co-label similarity after early training and yields a regularization competitive with dropout without sacrificing network capacity. In order to make a fair comparison to dropout, the dropout rate should be tuned carefully. Showing that it performs better than dropout regularization for some particular values of dropout (Table 2) does not demonstrate a convincing advantage. It is possible that dropout performs better after reasonable tuning with cross-validation. The baseline architectures used in the experiments do not belong to recent state-of-the-art methods, thus yielding significantly lower accuracy. It also seems that the experimental setup does not involve any data augmentation; the results could change with augmentation. It is not clear why the number of epochs is set to a small value like 100 without any convergence tests. Therefore the significance of the method is not convincingly demonstrated in the empirical study. Co-label similarities could be calculated using the softmax outputs at the final layer rather than the predicted labels. The advantage over dropout is not clear in Figure 4; the dropout rate is set to 0.2 without any cross-validation. Regularizing by enforcing the training steps to keep co-label similarities is an interesting idea but not very novel, and the results are not significant. Pros: - provides an investigation of regularization on co-label similarity during training. Cons: - the empirical results do not support the intuitive claims regarding the proposed procedure; - the iterative version can be unstable in practice.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSoftTarget Regularization: An Effective Technique to Reduce Over-Fitting in Neural Networks\nDeep neural networks are learning models with a very high capacity and therefore prone to over-fitting. Many regularization techniques such as Dropout, DropConnect, and weight decay all attempt to solve the problem of over-fitting by reducing the capacity of their respective models (Srivastava et al., 2014), (Wan et al., 2013), (Krogh & Hertz, 1992). In this paper we introduce a new form of regularization that guides the learning problem in a way that reduces over-fitting without sacrificing the capacity of the model. The mistakes that models make in early stages of training carry information about the learning problem. By adjusting the labels of the current epoch of training through a weighted average of the real labels, and an exponential average of the past soft-targets we achieved a regularization scheme as powerful as Dropout without necessarily reducing the capacity of the model, and simplified the complexity of the learning problem. SoftTarget regularization proved to be an effective tool in various neural network architectures.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nInspired by the analysis on the effect of the co-label similarity (Hinton et al., 2015), this paper proposes a soft-target regularization that iteratively trains the network using weighted average of the exponential moving average of past labels and hard labels as target argument of loss. They claim that this prevents the disappearing of co-label similarity after early training and yields a competitive regularization to dropout without sacrificing network capacity. In order to make a fair comparison to dropout, the dropout should be tuned carefully. Showing that it performs better than dropout regularization for some particular values of dropout (Table 2) does not demonstrate a convincing advantage. It is possible that dropout performs better after a reasonable tuning with cross-validation. 
The baseline architectures used in the experiments do not belong the recent state of art methods thus yielding significantly lower accuracy. It seems also that experiment setup does not involve any data augmentation, the results can also change with augmentation. It is not clear why number of epochs are set to a small number like 100 without putting some convergence tests.. Therefore the significance of the method is not convincingly demonstrated in empirical study. Co-label similarities could be calculated using softmax results at final layer rather than using predicted labels. The advantage over dropout is not clear in Figure 4, the dropout is set to 0.2 without any cross-validation. Regularizing by enforcing the training steps to keep co-label similarities is interesting idea but not very novel and the results are not significant. Pros : - provides an investigation of regularization on co-label similarity during training Cons: -The empirical results do not support the intuitive claims regarding proposed procedure Iterative version can be unstable in practice", "role": "assistant" } ]
[ "Inspired by the analysis on the effect of the co-label similarity (Hinton et al., 2015), this paper proposes a soft-target regularization that iteratively trains the network using weighted average of the exponential moving average of past labels and hard labels as target argument of loss.", "They claim that this prevents the disappearing of co-label similarity after early training and yields a competitive regularization to dropout without sacrificing network capacity.", "In order to make a fair comparison to dropout, the dropout should be tuned carefully.", "Showing that it performs better than dropout regularization for some particular values of dropout (Table 2) does not demonstrate a convincing advantage.", "It is possible that dropout performs better after a reasonable tuning with cross-validation.", "The baseline architectures used in the experiments do not belong the recent state of art methods thus yielding significantly lower accuracy.", "It seems also that experiment setup does not involve any data augmentation, the results can also change with augmentation.", "It is not clear why number of epochs are set to a small number like 100 without putting some convergence tests..", "Therefore the significance of the method is not convincingly demonstrated in empirical study.", "Co-label similarities could be calculated using softmax results at final layer rather than using predicted labels.", "The advantage over dropout is not clear in Figure 4, the dropout is set to 0.2 without any cross-validation.", "Regularizing by enforcing the training steps to keep co-label similarities is interesting idea but not very novel and the results are not significant.", "Pros : - provides an investigation of regularization on co-label similarity during training Cons: -The empirical results do not support the intuitive claims regarding proposed procedure Iterative version can be unstable in practice" ]
{ "criticism": 6, "example": 1, "importance_and_relevance": 2, "materials_and_methods": 12, "praise": 1, "presentation_and_reporting": 1, "results_and_discussion": 7, "suggestion_and_solution": 2, "total": 13 }
0.461538
0.076923
0.153846
0.923077
0.076923
0.076923
0.538462
0.153846
13
1
1
1
1
1
1
1
1
2.461538
2.461538
0
Surprisal-Driven Feedback in Recurrent Networks
Recurrent neural nets are widely used for predicting temporal data. Their inherent deep feedforward structure allows learning complex sequential patterns. It is believed that top-down feedback might be an important missing ingredient which in theory could help disambiguate similar patterns depending on broader context. In this paper, we introduce surprisal-driven recurrent networks, which take into account past error information when making new predictions. This is achieved by continuously monitoring the discrepancy between most recent predictions and the actual observations. Furthermore, we show that it outperforms other stochastic and fully deterministic approaches on enwik8 character level prediction task achieving 1.37 BPC.
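The feedback loop described above can be sketched with a plain recurrent cell; the tanh cell, the weight names, and the vanilla-RNN form are simplifying assumptions (the paper uses an LSTM):

```python
import numpy as np

def surprisal_step(h, x, err_prev, params):
    """One recurrent step where the previous prediction error is fed
    back as an extra input to the state update."""
    Wx, Wh, We, Wo = params
    h = np.tanh(Wx @ x + Wh @ h + We @ err_prev)   # error joins the update
    logits = Wo @ h
    p = np.exp(logits - logits.max())
    return h, p / p.sum()                          # new state, next-symbol distribution

rng = np.random.default_rng(0)
d, v = 8, 4                                        # state and vocabulary sizes
params = [rng.normal(size=s) for s in [(d, v), (d, d), (d, v), (v, d)]]
h, err = np.zeros(d), np.zeros(v)
x = np.eye(v)[0]                                   # one-hot current symbol
h, p = surprisal_step(h, x, err, params)
err = np.eye(v)[1] - p                             # surprisal: observation minus prediction
```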
This paper proposes to use the previous error signal of the output layer as an additional input to the recurrent update function in order to enhance the modelling power of a dynamic system such as an RNN. - This paper makes an erroneous assumption: test label information is not given in most real-world applications, except for a few. This means that the language modelling task, which is the only experiment of this paper, may not be the right task to test this approach. Also, comparing against models that do not use the test error signal at inference time is unfair. We cannot just say that the test label information is being observed; this only holds in online-prediction problems. - The experiment is only conducted on one dataset, reporting a state-of-the-art result, but unfortunately this claim is not true. There are already more than four papers reporting better numbers on this task; however, the author did not cite them. I understand that this paper came before the other papers, but the manuscript should be updated before the final decision. - The model size is still missing, and without this information it is hard to judge the contribution of the proposed trick.
iclr2017
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSurprisal-Driven Feedback in Recurrent Networks\nRecurrent neural nets are widely used for predicting temporal data. Their inherent deep feedforward structure allows learning complex sequential patterns. It is believed that top-down feedback might be an important missing ingredient which in theory could help disambiguate similar patterns depending on broader context. In this paper, we introduce surprisal-driven recurrent networks, which take into account past error information when making new predictions. This is achieved by continuously monitoring the discrepancy between most recent predictions and the actual observations. Furthermore, we show that it outperforms other stochastic and fully deterministic approaches on enwik8 character level prediction task achieving 1.37 BPC.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\nThis paper proposes to use previous error signal of the output layer as an additional input to recurrent update function in order to enhance the modelling power of a dynamic system such as RNNs. -This paper makes an erroneous assumption: test label information is not given in most of the real world applications, except few applications. This means that the language modelling task, which is the only experiment of this paper, may not be the right task to test this approach. Also, comparing against the models that do not use test error signal at inference time is unfair. We cannot just say that the test label information is being observed, this only holds in online-prediction problems. -The experiment is only conducted on one dataset, reporting state-of-the-art result, but unfortunately this is not true. There are already more than four papers reporting better numbers than the one reported in this task, however the author did not cite them. I understand that this paper came before the other papers, but the manuscript should be updated before the final decision. 
-The model size is still missing and without this information, it is hard to judge the contribution of the proposed trick.", "role": "assistant" } ]
[ "This paper proposes to use previous error signal of the output layer as an additional input to recurrent update function in order to enhance the modelling power of a dynamic system such as RNNs.", "-This paper makes an erroneous assumption: test label information is not given in most of the real world applications, except few applications.", "This means that the language modelling task, which is the only experiment of this paper, may not be the right task to test this approach.", "Also, comparing against the models that do not use test error signal at inference time is unfair.", "We cannot just say that the test label information is being observed, this only holds in online-prediction problems.", "-The experiment is only conducted on one dataset, reporting state-of-the-art result, but unfortunately this is not true.", "There are already more than four papers reporting better numbers than the one reported in this task, however the author did not cite them.", "I understand that this paper came before the other papers, but the manuscript should be updated before the final decision.", "-The model size is still missing and without this information, it is hard to judge the contribution of the proposed trick." ]
{ "criticism": 4, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 8, "praise": 0, "presentation_and_reporting": 4, "results_and_discussion": 3, "suggestion_and_solution": 2, "total": 9 }
0.444444
0
0
0.888889
0
0.444444
0.333333
0.222222
9
1
0
0
1
0
1
1
1
2.333333
2.080814
0.252519
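For orientation, a hedged sketch of the mechanism the review in this record debates — feeding the previous step's prediction error (surprisal) back into the recurrent update. Every name and size here (the LSTM cell, layer widths, the toy stream) is an illustrative assumption, not the paper's architecture; note that, as the reviewer points out, the fed-back surprisal requires the true next symbol, which is only available in online-prediction settings.

```python
import torch
import torch.nn as nn

# Hedged sketch of surprisal-driven feedback: the previous step's prediction
# error (surprisal) is appended to the next input. All names and sizes are
# illustrative assumptions, not the paper's architecture.
vocab, emb_dim, hidden = 50, 32, 64
embed = nn.Embedding(vocab, emb_dim)
cell = nn.LSTMCell(emb_dim + 1, hidden)  # one extra slot for the surprisal signal
readout = nn.Linear(hidden, vocab)

seq = torch.randint(0, vocab, (20,))     # toy character stream
h, c = torch.zeros(1, hidden), torch.zeros(1, hidden)
surprisal = torch.zeros(1, 1)            # no error signal before the first step
loss = 0.0
for t in range(len(seq) - 1):
    x = torch.cat([embed(seq[t].view(1)), surprisal], dim=1)
    h, c = cell(x, (h, c))
    logp = torch.log_softmax(readout(h), dim=1)
    nll = -logp[0, seq[t + 1]]           # needs the true next symbol (online setting)
    surprisal = nll.detach().view(1, 1)  # fed back at the next step
    loss = loss + nll
loss.backward()
```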
Recasting Gradient-Based Meta-Learning as Hierarchical Bayes
Meta-learning allows an intelligent agent to leverage prior learning episodes as a basis for quickly improving performance on a novel task. Bayesian hierarchical modeling provides a theoretical framework for formalizing meta-learning as inference for a set of parameters that are shared across tasks. Here, we reformulate the model-agnostic meta-learning algorithm (MAML) of Finn et al. (2017) as a method for probabilistic inference in a hierarchical Bayesian model. In contrast to prior methods for meta-learning via hierarchical Bayes, MAML is naturally applicable to complex function approximators through its use of a scalable gradient descent procedure for posterior inference. Furthermore, the identification of MAML as hierarchical Bayes provides a way to understand the algorithm’s operation as a meta-learning procedure, as well as an opportunity to make use of computational strategies for efficient inference. We use this opportunity to propose an improvement to the MAML algorithm that makes use of techniques from approximate inference and curvature estimation.
MAML (Finn+ 2017) is recast as a hierarchical Bayesian learning procedure. In particular, the inner (task) training is initially cast as point-wise max likelihood estimation, and then (sec4) improved upon by making use of the Laplace approximation. Experimental evidence of the relevance of the method is provided on a toy task involving a NIW prior of Gaussians, and the (benchmark) MiniImageNet task. Casting MAML as HB seems a good idea. The paper does a good job of explaining the connection, but I think the presentation could be clarified. The role of the task prior and how it emerges from early stopping (ie a finite number of gradient descent steps) (sec 3.2) is original and technically non-trivial, and is a contribution of this paper. The synthetic data experiment sec5.1 and fig5 is clearly explained and serves to additionally clarify the proposed method. Regarding the MiniImageNet experiments, I read the exchange on TCML and agree with the authors of the paper under review. However, I recommend including the references to Mukhdalai 2017 and Sung 2017 in the footnote on TCML to strengthen the point more generically, and show that not just TCML but other non-shallow architectures are not considered for comparison here. In addition, the point made by the TCML authors is fair ("nothing prevented you from...") and I would also recommend mentioning the reviewed paper's authors' decision (not to test deeper architectures) in the footnote. This decision is in order, but needs to be stated for the reader to form a balanced view of the methods at her disposal. The experimental improvement reported in Table 1 remains small and largely within one standard deviation of competitor methods. I am assessing this paper as "7" because despite the merit of the paper, the relevance of the reformulation of MAML, and the technical steps involved in the reformulation, the paper does not eg address other forms (than L-MAML) of the task-specific subroutine ML-..., and the benchmark improvements are quite small. I think the approach is good and fruitful. # Suggestions on readability * I have the feeling the paper inverts the two step-size symbols from their use in Finn 2017 (step size for meta- vs task-training). This is unfortunate and will certainly confuse readers; I advise carefully changing this throughout the entire paper (eg Algo 2,3,4, eq 1, last eq in sec3.1, eq in text below eq3, etc) * I advise avoiding the use of the symbol f, which appears in only two places in Algo 2 and the end of sec 3.1. This is in part because f is given another meaning in Finn 2017, but also out of general parsimony in symbol use. (could leave the output of ML-... implicit by writing ML-...(θ, T)_j; if absolutely needed, use another symbol than f) * Maybe sec3 can be clarified in its structure by re-ordering the points on the quadratic error function and early stopping (eg avoiding splitting them between the end of 3.1 and 3.2). * sec6 "Machine learning and deep learning": I would definitely avoid this formulation, which seems to tie in with all the media nonsense on "what's the difference between ML and DL?". In addition, the formulation seems to contrast ML with hierarchical Bayesian modeling, which does not make sense / is wrong and confusing. # Typos * sec1 second parag: did you really mean "in the architecture or loss function"? unclear.
* sec2: over a family * "common structure, so that" (not such that) * orthgonal * sec2.1 suggestion: clarify that θ and φ are in the same space * sec2.2 suggestion: task-specific parameter is distinct from ... parameters φ_j (I would avoid writing just φ altogether to distinguish it in usage from θ) * Gaussian-noised * approximation of the it objective * before eq9: "that solves": well, it doesn't really "solve" the minimisation, in that it is not a minimum; reformulate this? * sec4.1 innaccurate * well approximated * sec4.2 an curvature * (Amari 1989) * For the the Laplace * O(n^3): what is n? * sec5.2 (Ravi and L 2017) * for the the
iclr2018
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRecasting Gradient-Based Meta-Learning as Hierarchical Bayes\nMeta-learning allows an intelligent agent to leverage prior learning episodes as a basis for quickly improving performance on a novel task. Bayesian hierarchical modeling provides a theoretical framework for formalizing meta-learning as inference for a set of parameters that are shared across tasks. Here, we reformulate the model-agnostic meta-learning algorithm (MAML) of Finn et al. (2017) as a method for probabilistic inference in a hierarchical Bayesian model. In contrast to prior methods for meta-learning via hierarchical Bayes, MAML is naturally applicable to complex function approximators through its use of a scalable gradient descent procedure for posterior inference. Furthermore, the identification of MAML as hierarchical Bayes provides a way to understand the algorithm’s operation as a meta-learning procedure, as well as an opportunity to make use of computational strategies for efficient inference. We use this opportunity to propose an improvement to the MAML algorithm that makes use of techniques from approximate inference and curvature estimation.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n MAML (Finn+ 2017) is recast as a hierarchical Bayesian learning procedure. In particular the inner (task) training is initially cast as point-wise max likelihood estimation, and then (sec4) improved upon by making use of the Laplace approximation. Experimental evidence of the relevance of the method is provided on a toy task involving a NIW prior of Gaussians, and the (benchmark) MiniImageNet task. Casting MAML as HB seems a good idea. The paper does a good job of explaining the connection, but I think the presentation could be clarified. The role of the task prior and how it emerges from early stopping (ie a finite number of gradient descent steps) (sec 3.2) is original and technically non-trivial, and is a contribution of this paper. 
The synthetic data experiment sec5.1 and fig5 is clearly explained and serves to additionally clarify the proposed method. Regarding the MiniImageNet experiments, I read the exchange on TCML and agree with the authors of the paper under review. However, I recommend including the references to Mukhdalai 2017 and Sung 2017 in the footnote on TCML to strengthen the point more generically, and show that not just TCML but other non-shallow architectures are not considered for comparison here. In addition, the point made by the TCML authors is fair (\"nothing prevented you from...\") and I would also recommend mentioning the reviewed paper*s authors* decision (not to test deeper architectures) in the footnote. This decision is in order but needs to be stated in order for the reader to form a balanced view of methods at her disposal. The experimental performance reported Table 1 remains small and largely within one standard deviation of competitor methods. I am assessing this paper as \"7\" because despite the merit of the paper, the relevance of the reformulation of MAML, and the technical steps involved in the reformulation, the paper does not eg address other forms (than L-MAML) of the task-specific subroutine ML-..., and the benchmark improvements are quite small. I think the approach is good and fruitful. # Suggestions on readability * I have the feeling the paper inverts from their use in Finn 2017 (step size for meta- vs task-training). This is unfortunate and will certainly confuse readers; I advise carefully changing this throughout the entire paper (eg Algo 2,3,4, eq 1, last eq in sec3.1, eq in text below eq3, etc) * I advise avoiding the use of the symbol f, which appears in only two places in Algo 2 and the end of sec 3.1. This is in part because f is given another meaning in Finn 2017, but also out of general parsimony in symbol use. (could leave the output of ML-... implicit by writing ML-...(.theta, T)_j in the ; if absolutely needed, use another symbol than f) * Maybe sec3 can be clarified in its structure by re-ordering points on the quadratic error function and early stopping (eg avoiding to split them between end of 3.1 and 3.2). * sec6 \"Machine learning and deep learning\": I would definitely avoid this formulation, seems to tail in with all the media nonsense on \"what*s the difference between ML and DL ?\". In addition the formulation seems to contrast ML with hierarchical Bayesian modeling, which does not make sense/ is wrong and confusing. # Typos * sec1 second parag: did you really mean \"in the architecture or loss function\"? unclear. * sec2: over a family * \"common structure, so that\" (not such that) * orthgonal * sec2.1 suggestion: clarify that .theta and .phi are in the same space * sec2.2 suggestion: task-specific parameter is distinct from ... parameters .phi_j$ (I would avoid writing just .phi altogether to distinguish in usage from .theta) * Gaussian-noised * approximation of the it objective * before eq9: \"that solves\": well, it doesn*t really \"solve\" the minimisation, in that it is not a minimum; reformulate this? * sec4.1 innaccurate * well approximated * sec4.2 an curvature * (Amari 1989) * For the the Laplace * O(n^3) : what is n ? * sec5.2 (Ravi and L 2017) * for the the", "role": "assistant" } ]
[ " MAML (Finn+ 2017) is recast as a hierarchical Bayesian learning procedure.", "In particular the inner (task) training is initially cast as point-wise max likelihood estimation, and then (sec4) improved upon by making use of the Laplace approximation.", "Experimental evidence of the relevance of the method is provided on a toy task involving a NIW prior of Gaussians, and the (benchmark) MiniImageNet task.", "Casting MAML as HB seems a good idea.", "The paper does a good job of explaining the connection, but I think the presentation could be clarified.", "The role of the task prior and how it emerges from early stopping (ie a finite number of gradient descent steps) (sec 3.2) is original and technically non-trivial, and is a contribution of this paper.", "The synthetic data experiment sec5.1 and fig5 is clearly explained and serves to additionally clarify the proposed method.", "Regarding the MiniImageNet experiments, I read the exchange on TCML and agree with the authors of the paper under review.", "However, I recommend including the references to Mukhdalai 2017 and Sung 2017 in the footnote on TCML to strengthen the point more generically, and show that not just TCML but other non-shallow architectures are not considered for comparison here.", "In addition, the point made by the TCML authors is fair (\"nothing prevented you from...\") and I would also recommend mentioning the reviewed paper*s authors* decision (not to test deeper architectures) in the footnote.", "This decision is in order but needs to be stated in order for the reader to form a balanced view of methods at her disposal.", "The experimental performance reported Table 1 remains small and largely within one standard deviation of competitor methods.", "I am assessing this paper as \"7\" because despite the merit of the paper, the relevance of the reformulation of MAML, and the technical steps involved in the reformulation, the paper does not eg address other forms (than L-MAML) of the task-specific subroutine ML-..., and the benchmark improvements are quite small.", "I think the approach is good and fruitful.", "# Suggestions on readability * I have the feeling the paper inverts from their use in Finn 2017 (step size for meta- vs task-training).", "This is unfortunate and will certainly confuse readers; I advise carefully changing this throughout the entire paper (eg Algo 2,3,4, eq 1, last eq in sec3.1, eq in text below eq3, etc) * I advise avoiding the use of the symbol f, which appears in only two places in Algo 2 and the end of sec 3.1.", "This is in part because f is given another meaning in Finn 2017, but also out of general parsimony in symbol use.", "(could leave the output of ML-... 
implicit by writing ML-...(.theta, T)_j in the ; if absolutely needed, use another symbol than f) * Maybe sec3 can be clarified in its structure by re-ordering points on the quadratic error function and early stopping (eg avoiding to split them between end of 3.1 and 3.2).", "* sec6 \"Machine learning and deep learning\": I would definitely avoid this formulation, seems to tail in with all the media nonsense on \"what*s the difference between ML and DL ?", "\".", "In addition the formulation seems to contrast ML with hierarchical Bayesian modeling, which does not make sense/ is wrong and confusing.", "# Typos * sec1 second parag: did you really mean \"in the architecture or loss function\"?", "unclear.", "* sec2: over a family * \"common structure, so that\" (not such that) * orthgonal * sec2.1 suggestion: clarify that .theta and .phi are in the same space * sec2.2 suggestion: task-specific parameter is distinct from ... parameters .phi_j$ (I would avoid writing just .phi altogether to distinguish in usage from .theta) * Gaussian-noised * approximation of the it objective * before eq9: \"that solves\": well, it doesn*t really \"solve\" the minimisation, in that it is not a minimum; reformulate this?", "* sec4.1 innaccurate * well approximated * sec4.2 an curvature * (Amari 1989) * For the the Laplace * O(n^3) : what is n ?", "* sec5.2 (Ravi and L 2017) * for the the" ]
{ "criticism": 7, "example": 8, "importance_and_relevance": 5, "materials_and_methods": 12, "praise": 3, "presentation_and_reporting": 10, "results_and_discussion": 4, "suggestion_and_solution": 7, "total": 26 }
0.269231
0.307692
0.192308
0.461538
0.115385
0.384615
0.153846
0.269231
26
1
1
1
1
1
1
1
1
2.153846
-0.513384
2.667231
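As a companion to the review in this record, here is a minimal sketch of the MAML double loop it analyzes, written with the step-size convention the reviewer attributes to Finn et al. (2017): alpha for the inner task update that produces the task parameters phi_j, beta for the outer meta update of theta. The linear-regression tasks, sizes, and loss are all invented for illustration.

```python
import torch

# Toy sketch of the MAML double loop, using Finn et al.'s convention as the
# reviewer describes it: alpha = inner/task step size, beta = outer/meta step
# size. The linear-regression tasks, sizes, and loss are invented.
def loss_fn(w, task):
    x, y = task
    return ((x @ w - y) ** 2).mean()

alpha, beta = 0.01, 0.001
theta = torch.zeros(5, requires_grad=True)           # shared meta-parameters
tasks = [(torch.randn(10, 5), torch.randn(10)) for _ in range(4)]

meta_loss = 0.0
for task in tasks:
    # Inner step: one gradient step from theta yields task parameters phi_j.
    g = torch.autograd.grad(loss_fn(theta, task), theta, create_graph=True)[0]
    phi = theta - alpha * g
    meta_loss = meta_loss + loss_fn(phi, task)       # post-update loss

# Outer step: differentiate the summed post-update loss back to theta.
theta_grad = torch.autograd.grad(meta_loss, theta)[0]
with torch.no_grad():
    theta -= beta * theta_grad
```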
Recasting Gradient-Based Meta-Learning as Hierarchical Bayes
Meta-learning allows an intelligent agent to leverage prior learning episodes as a basis for quickly improving performance on a novel task. Bayesian hierarchical modeling provides a theoretical framework for formalizing meta-learning as inference for a set of parameters that are shared across tasks. Here, we reformulate the model-agnostic meta-learning algorithm (MAML) of Finn et al. (2017) as a method for probabilistic inference in a hierarchical Bayesian model. In contrast to prior methods for meta-learning via hierarchical Bayes, MAML is naturally applicable to complex function approximators through its use of a scalable gradient descent procedure for posterior inference. Furthermore, the identification of MAML as hierarchical Bayes provides a way to understand the algorithm’s operation as a meta-learning procedure, as well as an opportunity to make use of computational strategies for efficient inference. We use this opportunity to propose an improvement to the MAML algorithm that makes use of techniques from approximate inference and curvature estimation.
Summary The paper presents an interesting view on the recently proposed MAML formulation of meta-learning (Finn et al). The main contribution is a) insight into the connection between the MAML procedure and MAP estimation in an equivalent linear hierarchical Bayes model with explicit priors, b) insight into the connection between MAML and MAP estimation in non-linear HB models with implicit priors, c) based on these insights, the paper proposes a variant of MAML using a Laplace approximation (with additional approximations for the covariance matrix). The paper finally provides an evaluation on the mini ImageNet problem without significantly improving on the MAML results on the same task. Pro: - The topic is timely and of relevance to the ICLR community, continuing a current trend in building meta-learning systems for few-shot learning. - Provides valuable insight into the MAML objective and its relation to probabilistic models Con: - The paper is generally well-written but I find (as a non-expert in meta-learning) that certain fundamental aspects could have been explained better or in more detail (see below for details). - The toy example is quite difficult to interpret the first time around and does not provide any empirical insight into the convergence of the proposed method (compared to e.g. MAML) - I do not think the empirical results provide enough evidence that it is a useful/robust method. Especially, it does not provide insight into which types of problems (small/large, linear/non-linear) the method is applicable to. Detailed comments/questions: - The use of the Laplace approximation is (in the paper) motivated from a probabilistic/Bayes and uncertainty point of view. It would, however, seem that the truncated iterations do not result in the approximation being very accurate during optimization, as the truncation does not result in the approximation being created at a mode. Could the authors perhaps comment on: a) whether it is even meaningful to talk about the approximations as probability distributions during the optimization (given the psd approximation to the Hessian), or does it only make sense after convergence? b) the consequence of the approximation errors on the general convergence of the proposed method (consistency and rate) - Sec 4.1, p5: Last equation: Perhaps useful to explain the term and why it is not in subroutine 4. Should be ? - Sec 4.2: “A straightforward…”: I think it would improve readability to refer back to the previous equation (i.e. H) such that it is clear what is meant by “straightforward”. - Sec 4.2: Several ideas are being discussed in Sec 4.2 and it is not entirely clear to me what has actually been adopted here; perhaps consider formalizing the actual computations in Subroutine 4 – and provide a clearer argument (preferably a proof) that this leads to a consistent and robust estimator of θ. - It is not clear from the text or experiments how the learning parameters are set. - Sec 5.1: It took some effort to understand exactly what was going on in the example and in particular figure 5.1; e.g., in the model definition in the body text there is no mention of the NN mentioned/used in figure 5, the blue points are not defined in the caption, and terminology such as “pre-update density” is new at this point. I think it would benefit readability to provide the reader with a bit more guidance. - Sec 5.1: While the qualitative example is useful (with a bit more text), I believe it would have been more convincing with a quantitative example to demonstrate e.g. the convergence of the proposal compared to std MAML, and possibly to compare to a std Bayesian inference method from the HB formulation of the problem (in the linear case) - Sec 5.2: The abstract claims increased performance over MAML, but the empirical results do not seem to be significantly better than MAML? I find it quite difficult to support the specific claim in the abstract from the results without adding a comment about the significance. - Sec 5.2: The authors have left out “Mishra et al” from the comparison due to the model being significantly larger than the others. Could the authors provide insight into why they did not use the ResNet structure from the TCML paper in their L-MAML scheme? - Sec 6+7: The paper clearly states that it is not the aim to (generally) formulate MAML as HB. Given the advancements in gradient-based inference for HB over the last couple of years (e.g. variational, nested Laplace, expectation propagation, etc.) for explicit models, could the authors perhaps indicate why they believe their approach of looking directly at the MAML objective is more scalable/useful than trying to formulate the same or a similar objective in an explicit HB model and using established inference methods from that area? Minor: - Sec 4.1 “…each integral in the sum in (2)…” eq 2 is a product
iclr2018
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRecasting Gradient-Based Meta-Learning as Hierarchical Bayes\nMeta-learning allows an intelligent agent to leverage prior learning episodes as a basis for quickly improving performance on a novel task. Bayesian hierarchical modeling provides a theoretical framework for formalizing meta-learning as inference for a set of parameters that are shared across tasks. Here, we reformulate the model-agnostic meta-learning algorithm (MAML) of Finn et al. (2017) as a method for probabilistic inference in a hierarchical Bayesian model. In contrast to prior methods for meta-learning via hierarchical Bayes, MAML is naturally applicable to complex function approximators through its use of a scalable gradient descent procedure for posterior inference. Furthermore, the identification of MAML as hierarchical Bayes provides a way to understand the algorithm’s operation as a meta-learning procedure, as well as an opportunity to make use of computational strategies for efficient inference. We use this opportunity to propose an improvement to the MAML algorithm that makes use of techniques from approximate inference and curvature estimation.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Summary The paper presents an interesting view on the recently proposed MAML formulation of meta-learning (Finn et al). The main contribution is a) insight into the connection between the MAML procedure and MAP estimation in an equivalent linear hierarchical Bayes model with explicit priors, b) insight into the connection between MAML and MAP estimation in non-linear HB models with implicit priors, c) based on these insights, the paper proposes a variant of MALM using a Laplace approximation (with additional approximations for the covariance matrix. The paper finally provides an evaluation on the mini ImageNet problem without significantly improving on the MAML results on the same task. Pro: - The topic is timely and of relevance to the ICLR community continuing a current trend in building meta-learning system for few-shot learning. 
- Provides valuable insight into the MAML objective and its relation to probabilistic models Con: - The paper is generally well-written but I find (as a non-meta-learner expert) that certain fundamental aspects could have been explained better or in more detail (see below for details). - The toy example is quite difficult to interpret the first time around and does not provide any empirical insight into the converge of the proposed method (compared to e.g. MAML) - I do not think the empirical results provide enough evidence that it is a useful/robust method. Especially it does not provide insight into which types of problems (small/large, linear/ non-linear) the method is applicable to. Detailed comments/questions: - The use of Laplace approximation is (in the paper) motivated from a probabilistic/Bayes and uncertainty point-of-view. It would, however, seem that the truncated iterations do not result in the approximation being very accurate during optimization as the truncation does not result in the approximation being created at a mode. Could the authors perhaps comment on: a) whether it is even meaningful to talk about the approximations as probabilistic distribution during the optimization (given the psd approximation to the Hessian), or does it only make sense after convergence? b) the consequence of the approximation errors on the general convergence of the proposed method (consistency and rate) - Sec 4.1, p5: Last equation: Perhaps useful to explain the term and why it is not in subroutine 4 . Should be ? - Sec 4.2: “A straightforward…”: I think it would improve readability to refer back to the to the previous equation (i.e. H) such that it is clear what is meant by “straightforward”. - Sec 4.2: Several ideas are being discussed in Sec 4.2 and it is not entirely clear to me what has actually been adopted here; perhaps consider formalizing the actual computations in Subroutine 4 – and provide a clearer argument (preferably proof) that this leads to consistent and robust estimator of .theta. - It is not clear from the text or experiment how the learning parameters are set. - Sec 5.1: It took some effort to understand exactly what was going on in the example and particular figure 5.1; e.g., in the model definition in the body text there is no mention of the NN mentioned/used in figure 5, the blue points are not defined in the caption, the terminology e.g. “pre-update density” is new at this point. I think it would benefit the readability to provide the reader with a bit more guidance. - Sec 5.1: While the qualitative example is useful (with a bit more text), I believe it would have been more convincing with a quantitative example to demonstrate e.g. the convergence of the proposal compared to std MAML and possibly compare to a std Bayesian inference method from the HB formulation of the problem (in the linear case) - Sec 5.2: The abstract clams increased performance over MAML but the empirical results do not seem to be significantly better than MAML ? I find it quite difficult to support the specific claim in the abstract from the results without adding a comment about the significance. - Sec 5.2: The authors have left out “Mishral et al” from the comparison due to the model being significantly larger than others. Could the authors provide insight into why they did not use the ResNet structure from the tcml paper in their L-MLMA scheme ? - Sec 6+7: The paper clearly states that it is not the aim to (generally) formulate the MAML as a HB. 
Given the advancement in gradient based inference for HB the last couple of years (e.g. variational, nested laplace , expectation propagation etc) for explicit models, could the authors perhaps indicate why they believe their approach of looking directly to the MAML objective is more scalable/useful than trying to formulate the same or similar objective in an explicit HB model and using established inference methods from that area ? Minor: - Sec 4.1 “…each integral in the sum in (2)…” eq 2 is a product", "role": "assistant" } ]
[ " Summary The paper presents an interesting view on the recently proposed MAML formulation of meta-learning (Finn et al).", "The main contribution is a) insight into the connection between the MAML procedure and MAP estimation in an equivalent linear hierarchical Bayes model with explicit priors, b) insight into the connection between MAML and MAP estimation in non-linear HB models with implicit priors, c) based on these insights, the paper proposes a variant of MALM using a Laplace approximation (with additional approximations for the covariance matrix.", "The paper finally provides an evaluation on the mini ImageNet problem without significantly improving on the MAML results on the same task.", "Pro: - The topic is timely and of relevance to the ICLR community continuing a current trend in building meta-learning system for few-shot learning.", "- Provides valuable insight into the MAML objective and its relation to probabilistic models Con: - The paper is generally well-written but I find (as a non-meta-learner expert) that certain fundamental aspects could have been explained better or in more detail (see below for details).", "- The toy example is quite difficult to interpret the first time around and does not provide any empirical insight into the converge of the proposed method (compared to e.g.", "MAML) - I do not think the empirical results provide enough evidence that it is a useful/robust method.", "Especially it does not provide insight into which types of problems (small/large, linear/ non-linear) the method is applicable to.", "Detailed comments/questions: - The use of Laplace approximation is (in the paper) motivated from a probabilistic/Bayes and uncertainty point-of-view.", "It would, however, seem that the truncated iterations do not result in the approximation being very accurate during optimization as the truncation does not result in the approximation being created at a mode.", "Could the authors perhaps comment on: a) whether it is even meaningful to talk about the approximations as probabilistic distribution during the optimization (given the psd approximation to the Hessian), or does it only make sense after convergence?", "b) the consequence of the approximation errors on the general convergence of the proposed method (consistency and rate) - Sec 4.1, p5: Last equation: Perhaps useful to explain the term and why it is not in subroutine 4 .", "Should be ?", "- Sec 4.2: “A straightforward…”: I think it would improve readability to refer back to the to the previous equation (i.e.", "H) such that it is clear what is meant by “straightforward”.", "- Sec 4.2: Several ideas are being discussed in Sec 4.2 and it is not entirely clear to me what has actually been adopted here; perhaps consider formalizing the actual computations in Subroutine 4 – and provide a clearer argument (preferably proof) that this leads to consistent and robust estimator of .theta.", "- It is not clear from the text or experiment how the learning parameters are set.", "- Sec 5.1: It took some effort to understand exactly what was going on in the example and particular figure 5.1; e.g., in the model definition in the body text there is no mention of the NN mentioned/used in figure 5, the blue points are not defined in the caption, the terminology e.g.", "“pre-update density” is new at this point.", "I think it would benefit the readability to provide the reader with a bit more guidance.", "- Sec 5.1: While the qualitative example is useful (with a bit more text), I believe it would have been more 
convincing with a quantitative example to demonstrate e.g.", "the convergence of the proposal compared to std MAML and possibly compare to a std Bayesian inference method from the HB formulation of the problem (in the linear case) - Sec 5.2: The abstract clams increased performance over MAML but the empirical results do not seem to be significantly better than MAML ?", "I find it quite difficult to support the specific claim in the abstract from the results without adding a comment about the significance.", "- Sec 5.2: The authors have left out “Mishral et al” from the comparison due to the model being significantly larger than others.", "Could the authors provide insight into why they did not use the ResNet structure from the tcml paper in their L-MLMA scheme ?", "- Sec 6+7: The paper clearly states that it is not the aim to (generally) formulate the MAML as a HB.", "Given the advancement in gradient based inference for HB the last couple of years (e.g.", "variational, nested laplace , expectation propagation etc) for explicit models, could the authors perhaps indicate why they believe their approach of looking directly to the MAML objective is more scalable/useful than trying to formulate the same or similar objective in an explicit HB model and using established inference methods from that area ?", "Minor: - Sec 4.1 “…each integral in the sum in (2)…” eq 2 is a product" ]
{ "criticism": 8, "example": 7, "importance_and_relevance": 7, "materials_and_methods": 18, "praise": 3, "presentation_and_reporting": 8, "results_and_discussion": 12, "suggestion_and_solution": 9, "total": 29 }
0.275862
0.241379
0.241379
0.62069
0.103448
0.275862
0.413793
0.310345
29
1
1
1
1
1
1
1
1
2.482759
-1.557543
4.040302
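To make the Laplace discussion in the review above concrete, the toy sketch below augments a post-adaptation loss with a log-determinant curvature term at the adapted parameters. The exact Hessian, the identity damping term, and the quadratic task are illustrative assumptions; the paper itself relies on cheaper curvature approximations, and, as the reviewer notes, a Laplace approximation built at a truncated (non-converged) point is only heuristic.

```python
import torch
from torch.autograd.functional import hessian

# Toy sketch of a Laplace-style correction: after a truncated inner
# adaptation, the objective gains a log-determinant curvature term at the
# adapted point. Exact Hessian, damping, and the quadratic task are all
# assumptions; the paper uses cheaper curvature approximations.
x, y = torch.randn(20, 3), torch.randn(20)

def task_loss(w):
    return ((x @ w - y) ** 2).mean()

phi = torch.zeros(3, requires_grad=True)
for _ in range(5):                                  # truncated inner adaptation
    g = torch.autograd.grad(task_loss(phi), phi)[0]
    with torch.no_grad():
        phi -= 0.1 * g

H = hessian(task_loss, phi.detach())                # curvature at the adapted point
laplace_objective = task_loss(phi) + 0.5 * torch.logdet(H + torch.eye(3))
```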
Toward learning better metrics for sequence generation training with policy gradient
Designing a metric manually for unsupervised sequence generation tasks, such as text generation, is essentially difficult. In such a situation, learning a metric of a sequence from data is one possible solution. The previous study, SeqGAN, proposed a framework for unsupervised sequence generation in which a metric is learned from data, and a generator is optimized with regard to the learned metric with policy gradient, inspired by generative adversarial nets (GANs) and reinforcement learning. In this paper, we make two proposals to learn a better metric than SeqGAN's: the partial reward function and expert-based reward function training. The partial reward function is a reward function for a partial sequence of a certain length. SeqGAN employs a reward function for completed sequences only. By combining long-scale and short-scale partial reward functions, we expect a learned metric to be able to evaluate partial correctness as well as the coherence of a sequence as a whole. In expert-based reward function training, a reward function is trained to discriminate between an expert (or true) sequence and a fake sequence that is produced by editing an expert sequence. Expert-based reward function training is not a kind of GAN framework. This makes the optimization of the generator easier. We examine the effect of the partial reward function and expert-based reward function training on synthetic data and real text data, and show improvements over SeqGAN and the model trained with MLE. Specifically, whereas SeqGAN gains a 0.42 improvement of NLL over MLE on synthetic data, our best model gains a 3.02 improvement, and whereas SeqGAN gains a 0.029 improvement of BLEU over MLE, our best model gains a 0.250 improvement.
This paper describes an approach to generating time sequences by learning state-action values, where the state is the sequence generated so far, and the action is the choice of the next value. Local and global reward functions are learned from existing data sequences, and the Q-function is then learned with policy gradient. Unfortunately, this description is a little vague, because the paper's details are quite difficult to understand. Though the approach is interesting, and the experiments are promising, important explanation is missing or muddled. Perhaps most confusing is the loss function in equation 7, which is quite inadequately explained. This paper could be interesting, but substantial editing is needed before it is sufficient for publication.
iclr2018
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nToward learning better metrics for sequence generation training with policy gradient\nDesigning a metric manually for unsupervised sequence generation tasks, such as text generation, is essentially difficult. In a such situation, learning a metric of a sequence from data is one possible solution. The previous study, SeqGAN, proposed the framework for unsupervised sequence generation, in which a metric is learned from data, and a generator is optimized with regard to the learned metric with policy gradient, inspired by generative adversarial nets (GANs) and reinforcement learning. In this paper, we make two proposals to learn better metric than SeqGAN*s: partial reward function and expert-based reward function training. The partial reward function is a reward function for a partial sequence of a certain length. SeqGAN employs a reward function for completed sequence only. By combining long-scale and short-scale partial reward functions, we expect a learned metric to be able to evaluate a partial correctness as well as a coherence of a sequence, as a whole. In expert-based reward function training, a reward function is trained to discriminate between an expert (or true) sequence and a fake sequence that is produced by editing an expert sequence. Expert-based reward function training is not a kind of GAN frameworks. This makes the optimization of the generator easier. We examine the effect of the partial reward function and expert-based reward function training on synthetic data and real text data, and show improvements over SeqGAN and the model trained with MLE. Specifically, whereas SeqGAN gains 0.42 improvement of NLL over MLE on synthetic data, our best model gains 3.02 improvement, and whereas SeqGAN gains 0.029 improvement of BLEU over MLE, our best model gains 0.250 improvement.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper describes an approach to generating time sequences by learning state-action values, where the state is the sequence generated so far, and the action is the choice of the next value. Local and global reward functions are learned from existing data sequences and then the Q-function learned from a policy gradient. Unfortunately, this description is a little vague, because the paper*s details are quite difficult to understand. Though the approach is interesting, and the experiments are promising, important explanation is missing or muddled. Perhaps most confusing is the loss function in equation 7, which is quite inadequately explained. This paper could be interesting, but substantial editing is needed before it is sufficient for publication.", "role": "assistant" } ]
[ " This paper describes an approach to generating time sequences by learning state-action values, where the state is the sequence generated so far, and the action is the choice of the next value.", "Local and global reward functions are learned from existing data sequences and then the Q-function learned from a policy gradient.", "Unfortunately, this description is a little vague, because the paper*s details are quite difficult to understand.", "Though the approach is interesting, and the experiments are promising, important explanation is missing or muddled.", "Perhaps most confusing is the loss function in equation 7, which is quite inadequately explained.", "This paper could be interesting, but substantial editing is needed before it is sufficient for publication." ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 2, "praise": 2, "presentation_and_reporting": 3, "results_and_discussion": 1, "suggestion_and_solution": 1, "total": 6 }
0.5
0
0.333333
0.333333
0.333333
0.5
0.166667
0.166667
6
1
0
1
1
1
1
1
1
2.333333
1.559994
0.773339
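A rough sketch of the multi-scale "partial reward" idea from this record's abstract, for orientation: reward scores on short windows and on the full sampled sequence are combined into a single return for a REINFORCE-style generator update. The generator, the stand-in reward function, and the window length are all assumptions made for illustration, not the paper's trained components.

```python
import torch
import torch.nn as nn

# Toy sketch of multi-scale partial rewards with a REINFORCE update.
# All module shapes and window lengths are invented for illustration.
vocab, hidden, T = 30, 64, 16
gen = nn.GRU(vocab, hidden, batch_first=True)
head = nn.Linear(hidden, vocab)

def partial_reward(window):              # stand-in for a trained reward net
    return window.float().mean()

x, h = torch.zeros(1, 1, vocab), None
logps, tokens = [], []
for _ in range(T):                       # sample a sequence, keeping log-probs
    out, h = gen(x, h)
    dist = torch.distributions.Categorical(logits=head(out[0, -1]))
    tok = dist.sample()
    logps.append(dist.log_prob(tok))
    tokens.append(tok)
    x = torch.eye(vocab)[tok].view(1, 1, vocab)

seq = torch.stack(tokens)
reward = sum(partial_reward(seq[i:i + 4]) for i in range(0, T, 4))  # short scale
reward = reward + partial_reward(seq)                               # long scale
loss = -reward * torch.stack(logps).sum()   # REINFORCE surrogate
loss.backward()
```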
Dynamic Evaluation of Neural Sequence Models
We present methodology for using dynamic evaluation to improve neural sequence models. Models are adapted to recent history via a gradient descent based mechanism, causing them to assign higher probabilities to re-occurring sequential patterns. Dynamic evaluation outperforms existing adaptation approaches in our comparisons. Dynamic evaluation improves the state-of-the-art word-level perplexities on the Penn Treebank and WikiText-2 datasets to 51.1 and 44.3 respectively, and the state-of-the-art character-level cross-entropies on the text8 and Hutter Prize datasets to 1.19 bits/char and 1.08 bits/char respectively.
The authors provide an improved implementation of the idea of dynamic evaluation, where the update of the parameters used in the last time step proposed in (Mikolov et al. 2010) is replaced with back-propagation through the last few time steps, and uses RMSprop rather than vanilla SGD. The method is applied to word-level and character-level language modeling, where it yields some gains in perplexity. The algorithm also appears able to perform domain adaptation, in a setting where a character-level language model trained mostly on English manages to quickly adapt to a Spanish test set. While the general idea is not novel, the implementation choices matter, and the authors provide one which appears to work well with recently proposed models. The character-level experiments on the multiplicative LSTM make the most convincing point, providing a significant improvement over already good results on medium-size data sets. Figure 2 also makes a strong case for the method's suitability for applications where domain adaptation is important. The paper's weakest part is the word-level language modeling section. Given the small size of the data sets considered, the results provided are of limited use, especially since the development set is used to fit the RMSprop hyper-parameters. How sensitive are the final results to this choice? Comparing dynamic evaluation to neural cache models is a good idea, given how both depend on medium-term history: (Grave et al. 2017) provide results on the larger text8 and wiki103, so it would be useful to see results for dynamic evaluation at least on the former. An indication of the actual additional evaluation time for word-level, char-level and sparse char-level dynamic evaluation would also be welcome. Pros: - Good new implementation of an existing idea - Significant perplexity gains on character-level language modeling - Good at domain adaptation Cons: - Memory requirements of the method - Word-level language modeling experiments need to be run on larger data sets (Edit: the authors did respond satisfactorily to the original concern about the size of the word-level data set)
iclr2018
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDynamic Evaluation of Neural Sequence Models\nWe present methodology for using dynamic evaluation to improve neural sequence models. Models are adapted to recent history via a gradient descent based mechanism, causing them to assign higher probabilities to re-occurring sequential patterns. Dynamic evaluation outperforms existing adaptation approaches in our comparisons. Dynamic evaluation improves the state-of-the-art word-level perplexities on the Penn Treebank and WikiText-2 datasets to 51.1 and 44.3 respectively, and the state-of-the-art character-level cross-entropies on the text8 and Hutter Prize datasets to 1.19 bits/char and 1.08 bits/char respectively.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors provide an improved implementation of the idea of dynamic evaluation, where the update of the parameters used in the last time step proposed in (Mikolov et al. 2010) is replaced with a back-propagation through the last few time steps, and uses RMSprop rather than vanilla SGD. The method is applied to word level and character level language modeling where it yields some gains in perplexity. The algorithm also appears able to perform domain adaptation, in a setting where a character-level language model trained mostly on English manages to quickly adapt to a Spanish test set. While the general idea is not novel, the implementation choices matter, and the authors provide one which appears to work well with recently proposed models. The character level experiments on the multiplicative LSTM make the most convincing point, providing a significant improvement over already good results on medium size data sets. Figure 2 also makes a strong case for the method*s suitability for applications where domain adaptation is important. The paper*s weakest part is the word level language modeling section. Given the small size of the data sets considered, the results provided are of limited use, especially since the development set is used to fit the RMSprop hyper-parameters. 
How sensitive are the final results to this choice? Comparing dynamic evaluation to neural cache models is a good idea, given how both depend en medium-term history: (Grave et al. 2017) provide results on the larger text8 and wiki103, it would be useful to see results for dynamic evaluation at least on the former. An indication of the actual additional evaluation time for word-level, char-level and sparse char-level dynamic evaluation would also be welcome. Pros: - Good new implementation of an existing idea - Significant perplexity gains on character level language modeling - Good at domain adaptation Cons: - Memory requirements of the method - Word-level language modeling experiments need to be run on larger data sets (Edit: the authors did respond satisfactorily to the original concern about the size of the word-level data set)", "role": "assistant" } ]
[ " The authors provide an improved implementation of the idea of dynamic evaluation, where the update of the parameters used in the last time step proposed in (Mikolov et al.", "2010) is replaced with a back-propagation through the last few time steps, and uses RMSprop rather than vanilla SGD.", "The method is applied to word level and character level language modeling where it yields some gains in perplexity.", "The algorithm also appears able to perform domain adaptation, in a setting where a character-level language model trained mostly on English manages to quickly adapt to a Spanish test set.", "While the general idea is not novel, the implementation choices matter, and the authors provide one which appears to work well with recently proposed models.", "The character level experiments on the multiplicative LSTM make the most convincing point, providing a significant improvement over already good results on medium size data sets.", "Figure 2 also makes a strong case for the method*s suitability for applications where domain adaptation is important.", "The paper*s weakest part is the word level language modeling section.", "Given the small size of the data sets considered, the results provided are of limited use, especially since the development set is used to fit the RMSprop hyper-parameters.", "How sensitive are the final results to this choice?", "Comparing dynamic evaluation to neural cache models is a good idea, given how both depend en medium-term history: (Grave et al.", "2017) provide results on the larger text8 and wiki103, it would be useful to see results for dynamic evaluation at least on the former.", "An indication of the actual additional evaluation time for word-level, char-level and sparse char-level dynamic evaluation would also be welcome.", "Pros: - Good new implementation of an existing idea - Significant perplexity gains on character level language modeling - Good at domain adaptation Cons: - Memory requirements of the method - Word-level language modeling experiments need to be run on larger data sets (Edit: the authors did respond satisfactorily to the original concern about the size of the word-level data set)" ]
{ "criticism": 2, "example": 1, "importance_and_relevance": 3, "materials_and_methods": 12, "praise": 4, "presentation_and_reporting": 5, "results_and_discussion": 4, "suggestion_and_solution": 4, "total": 14 }
0.142857
0.071429
0.214286
0.857143
0.285714
0.357143
0.285714
0.285714
14
1
1
1
1
1
1
1
1
2.5
2.484218
0.015782
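The unlabeled numeric lines in each record appear to be derived from its criteria_count entry. A minimal sketch of that derivation, assuming (from the values above, which all check out) that each ratio is count/total, each has_* flag marks a positive count, reward_value is the sum of the eight ratios, and reward_value_length_adjusted subtracts length_penalty:

```python
CATEGORIES = ["criticism", "example", "importance_and_relevance",
              "materials_and_methods", "praise", "presentation_and_reporting",
              "results_and_discussion", "suggestion_and_solution"]

def derive_record_fields(criteria_count, length_penalty):
    """Assumed reconstruction of the per-record numeric columns; not official code."""
    total = criteria_count["total"]
    ratios = [criteria_count[c] / total for c in CATEGORIES]
    flags = [int(criteria_count[c] > 0) for c in CATEGORIES]
    reward = sum(ratios)
    return ratios, total, flags, reward, reward - length_penalty

# For the record above: criticism 2/14 = 0.142857, ..., reward_value = 2.5,
# and 2.5 - 0.015782 = 2.484218, matching the listed values.
```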
Stabilizing Adversarial Nets with Prediction Methods
Adversarial neural networks solve many important problems in data science, but are notoriously difficult to train. These difficulties come from the fact that optimal weights for adversarial nets correspond to saddle points, and not minimizers, of the loss function. The alternating stochastic gradient methods typically used for such problems do not reliably converge to saddle points, and when convergence does happen it is often highly sensitive to learning rates. We propose a simple modification of stochastic gradient descent that stabilizes adversarial networks. We show, both in theory and practice, that the proposed method reliably converges to saddle points. This makes adversarial networks less likely to "collapse," and enables faster training with larger learning rates.
This paper proposes a simple modification to the standard alternating stochastic gradient method for GAN training, which stabilizes training, by adding a prediction step. This is a clever and useful idea, and the paper is very well written. The proposed method is very clearly motivated, both intuitively and mathematically, and the authors also provide theoretical guarantees on its convergence behavior. I particularly liked the analogy with the damped harmonic oscillator. The experiments are well designed and provide clear evidence in favor of the usefulness of the proposed technique. I believe that the method proposed in this paper will have a significant impact in the area of GAN training. I have only one minor question: in the prediction step, why not use a step size such that the "amount of prediction" may be adjusted?
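For readers who want to see the prediction step concretely, here is a hedged sketch of the update described in the abstract, not the authors' implementation; the reviewer's question corresponds to the gamma factor below, which the basic method fixes at 1.

```python
def prediction_step_update(u, v, grad_u, grad_v, alpha, beta, gamma=1.0):
    """One alternating update for a saddle-point loss L(u, v).

    The first player takes an ordinary gradient step; the second player then
    responds to a *predicted* (extrapolated) copy of the first player's weights.
    """
    u_new = u - alpha * grad_u(u, v)         # descent step for player u
    u_bar = u_new + gamma * (u_new - u)      # prediction (extrapolation) step
    v_new = v + beta * grad_v(u_bar, v)      # ascent step against the predicted u
    return u_new, v_new

# Toy bilinear game L(u, v) = u * v, whose saddle point is (0, 0):
u, v = 1.0, 1.0
for _ in range(1000):
    u, v = prediction_step_update(u, v, lambda u, v: v, lambda u, v: u,
                                  alpha=0.1, beta=0.1)
print(round(u, 4), round(v, 4))  # both shrink toward the saddle point
```

On this toy game the plain alternating update (gamma = 0) merely circles the saddle point, while the predicted update contracts toward it, which is the stabilization effect the abstract claims.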
iclr2018
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nStabilizing Adversarial Nets with Prediction Methods\nAdversarial neural networks solve many important problems in data science, but are notoriously difficult to train. These difficulties come from the fact that optimal weights for adversarial nets correspond to saddle points, and not minimizers, of the loss function. The alternating stochastic gradient methods typically used for such problems do not reliably converge to saddle points, and when convergence does happen it is often highly sensitive to learning rates. We propose a simple modification of stochastic gradient descent that stabilizes adversarial networks. We show, both in theory and practice, that the proposed method reliably converges to saddle points. This makes adversarial networks less likely to \"collapse,\" and enables faster training with larger learning rates.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes a simple modification to the standard alternating stochastic gradient method for GAN training, which stabilizes training, by adding a prediction step. This is a clever and useful idea, and the paper is very well written. The proposed method is very clearly motivated, both intuitively and mathematically, and the authors also provide theoretical guarantees on its convergence behavior. I particularly liked the analogy with the damped harmonic oscillator. The experiments are well designed and provide clear evidence in favor of the usefulness of the proposed technique. I believe that the method proposed in this paper will have a significant impact in the area of GAN training. I have only one minor question: in the prediction step, why not use a step size, say , such that the \"amount of predition\" may be adjusted?", "role": "assistant" } ]
[ " This paper proposes a simple modification to the standard alternating stochastic gradient method for GAN training, which stabilizes training, by adding a prediction step.", "This is a clever and useful idea, and the paper is very well written.", "The proposed method is very clearly motivated, both intuitively and mathematically, and the authors also provide theoretical guarantees on its convergence behavior.", "I particularly liked the analogy with the damped harmonic oscillator.", "The experiments are well designed and provide clear evidence in favor of the usefulness of the proposed technique.", "I believe that the method proposed in this paper will have a significant impact in the area of GAN training.", "I have only one minor question: in the prediction step, why not use a step size, say , such that the \"amount of predition\" may be adjusted?" ]
{ "criticism": 0, "example": 0, "importance_and_relevance": 3, "materials_and_methods": 5, "praise": 3, "presentation_and_reporting": 2, "results_and_discussion": 2, "suggestion_and_solution": 1, "total": 7 }
0
0
0.428571
0.714286
0.428571
0.285714
0.285714
0.142857
7
0
0
1
1
1
1
1
1
2.285714
1.717547
0.568167
Interpretable and Pedagogical Examples
Teachers intentionally pick the most informative examples to show their students. However, if the teacher and student are neural networks, the examples that the teacher network learns to give, although effective at teaching the student, are typically uninterpretable. We show that training the student and teacher iteratively, rather than jointly, can produce interpretable teaching strategies. We evaluate interpretability by (1) measuring the similarity of the teacher's emergent strategies to intuitive strategies in each domain and (2) conducting human experiments to evaluate how effective the teacher's strategies are at teaching humans. We show that the teacher network learns to select or generate interpretable, pedagogical examples to teach rule-based, probabilistic, boolean, and hierarchical concepts.
This is a well written paper on a compelling topic: how to train "an automated teacher" to use intuitive strategies that would also apply to humans. The introduction is fairly strong, but this reviewer wishes the authors had come up with an intuitive example that illustrates why the strategy "1) train S on random exs; 2) train T to pick exs for S" makes sense. Such an example would dramatically improve the paper's readability. The paper appears to be original, and the related work section is quite extensive. A second significant improvement would be to add an in-depth running example in Section 3, so that the authors could illustrate why the BR strategy makes sense (Algorithm 2).
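To make the quoted iterative scheme concrete, here is a hypothetical sketch; the student/teacher interfaces (update, pick_examples, infer) and the sample_random helper are illustrative placeholders, not the paper's API.

```python
def train_iteratively(student, teacher, concepts, sample_random, n_rounds=2):
    """Sketch of iterative (rather than joint) teacher-student training."""
    for _ in range(n_rounds):
        # 1) Train S on random examples: the student learns to infer a concept
        #    from examples that were not chosen strategically.
        for concept in concepts:
            student.update(sample_random(concept), target=concept)
        # 2) Train T to pick examples for S: the student is held fixed while
        #    the teacher learns which examples make it infer the right concept.
        for concept in concepts:
            examples = teacher.pick_examples(concept)
            guess = student.infer(examples)
            teacher.update(examples, guess, target=concept)
    return student, teacher
```

The intuition suggested by the abstract is that, because the student is never trained against the teacher's chosen examples, the pair cannot drift into an uninterpretable private code, so the teacher's strategies stay close to intuitive ones.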
iclr2018
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nInterpretable and Pedagogical Examples\nTeachers intentionally pick the most informative examples to show their students. However, if the teacher and student are neural networks, the examples that the teacher network learns to give, although effective at teaching the student, are typically uninterpretable. We show that training the student and teacher iteratively, rather than jointly, can produce interpretable teaching strategies. We evaluate interpretability by (1) measuring the similarity of the teacher*s emergent strategies to intuitive strategies in each domain and (2) conducting human experiments to evaluate how effective the teacher*s strategies are at teaching humans. We show that the teacher network learns to select or generate interpretable, pedagogical examples to teach rule-based, probabilistic, boolean, and hierarchical concepts.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This is a well written paper on a compelling topic: how to train \"an automated teacher\" to use intuitive strategies that would also apply to humans. The introduction is fairly strong, but this reviewer wishes that the authors would have come up with an intuitive example that illustrates why the strategy \"1) train S on random exs; 2) train T to pick exs for S\" makes sense. Such an example would dramatically improve the paper*s readability. The paper appears to be original, and the related work section is quite extensive. A second significant improvement would be to add an in-depth running example in section 3, so that the authors could illustrate why the BR strategy makes sense (Algorithm 2).", "role": "assistant" } ]
[ " This is a well written paper on a compelling topic: how to train \"an automated teacher\" to use intuitive strategies that would also apply to humans.", "The introduction is fairly strong, but this reviewer wishes that the authors would have come up with an intuitive example that illustrates why the strategy \"1) train S on random exs; 2) train T to pick exs for S\" makes sense.", "Such an example would dramatically improve the paper*s readability.", "The paper appears to be original, and the related work section is quite extensive.", "A second significant improvement would be to add an in-depth running example in section 3, so that the authors could illustrate why the BR strategy makes sense (Algorithm 2)." ]
{ "criticism": 1, "example": 2, "importance_and_relevance": 1, "materials_and_methods": 0, "praise": 3, "presentation_and_reporting": 3, "results_and_discussion": 1, "suggestion_and_solution": 3, "total": 5 }
0.2
0.4
0.2
0
0.6
0.6
0.2
0.6
5
1
1
1
0
1
1
1
1
2.8
1.789924
1.010076
TESLA: Task-wise Early Stopping and Loss Aggregation for Dynamic Neural Network Inference
For inference operations in deep neural networks on end devices, it is desirable to deploy a single pre-trained neural network model, which can dynamically scale across a computation range without compromising accuracy. To achieve this goal, Incomplete Dot Product (IDP) has been proposed to use only a subset of terms in dot products during forward propagation. However, there are some limitations, including noticeable performance degradation in operating regions with low computational costs, and essential performance limitations since IDP uses hand-crafted profile coefficients. In this paper, we extend IDP by proposing new training algorithms involving a single profile, which may be trainable or pre-determined, to significantly improve the overall performance, especially in operating regions with low computational costs. Specifically, we propose the Task-wise Early Stopping and Loss Aggregation (TESLA) algorithm, which we show with a 3-layer multilayer perceptron on MNIST to outperform the original IDP by 32% when only 10% of dot product terms are used, achieving 94.7% accuracy on average. By introducing trainable profile coefficients, TESLA further improves the accuracy to 95.5% without specifying coefficients in advance. Moreover, TESLA is applied to the VGG-16 model, which achieves 80% accuracy using only 20% of dot product terms on CIFAR-10 and also keeps 60% accuracy using only 30% of dot product terms on CIFAR-100, whereas the original IDP performs like a random guess on these two datasets at such low computation costs. Finally, we visualize the learned representations at different dot product percentages by class activation maps and show that, by applying TESLA, the learned representations can adapt over a wide range of operating regions.
This paper presents a modification of a numerical technique, Incomplete Dot Product (IDP), that allows a trained network to be used under different hardware constraints. The IDP method works by incorporating a *coefficient* into each layer (fully connected or convolutional), which can be learned as the weights of the model are being optimized. These coefficients can be used to prune subsets of the nodes or filters when hardware has limited computational capacity. The original IDP method (cited in the paper) is based on iteratively training for higher hardware capacities. This paper improves upon the limitation of the original IDP by allowing the weights of the network to be trained concurrently with these coefficients, and the authors present a loss function that is a linear combination of the loss functions under the original and constrained network settings. They also present results for a *harmonic* combination, which was not explained in the paper at all. Overall the paper has very good motivation and significance. However, the writing is not very clear and the paper is not self-contained at all. I was not able to understand the significance of early stopping, how this connects with loss aggregation, and how the learning process differs from the original IDP paper, if they also have a scheduled learning setting. Additionally, there were several terms that were unexplained in this paper, such as the *harmonic* method highlighted in Figure 3. As is, while results are promising, I can't fully assess that the paper has major contributions.
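Since the review leans on how IDP works, a small illustration may help. This is a hedged reconstruction from the abstract, not the authors' code: an incomplete dot product keeps only the first fraction p of terms, each scaled by a profile coefficient.

```python
import numpy as np

def incomplete_dot_product(w, x, gamma, p):
    """Sketch of IDP: evaluate only the first ceil(p * n) terms of a dot
    product, each weighted by a profile coefficient gamma[i] (hand-crafted
    in the original IDP, trainable under TESLA)."""
    k = int(np.ceil(p * len(w)))                 # terms kept at this compute budget
    return float(np.sum(gamma[:k] * w[:k] * x[:k]))

rng = np.random.default_rng(0)
w, x = rng.normal(size=100), rng.normal(size=100)
gamma = np.linspace(1.0, 0.1, 100)               # an illustrative decaying profile
print(incomplete_dot_product(w, x, gamma, p=0.1))  # uses only 10% of the terms
```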
iclr2018
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTESLA: Task-wise Early Stopping and Loss Aggregation for Dynamic Neural Network Inference\nFor inference operations in deep neural networks on end devices, it is desirable to deploy a single pre-trained neural network model, which can dynamically scale across a computation range without comprising accuracy. To achieve this goal, Incomplete Dot Product (IDP) has been proposed to use only a subset of terms in dot products during forward propagation. However, there are some limitations, including noticeable performance degradation in operating regions with low computational costs, and essential performance limitations since IDP uses hand-crafted profile coefficients. In this paper, we extend IDP by proposing new training algorithms involving a single profile, which may be trainable or pre-determined, to significantly improve the overall performance, especially in operating regions with low computational costs. Specifically, we propose the Task-wise Early Stopping and Loss Aggregation (TESLA) algorithm, which is showed in our 3-layer multilayer perceptron on MNIST that outperforms the original IDP by 32\\% when only 10\\% of dot products terms are used and achieves 94.7\\% accuracy on average. By introducing trainable profile coefficients, TESLA further improves the accuracy to 95.5\\% without specifying coefficients in advance. Besides, TESLA is applied to the VGG-16 model, which achieves 80\\% accuracy using only 20\\% of dot product terms on CIFAR-10 and also keeps 60\\% accuracy using only 30\\% of dot product terms on CIFAR-100, but the original IDP performs like a random guess in these two datasets at such low computation costs. Finally, we visualize the learned representations at different dot product percentages by class activation map and show that, by applying TESLA, the learned representations can adapt over a wide range of operation regions.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper presents a modification of a numerical technique, Incomplete Dot Product (IDP), that allows a trained network to be used under different hardware constraints. The IDP method works by incorporating a *coefficient* into each layer (fully connected or convolutional), which can be learned as the weights of the model are being optimized. These coefficients can be used to prune subsets of the nodes or filters when hardware has limited computational capacity. The original IDP method (cited in the paper) is based on iteratively training for higher hardware capacities. This paper improves upon the limitation of the original IDP by allowing the weights of the network to be trained concurrently with these coefficients, and the authors present a loss function that is a linear combination of the loss functions under the original and constrained network settings. They also present results for a *harmonic* combination, which was not explained in the paper at all. Overall the paper has very good motivation and significance. However, the writing is not very clear and the paper is not self-contained at all. I was not able to understand the significance of early stopping, how this connects with loss aggregation, and how the learning process differs from the original IDP paper, if they also have a scheduled learning setting. Additionally, there were several terms that were unexplained in this paper, such as the *harmonic* method highlighted in Figure 3. As is, while results are promising, I can't fully assess that the paper has major contributions.", "role": "assistant" } ]
[ " This paper presents a modification of a numeric solution: Incomplete Dot Product (IDP), that allows a trained network to be used under different hardware constraints.", "The IDP method works by incorporating a *coefficient* to each layer (fully connected or convolution), which can be learned as the weights of the model are being optimized.", "These coefficients can be used to prune subsets of the nodes or filters, when hardware has limited computational capacity.", "The original IDP method (cited in the paper) is based on iteratively training for higher hardware capacities.", "This paper improves upon the limitation of the original IDP by allowing the weights of the network be trained concurrently with these coefficients, and authors present a loss function that is linear combination of loss function under original or constrained network setting.", "They also present results for a *harmonic* combination which was not explained in the paper at all.", "Overall the paper has very good motivation and significance.", "However the writing is not very clear and the paper is not self-contained at all.", "I was not able to understand the significance of early stopping and how this connects with loss aggregation, and how the learning process differs from the original IDP paper, if they also have a scheduled learning setting.", "Additionally, there were several terms that were unexplained in this paper such as *harmonic* method highlighted in Figure 3.", "As is, while results are promising, I can*t fully assess that the paper has major contributions." ]
{ "criticism": 5, "example": 1, "importance_and_relevance": 2, "materials_and_methods": 6, "praise": 2, "presentation_and_reporting": 3, "results_and_discussion": 4, "suggestion_and_solution": 1, "total": 11 }
0.454545
0.090909
0.181818
0.545455
0.181818
0.272727
0.363636
0.090909
11
1
1
1
1
1
1
1
1
2.181818
2.118688
0.06313
Iterative temporal differencing with fixed random feedback alignment support spike-time dependent plasticity in vanilla backpropagation for deep learning
In vanilla backpropagation (VBP), the activation function matters considerably in terms of non-linearity and differentiability. Vanishing gradients have been an important problem related to bad choices of activation function in deep learning (DL). This work shows that a differentiable activation function is no longer necessary for error backpropagation. The derivative of the activation function can be replaced by an iterative temporal differencing (ITD) using fixed random feedback weight alignment (FBA). Using FBA with ITD, we can transform VBP into a more biologically plausible approach for learning deep neural network architectures. We don't claim that ITD works exactly the same as spike-time dependent plasticity (STDP) in our brain, but this work can be a step toward the integration of STDP-based error backpropagation in deep learning.
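A hedged sketch of the mechanism the abstract describes, not the authors' implementation: feedback alignment propagates the error through a fixed random matrix B instead of the transposed forward weights, and the activation derivative is replaced by a temporal difference of activations (the exact form of the difference is an assumption here).

```python
import numpy as np

def fa_itd_backward(delta_out, B, h_now, h_prev):
    """Backward signal for one hidden layer under FBA + ITD.

    B:             fixed random feedback matrix (replaces W.T from backprop)
    h_now, h_prev: the layer's activations at two successive passes; their
                   difference stands in for the derivative f'(a) (assumed form).
    """
    return (B @ delta_out) * (h_now - h_prev)

rng = np.random.default_rng(0)
B = rng.normal(size=(5, 3)) / np.sqrt(3)   # fixed for the whole of training
delta = fa_itd_backward(rng.normal(size=3), B,
                        rng.normal(size=5), rng.normal(size=5))
print(delta.shape)  # (5,) -- an error signal without a differentiable activation
```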
The paper is incomplete and nowhere near finished; it should have been withdrawn. The theoretical results are presented in a bitmap figure and only referred to in the text (not explained), and the results on datasets are not explained either (and pretty bad). A waste of my time.
iclr2018
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nIterative temporal differencing with fixed random feedback alignment support spike-time dependent plasticity in vanilla backpropagation for deep learning\nIn vanilla backpropagation (VBP), activation function matters considerably in terms of non-linearity and differentiability. Vanishing gradient has been an important problem related to the bad choice of activation function in deep learning (DL). This work shows that a differentiable activation function is not necessary any more for error backpropagation. The derivative of the activation function can be replaced by an iterative temporal differencing (ITD) using fixed random feedback weight alignment (FBA). Using FBA with ITD, we can transform the VBP into a more biologically plausible approach for learning deep neural network architectures. We don*t claim that ITD works completely the same as the spike-time dependent plasticity (STDP) in our brain but this work can be a step toward the integration of STDP-based error backpropagation in deep learning.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper is incomplete and nowhere near finished, it should have been withdrawn. The theoretical results are presented in a bitmap figure and only referred to in the text (not explained), and the results on datasets are not explained either (and pretty bad). A waste of my time.", "role": "assistant" } ]
[ " The paper is incomplete and nowhere near finished, it should have been withdrawn.", "The theoretical results are presented in a bitmap figure and only referred to in the text (not explained), and the results on datasets are not explained either (and pretty bad).", "A waste of my time." ]
{ "criticism": 3, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 1, "praise": 0, "presentation_and_reporting": 2, "results_and_discussion": 1, "suggestion_and_solution": 1, "total": 3 }
1
0
0
0.333333
0
0.666667
0.333333
0.333333
3
1
0
0
1
0
1
1
1
2.666667
1.088424
1.578243
Character Level Based Detection of DGA Domain Names
Recently several different deep learning architectures have been proposed that take a string of characters as the raw input signal and automatically derive features for text classification. Few studies are available that compare the effectiveness of these approaches to character-based text classification with each other. In this paper we perform such an empirical comparison for the important cybersecurity problem of DGA detection: classifying domain names as either benign or produced by malware (i.e., by a Domain Generation Algorithm). Training and evaluating on a dataset with 2M domain names shows that there is surprisingly little difference between various convolutional neural network (CNN) and recurrent neural network (RNN) based architectures in terms of accuracy, prompting a preference for the simpler architectures, since they are faster to train and less prone to overfitting.
This paper proposes to automatically recognize domain names as malicious or benign by deep networks (convnets and RNNs) trained to directly classify the character sequence as such. Pros The paper addresses an important application of deep networks, comparing the performance of a variety of different types of model architectures. The tested networks seem to perform reasonably well on the task. Cons There is little novelty in the proposed method/models -- the paper is primarily focused on comparing existing models on a new task. The descriptions of the different architectures compared are overly verbose -- they are all simple standard convnet / RNN architectures. The code specifying the models is also excessive for the main text -- it should be moved to an appendix or even left for a code release. The comparisons between various architectures are not very enlightening as they aren’t done in a controlled way -- there are a large number of differences between any pair of models so it’s hard to tell where the performance differences come from. It’s also difficult to compare the learning curves among the different models (Fig 1) as they are in separate plots with differently scaled axes. The proposed problem is an explicitly adversarial setting and adversarial examples are a well-known issue with deep networks and other models, but this issue is not addressed or analyzed in the paper. (In fact, the intro claims this is an advantage of not using hand-engineered features for malicious domain detection, seemingly ignoring the literature on adversarial examples for deep nets.) For example, in this case an attacker could start with a legitimate domain name and use black box adversarial attacks (or white box attacks, given access to the model weights) to derive a similar domain name that the models proposed here would classify as benign. While this paper addresses an important problem, in its current form the novelty and analysis are limited and the paper has some presentation issues.
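As a point of reference for the kind of architecture being compared, here is a minimal character-level convnet classifier in PyTorch; it is an illustrative baseline of the general family, not one of the specific models from the paper.

```python
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    """Tiny character-level classifier: embed characters, convolve, pool, classify."""
    def __init__(self, n_chars=128, emb=32, filters=64, kernel=3, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb)
        self.conv = nn.Conv1d(emb, filters, kernel, padding=1)
        self.fc = nn.Linear(filters, n_classes)

    def forward(self, x):                     # x: (batch, seq_len) of char ids
        h = self.emb(x).transpose(1, 2)       # (batch, emb, seq_len) for Conv1d
        h = torch.relu(self.conv(h))
        h = h.max(dim=2).values               # global max pooling over positions
        return self.fc(h)                     # benign vs. DGA logits

model = CharCNN()
logits = model(torch.randint(0, 128, (8, 40)))  # 8 domains, padded to 40 chars
print(logits.shape)                             # torch.Size([8, 2])
```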
iclr2018
OpenReview
<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
[ { "content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nCharacter Level Based Detection of DGA Domain Names\nRecently several different deep learning architectures have been proposed that take a string of characters as the raw input signal and automatically derive features for text classification. Little studies are available that compare the effectiveness of these approaches for character based text classification with each other. In this paper we perform such an empirical comparison for the important cybersecurity problem of DGA detection: classifying domain names as either benign vs. produced by malware (i.e., by a Domain Generation Algorithm). Training and evaluating on a dataset with 2M domain names shows that there is surprisingly little difference between various convolutional neural network (CNN) and recurrent neural network (RNN) based architectures in terms of accuracy, prompting a preference for the simpler architectures, since they are faster to train and less prone to overfitting.", "role": "user" }, { "content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes to automatically recognize domain names as malicious or benign by deep networks (convnets and RNNs) trained to directly classify the character sequence as such. Pros The paper addresses an important application of deep networks, comparing the performance of a variety of different types of model architectures. The tested networks seem to perform reasonably well on the task. Cons There is little novelty in the proposed method/models -- the paper is primarily focused on comparing existing models on a new task. The descriptions of the different architectures compared are overly verbose -- they are all simple standard convnet / RNN architectures. The code specifying the models is also excessive for the main text -- it should be moved to an appendix or even left for a code release. 
The comparisons between various architectures are not very enlightening as they aren’t done in a controlled way -- there are a large number of differences between any pair of models so it’s hard to tell where the performance differences come from. It’s also difficult to compare the learning curves among the different models (Fig 1) as they are in separate plots with differently scaled axes. The proposed problem is an explicitly adversarial setting and adversarial examples are a well-known issue with deep networks and other models, but this issue is not addressed or analyzed in the paper. (In fact, the intro claims this is an advantage of not using hand-engineered features for malicious domain detection, seemingly ignoring the literature on adversarial examples for deep nets.) For example, in this case an attacker could start with a legitimate domain name and use black box adversarial attacks (or white box attacks, given access to the model weights) to derive a similar domain name that the models proposed here would classify as benign. While this paper addresses an important problem, in its current form the novelty and analysis are limited and the paper has some presentation issues.", "role": "assistant" } ]
[ " This paper proposes to automatically recognize domain names as malicious or benign by deep networks (convnets and RNNs) trained to directly classify the character sequence as such.", "Pros The paper addresses an important application of deep networks, comparing the performance of a variety of different types of model architectures.", "The tested networks seem to perform reasonably well on the task.", "Cons There is little novelty in the proposed method/models -- the paper is primarily focused on comparing existing models on a new task.", "The descriptions of the different architectures compared are overly verbose -- they are all simple standard convnet / RNN architectures.", "The code specifying the models is also excessive for the main text -- it should be moved to an appendix or even left for a code release.", "The comparisons between various architectures are not very enlightening as they aren’t done in a controlled way -- there are a large number of differences between any pair of models so it’s hard to tell where the performance differences come from.", "It’s also difficult to compare the learning curves among the different models (Fig 1) as they are in separate plots with differently scaled axes.", "The proposed problem is an explicitly adversarial setting and adversarial examples are a well-known issue with deep networks and other models, but this issue is not addressed or analyzed in the paper.", "(In fact, the intro claims this is an advantage of not using hand-engineered features for malicious domain detection, seemingly ignoring the literature on adversarial examples for deep nets.)", "For example, in this case an attacker could start with a legitimate domain name and use black box adversarial attacks (or white box attacks, given access to the model weights) to derive a similar domain name that the models proposed here would classify as benign.", "While this paper addresses an important problem, in its current form the novelty and analysis are limited and the paper has some presentation issues." ]
{ "criticism": 7, "example": 1, "importance_and_relevance": 4, "materials_and_methods": 11, "praise": 3, "presentation_and_reporting": 3, "results_and_discussion": 0, "suggestion_and_solution": 2, "total": 12 }
0.583333
0.083333
0.333333
0.916667
0.25
0.25
0
0.166667
12
1
1
1
1
1
1
0
1
2.583333
2.567551
0.015782